# Perturbing a Neural Network to Infer Effective Connectivity: Evidence from Synthetic EEG Data

Peizhen Yang, Xinke Shen, Zongsheng Li, Zixiang Luo, Kexin Lou, Quanying Liu

Published: 2023-07-19 · [arXiv:2307.09770v1](http://arxiv.org/abs/2307.09770v1)
###### Abstract
Identifying causal relationships among distinct brain areas, known as effective connectivity, holds key insights into the brain's information processing and cognitive functions. Electroencephalogram (EEG) signals exhibit intricate dynamics and inter-areal interactions within the brain. However, methods for characterizing nonlinear causal interactions among multiple brain regions remain relatively underdeveloped. In this study, we proposed a data-driven framework to infer effective connectivity by perturbing trained neural networks. Specifically, we trained neural networks (_i.e._, CNN, vanilla RNN, GRU, LSTM, and Transformer) to predict future EEG signals from historical data and perturbed the networks' input to obtain the effective connectivity (EC) between the perturbed EEG channel and the rest of the channels. The EC reflects the causal impact of perturbing one node on others. The performance was tested on synthetic EEG generated by a biologically plausible Jansen-Rit model. CNN and Transformer obtained the best performance on both 3-channel and 90-channel synthetic EEG data, outperforming the classical Granger causality method. Our work demonstrates the potential of perturbing an artificial neural network, trained to predict future system dynamics, to uncover the underlying causal structure.
## 1 Introduction
The brain is a profoundly intricate, interwoven network characterized by causal influences among its various regions [13, 14, 15]. From brain recordings like functional magnetic resonance imaging (fMRI) or electroencephalogram (EEG), one can easily compute the correlation of signal dynamics across different regions, which is typically referred to as functional connectivity (FC) [16, 17, 18]. However, FC cannot characterize the _directionality_ and the _sign_ of connectivity [1]. Instead, effective connectivity (EC) can reflect the underlying directional causal influence from one brain region to another with the strength, directionality, and sign [19, 10]. It characterizes the information flow across brain regions and is critical for understanding how information is integrated or segregated during the cognitive process. Therefore, developing methods to reliably estimate EC from the recorded neural data stands as a significant endeavor in the field of neuroscience [11].
A number of computational methods have been proposed to investigate effective connectivity among multiple brain regions based on time series of neural signals, including Granger causality (GC) [19] and dynamic causal modeling (DCM) [15]. However, these methods are limited in their capacity to accurately capture complex, nonlinear interactions. For example, Granger causality operates on the premise that if the past activities of brain region A can predict the activities of another region B, there should be a causal interaction from region A to B. Granger causality typically relies on the assumption of linear dependency and is sensitive to noise [15]. On the other hand, DCM employs a Bayesian
Figure 1: **Framework of perturbation-based EC inference on EEG data.****a**, Training artificial neural networks (ANNs) for EEG signal prediction. ANNs are trained to predict EEG signals of the following time steps from multiple previous steps. **b**, Perturbing the trained ANNs to infer EC. The trained ANNs are used as surrogate brains. EC is estimated by sequentially perturbing each region of the surrogate brain and measuring the stimulation-induced responses.
framework to identify nonlinear input-state-output systems from observed data [14, 15]. Despite its popularity in neuroscience, DCM is limited by its reliance on a pre-designed system dynamic model and by its computational demand as the number of nodes increases [1, 1].
Rather than being inferred with computational methods, effective connectivity can be directly mapped with in-vivo experiments by applying electrical or optical stimulation to a specific brain region and then evaluating the resultant impact on other regions [13]. Although such perturbation-based experimental approaches are straightforward, they are typically infeasible in human subjects due to ethical reasons and technical limitations. To address this challenge, we developed a data-driven framework that leverages a surrogate brain model to capture brain dynamics. Within this framework, we apply in-silico perturbations to various brain regions and observe the subsequent influence on other regions. This conceptually simple approach allows us to investigate effective connectivity by virtual perturbations.
Here, we employed artificial neural networks (ANNs) to characterize the nonlinear interactions among brain regions. By training a neural network to predict the future dynamics of neural signals, we create a surrogate brain that can be perturbed to uncover causal interactions (Fig. 1). This approach enables us to leverage state-of-the-art ANN architectures in time series forecasting and comprehensively investigate the optimal characterization of intricate nonlinear relationships among brain regions.
In our experiment, we used synthetic EEG data generated by a biologically plausible Jansen-Rit (J-R) model [1], which can capture the fast-changing nonlinear dynamics of EEG. As the ground-truth effective connectivity is unknown in real EEG data, we developed a testbed for the framework using synthetic data. The ground-truth effective connectivity can be obtained from the synthetic data by perturbing the hidden variables during data generation. We generated two datasets: 1) a simple J-R synthesized dataset with 3 regions and pre-defined connections as a proof-of-concept; 2) a J-R synthesized dataset with 90 regions, in which the connections were real structural connectivity measured from diffusion tensor imaging (DTI).
The contributions of this paper are two-fold:
* We presented a testbed for verifying the data-driven EC inference framework on fast-changing synthetic EEG data with known real EC. Various neural network models were tested in this testbed.
* We validated the effectiveness of the neural perturbational inference framework in comparison to classical EC estimation methods. The results underline the importance of selecting a proper model to serve as a surrogate brain.
## 2 Problem statement
The primary aim of this work is to estimate the causal influence of one brain region on others. To achieve this end, we implement virtual perturbation on the trained neural networks that can predict the dynamics of neural signals. The prediction model can be represented as
\[\hat{\mathbf{x}}_{t+1:t+T^{\prime}}=f(\mathbf{x}_{t-T:t},\theta), \tag{1}\]
which means predicting neural activities \(\hat{\mathbf{x}}_{t+1:t+T^{\prime}}\) with a nonlinear model \(f\) based on previous activities \(\mathbf{x}_{t-T:t}\). After the model is trained, we perturb one region at a time and see the changes in the predicted signals of other regions, which can be formulated as
\[\delta_{A\to B}(t+t^{\prime})=\mathbf{E}[(B_{t+t^{\prime}}\mid A_{t}+ \Delta)-(B_{t+t^{\prime}}\mid A_{t})],\\ t^{\prime}=1,2,...,T^{\prime} \tag{2}\]
Here, we add a perturbation \(\Delta\) to region \(A\) at time \(t\) and observe the expected changes in region \(B\) at times \(t+1\) to \(t+T^{\prime}\). We perturb every region in a loop and thereby obtain the causal influence between any pair of regions.
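As a minimal sketch of Eqs. (1)-(2), the following Python illustrates how a trained forecaster could be perturbed to estimate EC; the `model` interface, array shapes, and perturbation placement are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def infer_ec(model, x_hist, region, delta=0.1):
    """Sketch of Eq. (2): perturb region A at time t and average the change
    in the predicted responses of all regions over samples.

    model  : callable mapping (n_samples, T, n_regions) histories to
             (n_samples, T_prime, n_regions) forecasts (hypothetical interface)
    x_hist : batch of unperturbed input windows
    region : index of the perturbed region A
    """
    x_pert = x_hist.copy()
    x_pert[:, -1, region] += delta         # A_t + Delta at the last input step
    y_base = model(x_hist)                 # B_{t+t'} | A_t
    y_pert = model(x_pert)                 # B_{t+t'} | A_t + Delta
    return (y_pert - y_base).mean(axis=0)  # (T_prime, n_regions) EC profile

# Looping `region` over all regions yields the causal influence between
# any pair of regions.
```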
## 3 Related Work
### Classical EC estimation methods
The classical computational methods for estimating effective connectivity from neural data can be broadly categorized based on two aspects: i) linearity/nonlinearity and ii) bivariate/multivariate analysis. For instance, Granger causality, as a linear bivariate method, usually focuses on the linear interactions between two brain regions [14]. Transfer entropy, as a nonlinear bivariate method, quantifies the directed transfer of information between two regions [23]. On the other hand, multivariate models take into account the interactions among multiple regions concurrently. These multivariate models have the advantage of mitigating spurious connections that may arise in bivariate models. For example, partially directed coherence [1] and the directed transfer function [17] derive causal interactions from the Fourier transform of multivariate autoregressive parameters. However, the capability of these methods to simultaneously identify nonlinear multivariate interactions remains underdeveloped. Recently, deep neural networks have shown great expressive power for multivariate time-series prediction [23, 1, 10]. However, whether this expressive power transfers to accurately capturing the complex, nonlinear interactions in multivariate data requires further investigation.
### Neural perturbational inference
The data-driven framework of Neural Perturbational Inference (NPI) was proposed by Luo et al. [11]. NPI uses an artificial neural network (ANN) that learns neural dynamics as a surrogate brain. Perturbing the surrogate brain (_i.e._, the trained ANN), region by region, and simultaneously observing the evoked neural response at all unperturbed regions provides the whole-brain effective connectivity. The ANN in NPI is instantiated with a four-layer perceptron and is trained using a one-step-ahead prediction error, where the next state of fMRI is predicted given the current state. After ANN is trained, each region of ANN is sequentially perturbed, realized as a small increase or decrease of
neural signal in the perturbed region. The EC is computed by the difference between the one-step neural responses with and without perturbation.
The NPI framework has demonstrated its ability to infer EC from fMRI signals [14]. Owing to the long timescale and slow dynamics of fMRI signals, the current state contains most of the information needed to effectively forecast the next state [15]. Therefore, the ANN in NPI is simply realized as a multi-layer perceptron trained using the one-step-ahead prediction error [14]. However, this one-step prediction approach may fall short in capturing EEG dynamics, which are highly nonlinear and complex, with rich information for predicting the next EEG signal contained in many previous steps. Therefore, NPI developed for fMRI data cannot be directly applied to EEG. Here, we extended the original NPI framework in two ways: i) the ANN in the NPI framework is replaced with state-of-the-art time-series prediction models (_e.g._, RNN, GRU, LSTM, CNN, and Transformer) to predict EEG dynamics; ii) the EC is estimated with the multi-step response after perturbation, rather than the one-step transient response.
## 4 Methods
### The framework
The framework of our method is shown in Fig. 1. To learn the system dynamics from EEG signals, we trained a time series forecasting model to predict the subsequent signals using the previous \(n\) steps of EEG data (Fig. 1(a)). The virtual perturbation was applied to a region at the \(76^{th}\) step (Fig. 1(b), left), and the responses across all brain regions at the future steps 77 to 99 were predicted by the trained models (Fig. 1(b), right). The estimated EC was calculated as the difference between the expected signals with and without the perturbation. To obtain the whole-brain causal connection between any two regions, we individually applied an impulse perturbation (magnitude 0.1) to each region.
### The time series forecasting models
Following the NPI framework, EC inference relies largely on the time series forecasting model. In this work, we implemented five artificial neural network models (CNN, vanilla RNN, LSTM, GRU, and Transformer) as the EEG forecasting model. The models are detailed in the Appendix.
### Synthetic EEG data
We used the biologically plausible Jansen-Rit model, as shown in Eq.(3), to generate EEG data. The Jansen-Rit model is a mathematical model used to simulate the macroscopic electrical behavior observed in EEG signals. The simulated data mimic the real EEG data in nonlinear dynamics and complex inter-regional interactions. For each brain region, it assumes three populations of neurons: pyramidal neurons, excitatory interneurons, and inhibitory interneurons. Pyramidal neurons have projections to the other two populations. Excitatory and inhibitory interneurons project back to pyramidal neurons. The pyramidal neurons also have long-range excitatory projections to other brain regions. The dynamics of each region are represented as follows:
\[\begin{split}\dot{x}_{0,i}(t)&=y_{0,i}(t)\\ \dot{y}_{0,i}(t)&=Aa\left[S\left(C_{2}x_{1,i}(t)-C_{4}x_{2,i}(t)+C\alpha z_{i}(t),r_{0}\right)\right]\\ &\quad-2ay_{0,i}(t)-a^{2}x_{0,i}(t)\\ \dot{x}_{1,i}(t)&=y_{1,i}(t)\\ \dot{y}_{1,i}(t)&=Aa\left[p(t)+S\left(C_{1}x_{0,i}(t)-C\beta x_{2,i}(t),r_{1}\right)\right]\\ &\quad-2ay_{1,i}(t)-a^{2}x_{1,i}(t)\\ \dot{x}_{2,i}(t)&=y_{2,i}(t)\\ \dot{y}_{2,i}(t)&=Bb\left[S\left(C_{3}x_{0,i}(t),r_{2}\right)\right]-2by_{2,i}(t)-b^{2}x_{2,i}(t)\\ \dot{x}_{3,i}(t)&=y_{3,i}(t)\\ \dot{y}_{3,i}(t)&=A\bar{a}\left[S\left(C_{2}x_{1,i}(t)-C_{4}x_{2,i}(t)+C\alpha z_{i}(t),r_{0}\right)\right]\\ &\quad-2\bar{a}y_{3,i}(t)-\bar{a}^{2}x_{3,i}(t)\end{split} \tag{3}\]
where \(x_{0}\), \(x_{1}\) and \(x_{2}\) represent the output of the pyramidal neurons, excitatory interneurons, and inhibitory interneurons, respectively. \(x_{3}\) represents the long-range output of the pyramidal neurons to other regions. \(S\) is a sigmoid function:
\[S(v,r)=\frac{\zeta_{\max}}{1+e^{r(\theta-v)}} \tag{4}\]
\(z_{i}\) is the overall input from other regions to region \(i\):
\[z_{i}(t)=\sum_{j=1,j\neq i}^{n}\widetilde{M_{ij}}x_{3,j}(t) \tag{5}\]
where \(\widetilde{M_{ij}}\) is the normalized structural connectivity matrix:
\[\widetilde{M_{ij}}=\frac{M_{ij}}{\sum_{j^{\prime}=1,j^{\prime}\neq i}^{n}M_{ij^{\prime}}} \tag{6}\]
\(M_{ij}\) represents the underlying structural connectivity from region \(j\) to region \(i\). The EEG-like signal is calculated as:
\[v_{i}(t)=C_{2}x_{1,i}(t)-C_{4}x_{2,i}(t)+C\alpha z_{i}(t) \tag{7}\]
which represents the postsynaptic potentials of pyramidal neurons in region \(i\).
We generated two versions of synthetic data: 1) A toy example with 3 nodes. The structural connectivity among the 3 nodes was set manually, with node 0 exerting directed connections to node 1 and node 2 (Fig. 2a). 2) A whole-brain model with 90 nodes. Connectivity among the nodes was determined by real structural connectivity measured from DTI. The brain was parcellated into 90 regions with Anatomical Automatic Labeling (AAL) atlas. The structural connectivity was calculated from the average of 245 subjects in the Human Connectome Project ([https://www.humanconnectome.org/study/hcp-young-adult/document/1200-subjects-data-release](https://www.humanconnectome.org/study/hcp-young-adult/document/1200-subjects-data-release)) (Fig. 3(a,b)).
**Hyperparameter settings of Jansen-Rit model.** In this study, the excitatory gain \(\alpha\) and the inhibitory gain \(\beta\) were set to 0.71 and 0.4, respectively, to generate reasonable power spectra and functional connectivity properties in the synthetic data. All other hyperparameters were set the same as in [Coronel-Oliveros _et al._, 2021]. The neural dynamics described in Eq. (3) were discretized by the forward Euler method and evolved with a time step of 0.001 seconds. The generated EEG signals were then downsampled to 100 Hz, giving a time interval of 0.01 seconds between two consecutive time steps.
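For concreteness, the following is a minimal simulation sketch of the forward-Euler integration of Eq. (3). Only \(\alpha=0.71\), \(\beta=0.4\), the 1 ms step, and the 100 Hz downsampling come from the text above; all other parameter values and the stochastic input \(p(t)\) are illustrative placeholders rather than the paper's settings.

```python
import numpy as np

def S(v, r, zeta_max=5.0, theta=6.0):
    # Sigmoid of Eq. (4); zeta_max and theta are illustrative, not the paper's.
    return zeta_max / (1.0 + np.exp(r * (theta - v)))

def simulate_jansen_rit(M, T, dt=1e-3, alpha=0.71, beta=0.4, seed=0):
    """Forward-Euler integration of Eq. (3). M[i, j] is the structural
    connectivity from region j to region i; all constants below except
    alpha, beta, and dt are placeholders."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    A, a, B, b, abar = 3.25, 100.0, 22.0, 50.0, 100.0  # placeholder gains/rates
    C, C1, C2, C3, C4 = 135.0, 1.0, 0.8, 0.25, 0.25    # placeholder couplings
    r0 = r1 = r2 = 0.56
    Mn = M / (M.sum(axis=1, keepdims=True) + 1e-12)    # Eq. (6) normalization
    x = np.zeros((4, n)); y = np.zeros((4, n))
    v = np.empty((T, n))
    for t in range(T):
        p = rng.normal(120.0, 30.0, n)                 # stochastic drive p(t)
        z = Mn @ x[3]                                  # Eq. (5) long-range input
        u = C2 * x[1] - C4 * x[2] + C * alpha * z      # pyramidal input, Eq. (7)
        dy0 = A * a * S(u, r0) - 2 * a * y[0] - a**2 * x[0]
        dy1 = A * a * (p + S(C1 * x[0] - C * beta * x[2], r1)) \
              - 2 * a * y[1] - a**2 * x[1]
        dy2 = B * b * S(C3 * x[0], r2) - 2 * b * y[2] - b**2 * x[2]
        dy3 = A * abar * S(u, r0) - 2 * abar * y[3] - abar**2 * x[3]
        x += dt * y                                    # forward-Euler step
        y += dt * np.stack([dy0, dy1, dy2, dy3])
        v[t] = u                                       # EEG-like signal, Eq. (7)
    return v[::10]                                     # 1 kHz -> 100 Hz
```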
### Implementation details of ANN models
We implemented five ANN models, including a temporal CNN, vanilla RNN, LSTM, GRU, and Transformer. Each ANN model consists of two hidden layers, and we tested variants with 8, 32, 128, and 512 hidden units so that we could examine the impact of model complexity on the performance of data prediction and EC inference. A linear readout layer was employed to predict future EEG dynamics. We applied the Adam optimizer and the ReduceLROnPlateau scheduler during training. The initial learning rate was \(10^{-4}\) and the batch size was 30. The number of training epochs was determined according to the minimum validation loss of each ANN model.
**Training and testing datasets.** For training and validating the forecasting model, an EEG signal with 900,000 time points was generated. The first 70% of the generated time series was used for training and the remaining 30% for validation. Each training or validation sample contains signals of 100 time points (i.e., 1 second), and adjacent samples do not overlap. The model needs to predict the following 24-step signals based on the previous 76 time steps. We report the validation mean squared error as the model's prediction performance.
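A minimal sketch of this windowing and training configuration in PyTorch is shown below; the stand-in data and placeholder model are assumptions for illustration, while the 70/30 split, 76/24-step windows, learning rate, batch size, and scheduler follow the description above.

```python
import numpy as np
import torch

def make_samples(eeg, win=100, n_in=76):
    """Cut a (time, channels) series into non-overlapping 100-step samples:
    the first 76 steps are the input, the last 24 are the prediction target."""
    n_win = eeg.shape[0] // win
    w = eeg[: n_win * win].reshape(n_win, win, -1)
    X = torch.tensor(w[:, :n_in], dtype=torch.float32)
    Y = torch.tensor(w[:, n_in:], dtype=torch.float32)
    return X, Y

eeg = np.random.randn(900_000, 3)            # stand-in for the synthetic EEG
split = int(0.7 * len(eeg))                  # first 70% train, last 30% validation
Xtr, Ytr = make_samples(eeg[:split])
Xva, Yva = make_samples(eeg[split:])
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(Xtr, Ytr), batch_size=30, shuffle=True)

model = torch.nn.GRU(3, 32, batch_first=True)            # placeholder forecaster
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt)  # step with val. MSE
```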
### Virtual perturbation of ANN models
For model perturbation, we generated another time series of 100,000 time points, which formed 1,000 samples. During the generation of synthetic data, we added a perturbation with a value of 0.1 to the excitatory interneurons \(x_{1}\) at the \(76^{th}\) step of each sample and recorded the changes in the following steps. The perturbation was applied to one region at a time and propagates to the other regions. The real EC was calculated as the average difference between the subsequently generated data with and without the perturbation. For the trained ANNs, we input the perturbed data (time steps 1-76 of each sample) and obtained the predicted signals of the following time steps. The estimated EC was calculated from the average difference between the predicted data with and without the perturbation in the input.
## 5 Results
### Results on 3-channel synthetic EEG
We first examined the model performance for EC estimation with 3-channel synthetic EEG data. The model performance was evaluated based on two metrics: time series prediction error and correlation between the real EC and the predicted EC on testing data, as shown in Table 1. The time series prediction error is calculated as the Mean Square Error (MSE) between the real signal and the predicted signal of the last 24 time steps in each sample. EC correlation is defined as the Pearson correlation coefficient between the real EC and the
| ANN model | Hidden units | Prediction error ↓ | EC correlation ↑ |
|---|---|---|---|
| CNN | 8 | 5.4477 | 0.7442 |
| CNN | 32 | 5.4032 | 0.7274 |
| CNN | 128 | **5.3920** | **0.8785** |
| CNN | 512 | 5.4202 | 0.7451 |
| RNN | 8 | **5.3026** | -0.1080 |
| RNN | 32 | 5.3057 | -0.0890 |
| RNN | 128 | 5.3047 | **0.2636** |
| RNN | 512 | 5.3514 | 0.1820 |
| LSTM | 8 | 5.6079 | 0.2593 |
| LSTM | 32 | **5.4021** | 0.5353 |
| LSTM | 128 | 5.5618 | 0.2984 |
| LSTM | 512 | 5.6119 | **0.6430** |
| GRU | 8 | 5.5250 | 0.3675 |
| GRU | 32 | **5.3751** | 0.6186 |
| GRU | 128 | 5.3816 | **0.6436** |
| GRU | 512 | 5.5334 | 0.4198 |
| Transformer | 8 | 5.4221 | 0.7680 |
| Transformer | 32 | 5.4861 | 0.6780 |
| Transformer | 128 | **5.3803** | **0.8110** |
| Transformer | 512 | 5.4132 | 0.8022 |

Table 1: **Related statistics of 3-channel EEG data prediction.** Five ANN models, including CNN, RNN, GRU, LSTM, and Transformer, each with 8, 32, 128, and 512 hidden units, were trained to predict EEG signals. The prediction error and the correlation between the real EC and the NPI-EC were calculated on the test data. Bold marks the best value within each model.
Figure 2: **Data and results visualization of the 3-channel synthetic EEG.****a**, The setting for the 3-channel structural connectivity matrix. **b**, An example of 3-channel EEG data. **c**, The time course of the original EEG data (green lines) and the perturbed EEG data (red lines). **d**, The comparison among the real EC (left), the EC inferred by perturbing a CNN model (middle), and the EC inferred by Granger causality (right). All EC matrices were re-scaled to the range 0-1 for visualization.
NPI-EC, without considering the matrix diagonals. We report the performance of the CNN, RNN, LSTM, GRU, and Transformer models with different hidden dimensions. The CNN model with 128 hidden units achieved the best EC correlation of 0.8785. The Transformer model followed with an EC correlation of 0.8110. For these two models, the hidden dimension with a higher EC correlation also accompanies a lower time series prediction error. LSTM and GRU obtained inferior EC correlations of 0.6430 and 0.6436, respectively. Vanilla RNN obtained the worst EC correlation of 0.2636. The results indicate that recurrent neural networks are worse than CNN and Transformer at recovering the underlying causal interactions from the synthetic EEG data, although they all achieved comparable time series prediction errors.
We visualized the real EC and the EC inferred from the CNN at time step 3 (30 ms) after perturbation (Fig. 2d, left and middle columns). The inferred EC faithfully recovers the causal interaction from node 0 to nodes 1 and 2. We also visualized the EC estimated by Granger causality (Fig. 2d, right column). We used multivariate GC to calculate the direct connection between two channels and chose a maximum lag of 12 based on the minimum Bayesian information criterion (BIC). There are small false-positive connections between node 1 and node 2 in the EC estimated by GC, which are better suppressed in the NPI-EC.
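As a rough illustration of this baseline, the snippet below fits a VAR model with the lag order selected by BIC and runs pairwise Granger tests using `statsmodels`; apart from the maximum lag of 12 and BIC selection mentioned above, the details (random stand-in data, F-statistic as connection strength) are assumptions, not the paper's exact procedure.

```python
import numpy as np
from statsmodels.tsa.api import VAR

eeg = np.random.randn(10_000, 3)           # stand-in 3-channel series
res = VAR(eeg).fit(maxlags=12, ic='bic')   # lag order chosen by minimum BIC

gc = np.zeros((3, 3))                      # gc[src, dst]: src -> dst strength
for src in range(3):
    for dst in range(3):
        if src != dst:
            t = res.test_causality(caused=dst, causing=src, kind='f')
            gc[src, dst] = t.test_statistic  # F-statistic as GC strength
```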
### Results on whole-brain synthetic EEG
For the whole-brain synthetic EEG with 90 regions, CNN obtained the highest correlation between the real EC and the NPI-EC (\(R=0.3340\)), compared with Transformer (\(R=0.3245\)), LSTM (\(R=0.2055\)), GRU (\(R=-0.0096\)) and RNN (\(R=-0.0006\)). This trend is similar to that of the 3-channel synthetic data. GRU and RNN estimated the causality incorrectly. GRU's failure may be due to its simpler gating mechanism compared with LSTM.
We visualized the real EC, the EC inferred by CNN, and the EC inferred by Granger causality at time step 16 (160 ms) after perturbation (Fig. 3c, from left to right). The inferred EC recovers the real EC faithfully, with a high EC correlation for this time step (\(R=0.7081\)). In contrast, the Granger-causality-inferred EC is much worse (\(R=0.3136\)).
To exhibit the effect of perturbing one region on the others, we show the spatial distribution of signal changes resulting from perturbing a specific seed region in the J-R model (i.e., real EC) in Fig. 3d, as well as the NPI-EC of CNN model in Fig. 3e. The NPI-EC recovers the general distribution of real EC. To show the temporal evolution of the signals after perturbation, we also compare the real event-related potentials (ERP) and the predicted ERP under perturbation in Fig. 3f. After the perturbation was given at time point 0, the predicted ERP and the real ERP show a similar trend of change, although their exact values were different.
## 6 Discussion
In this study, we presented a testbed for perturbation-based EC estimation methods with synthetic EEG data. Our results validated that by perturbing specific types of ANN prediction models (i.e., CNN and Transformer), we can estimate the underlying causal interactions among different nodes effectively.
ANN models have been widely used in time series forecasting and achieved SOTA performance. However, it is
Figure 3: **Data and results visualization of 90-channel synthetic EEG.****a**, The real structural connectivity measured from diffusion tensor imaging (mapped on 90 brain regions). **b**, The structural connectivity matrix. **c**, The comparison among the real EC (left), the EC inferred by perturbing the CNN model (middle), and EC inferred by Granger causality (right). All the EC matrices were re-scaled to the range 0-1 for visualization. **d**, Spatial distribution of the real EC (i.e., neural responses) by perturbing left amygdala (left), left rectus (middle), and left cuneus (right). **e**, Spatial distribution of the NPI-EC by perturbing the three regions same as in **d**. The perturbed region is indicated with an arrow in each panel. **f**, Sample event-related potential (ERP) of the stimulus-evoked neural responses. ERP is the average response to \(1000\) times perturbation. Perturbation is given at time point 0. We show the predicted and real response of the inferior frontal gyrus to heschl gyrus perturbation (left) and the response of the inferior parietal gyrus to the posterior cingulate gyrus perturbation (right).
unclear whether the models can reveal the causal interactions among different variates. Our experiments showed that specific types of models can capture the underlying causal interactions of synthetic EEG data. CNN and Transformer achieved higher performance here, probably because they can capture the oscillation characteristics in synthetic EEG data [18, 20].
In future studies, several important questions remain to be investigated. Firstly, what are the effects of different types of perturbation? Some perturbations may push the signals outside the manifold of natural signals, while others may not [2]. Specific forms of perturbation may resemble those in real brain stimulation. It is critical to investigate the effects of these different types of perturbation on EC estimation. Secondly, a clear explanation of why CNN and Transformer work better on EC estimation than RNN-family models is still lacking. Do they also work well on real EEG data? How should one choose the proper model for different types of data? These questions need to be investigated further in the future.
## Acknowledgements
We thank Mr. Zhichao Liang for sharing code, and Mr. Song Wang and Mr. Kaining Peng for useful discussions. This work was funded in part by the Shenzhen Science and Technology Innovation Committee (2022410129, 20200925155957004, KCXFZ2020122117340001, JCYJ20220818100213029, SGDX2020110309280100) and the Guangdong Provincial Key Laboratory of Advanced Biomaterials (2022B1212010003).
## Appendix
**Convolutional Neural Network (CNN).** The Convolutional Neural Network is a feedforward, multilayered hierarchical network and a widely used ANN model. It uses a combination of convolutional layers, nonlinear processing units, and subsampling layers to automatically extract features from raw 2-dimensional data such as image pixels for improved categorization. It can be applied to 1-dimensional time series data via temporal convolution.
**Vanilla RNN.** Vanilla RNN was used as an example of a simple nonlinear forecasting model. The iteration of the hidden state is represented as
\[h_{t}=\tanh(x_{t}W_{ih}^{T}+b_{ih}+h_{t-1}W_{hh}^{T}+b_{hh}), \tag{8}\]
where \(x_{t}\) is the input and \(h_{t}\) is the hidden state.
**Long Short-Term Memory (LSTM).** LSTM adds gate design to vanilla RNN to capture long-term dependencies in the time series. The detailed computation in an LSTM unit is shown below:
\[\begin{split} i_{t}&=\sigma\left(W_{ii}x_{t}+b_{ii} +W_{hi}h_{t-1}+b_{hi}\right)\\ f_{t}&=\sigma\left(W_{if}x_{t}+b_{if}+W_{hf}h_{t-1} +b_{hf}\right)\\ g_{t}&=\tanh\left(W_{ig}x_{t}+b_{ig}+W_{hg}h_{t-1} +b_{hg}\right)\\ o_{t}&=\sigma\left(W_{io}x_{t}+b_{io}+W_{ho}h_{t-1} +b_{ho}\right)\\ c_{t}&=f_{t}\odot c_{t-1}+i_{t}\odot g_{t}\\ h_{t}&=o_{t}\odot\tanh\left(c_{t}\right)\end{split} \tag{9}\]
where \(x_{t}\) is the input and \(h_{t}\) is the hidden state. \(i_{t}\), \(f_{t}\), and \(o_{t}\) represent the output of the input gate, the forget gate, and the output gate, respectively. \(g_{t}\) is the candidate cell state and \(c_{t}\) is the cell state.
**Gated Recurrent Unit (GRU).** GRU simplified the gate design in LSTM to improve the computational efficiency:
\[\begin{split} r_{t}&=\sigma\left(W_{ir}x_{t}+b_{ir} +W_{hr}h_{(t-1)}+b_{hr}\right)\\ z_{t}&=\sigma\left(W_{iz}x_{t}+b_{iz}+W_{hz}h_{(t-1)} +b_{hz}\right)\\ n_{t}&=\tanh\left(W_{in}x_{t}+b_{in}+r_{t}*\left(W_ {hn}h_{(t-1)}+b_{hn}\right)\right)\\ h_{t}&=(1-z_{t})*n_{t}+z_{t}*h_{(t-1)}\end{split} \tag{10}\]
where \(x_{t}\) is the input and \(h_{t}\) is the hidden state. \(r_{t}\) and \(z_{t}\) represent the output of the reset gate and the update gate, respectively. \(n_{t}\) is the candidate hidden state.
**Transformer.** The Transformer model employs a self-attention mechanism. By encoding the input time series into a set of vectors and applying self-attention across all time steps, the Transformer model captures both local and global dependencies, enabling accurate predictions. The attention weights are obtained through a softmax function applied to the scaled dot product of the query and key embeddings, and are then used to weight the values. The attention function is computed on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V. The output is computed as
\[\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}} \right)V, \tag{11}\]
where \(d_{k}\) is the dimension of queries and keys. By iteratively updating the hidden states through the self-attention layers, the Transformer model learns to capture complex temporal dependencies, facilitating accurate prediction of future values in the time series.
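A minimal PyTorch sketch of Eq. (11) is given below; the tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Eq. (11): softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ V

x = torch.randn(30, 76, 64)                   # (batch, time steps, d_model)
out = scaled_dot_product_attention(x, x, x)   # self-attention: Q = K = V = x
```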
**Granger causality.** The Granger causality baseline can be expressed as a vector autoregressive (VAR) model,

\[\mathbf{y}_{t}=\mathbf{c}+\sum_{i=1}^{p}\mathbf{A}_{i}\mathbf{y}_{t-i}+\mathbf{e}_{t},\]

where \(\mathbf{y}_{t}\) is a vector of observed variables at time \(t\), \(\mathbf{c}\) is a constant vector, \(\mathbf{A}_{i}\) is the coefficient matrix associated with the \(i\)-th lag, \(\mathbf{y}_{t-i}\) represents the vector of lagged variables, and \(\mathbf{e}_{t}\) is a vector of error terms assumed to be white noise.
---

# Predicting multiple sclerosis disease severity with multimodal deep neural networks

Kai Zhang, John A. Lincoln, Xiaoqian Jiang, Elmer V. Bernstam, Shayan Shams

Published: 2023-04-08 · [arXiv:2304.04062v1](http://arxiv.org/abs/2304.04062v1)
###### Abstract
Multiple Sclerosis (MS) is a chronic disease of the human brain and spinal cord that can cause permanent damage or deterioration of the nerves. The severity of MS is monitored by the Expanded Disability Status Scale (EDSS), composed of several functional sub-scores. Early and accurate classification of MS disease severity is critical for slowing down or preventing disease progression via early therapeutic intervention strategies. Recent advances in deep learning and the wide use of Electronic Health Records (EHR) create opportunities to apply data-driven predictive modeling tools for this goal. Previous studies focusing on single-modal machine learning and deep learning algorithms were limited in prediction accuracy due to data insufficiency or model simplicity. In this paper, we proposed using patients' multimodal longitudinal EHR data to predict multiple sclerosis disease severity at the hospital visit. This work has two important contributions. First, we describe a pilot effort to leverage structured EHR data, neuroimaging data, and clinical notes to build a multimodal deep learning framework to predict a patient's MS disease severity. The proposed pipeline demonstrates up to a 25% increase in the Area Under the Receiver Operating Characteristic curve (AUROC) compared to models using single-modal data. Second, the study also provides insights regarding the amount of useful signal embedded in each data modality with respect to MS disease prediction, which may improve data collection processes.
Database; Deep neural network; Multiple sclerosis; Expanded disability status scale
## Background
Recent advances in deep learning have shown success in various areas of healthcare, such as automatic volume segmentation and classification of brain Magnetic Resonance Imaging (MRI) [1], clinical text mining and disease prediction [2], risk prediction [3], etc. The fast growth of Electronic Health Records (EHR) in healthcare provides many opportunities for both the data mining and deep learning communities to explore the rich information embedded in different data modalities and to tap the potential of using this information for predictive modeling, benefiting effective healthcare delivery and better-quality care for patients.
Multiple sclerosis (MS) is a potentially disabling disease that affects the human brain and spinal cord. A 10-year cumulative estimate of MS prevalence by the year 2010 shows there are over 700,000 adult MS cases in the United States [4]. Recent advances in MS research found that patients who died from MS suffered up to 39% neuron count loss compared to individuals without MS [5]. The
human brain has mechanisms for self-repair and regenerative potential that could repair brain plaques [6]; however, such ability is very limited. Therefore, prompt action to prevent or slow down brain damage is critical to MS disease treatment [7]. Effective treatment relies on a correct grading of MS severity, and scoring systems are widely used to achieve this goal. The Expanded Disability Status Scale (EDSS) score [8] is a widely used ordinal scoring system employed by healthcare providers to monitor clinical disability in MS. It is composed of diverse functional systems, including pyramidal functions (muscle strength, tone, and reflexes), cerebellar functions (coordination and balance), brainstem functions (eye movements, speech, and swallowing), sensory functions (light touch, pain, and vibratory sense), bowel and bladder functions, visual functions, cerebral functions (cognition), and ambulation. Based on the EDSS, Roxburgh et al. proposed a Multiple Sclerosis Severity Score (MSSS) which can be used to determine MS disease progression using single-assessment data (when a patient has only one assessment during the disease course) [9].
Several milestones of the EDSS score have been commonly used to define different stages of the MS disease course. EDSS 4 (significant disability but able to walk without aid or rest for 500 m), EDSS 6 (requires unilateral assistance to walk about 100 m with or without resting), and EDSS 7 (ability to walk no more than 10 m without rest while leaning against a wall or holding onto furniture for support) are commonly used milestones for studying MS disease severity. For example, Confavreux et al. used the above milestones to study the effect of relapses on the progression of irreversible disability [10]. The same milestones have also been used to study the contribution of relapses to worsening disability and to evaluate the effect of MS therapies on delaying disability accumulation [11]. A Swedish research group studied whether the risk of reaching the above disability milestones in MS has changed over the last decade [12]. Rzepinski et al. used the EDSS milestones to explore early clinical features of MS and how they affect patients' long-term disability progression [13]. The same milestones were also used to study how these factors affect the time of transition from relapsing-remitting MS (RRMS) to secondary progressive MS (SPMS).
A patient's EDSS score needs to be evaluated by a well-trained specialist to ensure that the assessment is correctly performed, which limits the application of the EDSS to clinics with MS specialties. Several research studies have attempted to address this problem using machine learning or deep learning models. In particular, Pinto et al. proposed using machine learning models to predict MS progression based on the clinical characteristics of the first five years of the disease [14]. Zhao et al. used a support vector machine (SVM) classifier and demographic, clinical, and MRI data obtained at years one and two to predict patients' EDSS at five-year follow-ups [15]. Sacca et al. explored different machine learning models (Random Forest, Support Vector Machine, Naive Bayes, K-nearest-neighbor, and Artificial Neural Network) and used features extracted from functional MRI to perform MS disease severity classification [16]. Narayana et al. proposed using the VGG-16 convolutional neural network (CNN) to predict enhancing lesions in MS patients using non-contrast MRIs [17]. D'Costa et al. proposed a transformer model named MS-BERT to predict the EDSS score from a patient's neurological consult note [2]. Ciotti proposed a clinical instrument to retrospectively capture levels of EDSS, and the algorithm achieved a Kappa score of 0.80 between the captured EDSS and the real EDSS [18].
Chase et al. also used neurological consult notes but with simpler models (a Naive Bayes classification model) and features (word frequency) [19]. Dekker et al. used multiple linear regression models on patients' brain lesion volumes and their variation over the years to predict physical disability [20]. The aforementioned studies explored the idea of using machine learning and deep learning methods on various modalities of EHR datasets to predict a patient's EDSS at the current hospital visit or in the near future. However, these works explored only a limited amount of patient information (either clinical notes, basic lesion volume information extracted from MRI, or patient clinical characteristics), adopting off-the-shelf machine learning models or deep learning models that were developed for general tasks and were not customized to capture the complex nature of this prediction problem. Based on the above studies and recent research advances in multimodal deep learning, it is reasonable to assume that multimodal deep learning methods could integrate the fragmented information from each modality and bring more accurate predictions for MS disease. Therefore, this study tried to answer the question of whether we can harmonize all the available EHR data modalities collected from patient clinic visits and use longitudinal data to perform more accurate MS severity prediction. Several research findings suggest that MRI data and some laboratory tests contain useful information about MS disease severity. For example, studies have shown that the thickness of cortical and deep grey matter has a high correlation with MS disease severity, suggesting that MRI images are an informative data source for predicting MS severity [21, 22]. Some laboratory tests were also documented as playing an important role in this regard, such as cerebrospinal fluid (CSF) [23, 24] and serum neurofilament light chain (NfL) [25].
This study tried to answer the above question using a data-driven approach. We explored the idea of using patients' MRI images, clinical notes, and structured EHR data (including laboratory tests, vital sign observations, medication prescriptions, patient demographics) that were collected during patients' clinic visits to predict MS disease severity at the visit. We propose a multimodal deep neural network that takes MS patient Electronic Health Records of multiple modalities, including the MRI images (pre- and post-contrast T1 weighted image, T2 weighted image, fluid-attenuated inversion recovery image and proton density image), patient's clinical notes data and structured EHR (laboratory tests, vital signs, medications, demographic information) to perform MS disease severity prediction.
We also propose to use patients' longitudinal data for EDSS milestone prediction, based on the fact that evidence about a patient's MS disease severity is not only embedded in the most recent EHR data but also richly contained in the data of all previous clinic visits. Compared to using cross-sectional data (e.g., using clinical notes of the current visit to predict the EDSS score [2]), we propose to use both the patient's current and historical EHR data to train a multimodal deep neural network. Longitudinal data contain more MS disease progression information than cross-sectional data and help the model make more accurate predictions of the patient's current status. The contributions of this study are four-fold.
* A novel deep learning architecture (a multimodal neural network) and data fusion mechanism which takes Electronic Health Records including medications, vital signs, laboratory test results, clinical imaging, and physician notes to tackle the difficult problem of MS disease severity prediction. The results show significant prediction accuracy improvements compared to using single-modality data or simpler models.
* Using longitudinal data (both current and historical visits data) instead of cross-sectional data (data of current visit) to accurately classify patient EDSS score milestones at the current clinic visit.
* An analysis exhibiting how much useful information is embedded in each data modality for the prediction of MS severity. Various attention mechanisms are adopted in the proposed neural network to provide model explainability and enhance prediction accuracy.
* An end-to-end AI model that works on readily available data with limited pre-processing (e.g., it does not need feature extraction as a pre-processing step, such as extracting the thalamic volume, lateral ventricle volume, etc., to train the model).
The paper is structured as follows. The Data section explains the dataset that was used in this study and the details of each data modality. The subsequent section explains our designed deep neural network architecture, followed by an Experiment section including our experiment design, the obtained results, and the discussion. Finally, a summary and conclusion are given in the Discussion section.
### Data
Our database contains a rich set of 300 MS patients; patients' demographic information is summarized in Table 1. Each patient's data contains three modalities: 1) neuroimaging data, 2) structured EHR data, and 3) clinical notes. The neuroimaging data is stored in NIFTI format. Most patients have multiple clinic visits, and during each clinic visit, a patient may have multiple laboratory tests, recorded vital signs, different prescription drugs, diagnoses, and certain medical procedures and treatments, which are recorded in the structured EHR in separate tables. The clinical notes contain the physician's description of the patient's status at each clinic visit. Our proposed novel neural network architecture is designed to handle this heterogeneously structured database by learning representations of each modality. The prediction goal is set as a classification problem: to predict whether the patient has reached certain EDSS milestones at the current clinic visit. All 300 patients' EDSS scores were evaluated by physicians at the end of each clinic visit and were recorded in a table in the structured EHR. The real EDSS score was extracted from this EHR table and served as the ground-truth label. The research goal is to develop a deep learning model to predict the EDSS score at the current visit using all other information while masking the true EDSS score. Figure 1 demonstrates the distributions of patients' age and EDSS. Figure 2 plots all patients' historical EDSS scores along their disease course.
**Brain MRI.** We obtained a total of 360 MRI studies for the 300 patients. All imaging studies were performed on a Philips 3.0T Ingenia scanner (Philips Medical Systems, Best, Netherlands). Some patients have multiple MRIs from different clinic visits. The MRIs include five sequences: pre-contrast and post-contrast T1-weighted
sequences (T1-pre, T1-post), T2-weighted sequences, proton density-weighted sequences (PD) and fluid-attenuated inversion recovery sequences (FLAIR). All sequences were acquired with a field of view of 256 mm x 256 mm x 44 mm. For each patient, the MRI images were acquired in the axial plane. Figure 3 displays the MRI sequences of a sample patient. All MRI sequences are skull-stripped using Simple Skull Stripping (S3) [26] and the SRI24 template [27], bias-corrected using N4 Bias Field Correction to adjust the low-frequency intensity [28], and co-registered using FreeSurfer [29] to a common template (SRI24).
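As one illustrative step of this preprocessing pipeline, the snippet below applies N4 bias-field correction with SimpleITK; the file names and the Otsu-based foreground mask are assumptions, and the skull-stripping and co-registration steps are not shown.

```python
import SimpleITK as sitk

# N4 bias-field correction of one MRI sequence (file names are illustrative).
img = sitk.Cast(sitk.ReadImage("flair.nii.gz"), sitk.sitkFloat32)
mask = sitk.OtsuThreshold(img, 0, 1, 200)        # rough foreground mask
corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrected = corrector.Execute(img, mask)         # remove low-frequency intensity bias
sitk.WriteImage(corrected, "flair_n4.nii.gz")
```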
**Clinical Notes.** Patients' clinical notes are in free-text format and contain the physician's description of the patient's health status, basic health information such as weight, height, and BMI (body mass index), physiological status, diagnoses, medications, and received treatments. We de-identified all clinical notes by removing patients' and physicians' personal data.
**Structured EHR.** A patient's structured EHR contains laboratory test measurements (float), vital sign observations (float), medication administrations (0/1 indicator: taken or not taken), and demographic information (age: float; race/ethnicity/gender: 0/1) in a tabular format. We construct the tables with rows being observational time stamps and columns representing features. The features in each table are fixed for all patients, while the number of rows varies across tables and patients depending on how many observational time points a patient has. For each patient's laboratory test, vital sign, and medication tables, we set the time granularity to 4 hours. We
| | Average ± SD | Minimum | Maximum | .25 quantile | .75 quantile |
|---|---|---|---|---|---|
| Age | 43.62 ± 11.20 | 19.00 | 71.00 | 34.00 | 52.00 |
| EDSS at baseline | 1.93 ± 1.59 | 0 | 7.50 | 1.00 | 2.50 |
| EDSS at last visit | 2.90 ± 1.96 | 0 | 9.50 | 1.50 | 3.50 |
| Number of visits | 3.39 ± 1.60 | 1 | 13 | 2.00 | 4.00 |
| Years b/w first and last visits | 5.14 ± 4.34 | 0 | 22.66 | 2.03 | 7.01 |
| Number of MRI sessions/patient | 1.20 ± 0.96 | 0 | 4 | 0 | 2 |

Table 1: An overview of patient statistics in the dataset (SD: standard deviation).
Figure 1: The histograms of all patients by Age; Baseline EDSS (at initial hospital visit); EDSS at the last hospital visit; Total hospital visits; Years between the first and the last hospital visit; Number of hospital visits during which brain MRI scan was performed.
record each feature's average value if it has multiple observations during a 4-hour window. This helps to reduce table dimensions, eliminate observational noise, and avoid creating large, sparse tables that impede neural network training. If a feature has no observations during a 4-hour window, the corresponding entry is set to zero. The window size is treated as a hyper-parameter to be optimized to strike a balance between table dimension and information loss (a large averaging window smooths feature observations and blurs useful information, while a small window inflates the time dimension of the table and decreases the efficiency of network training). The optimal window size depends on the density of observations, which can vary across datasets, and we found the 4-hour window suitable for our dataset. Every 4-hour window is taken within a single clinic encounter and cannot cross two different encounters, ensuring that feature values from different encounters are not averaged together. For instance, a patient having 2 clinic encounters, from 2014-05-05 1:15:00PM to 2014-05-05 6:00:00PM and from 2015-09-20 9:12:00AM to 2015-09-20 1:00:00PM, will have 4 rows in each table, representing the observations from 2014-05-05 12PM to 2014-05-05 4PM, 2014-05-05 4PM to 2014-05-05 8PM, 2015-09-20 8AM to 2015-09-20 12PM, and 2015-09-20 12PM to 2015-09-20 4PM. Furthermore, we delete a row if it contains all zeros (no observations for any feature). Table 2 shows the variables used in our dataset. Each patient's demographic data is constructed as a fixed-size vector.
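A minimal pandas sketch of this 4-hour binning is shown below; the long-format column names and toy values are hypothetical, while grouping by encounter keeps windows from crossing encounters, as described above.

```python
import pandas as pd

# Long-format observations; column names are hypothetical.
df = pd.DataFrame({
    "encounter_id": [1, 1, 1, 2],
    "timestamp": pd.to_datetime(["2014-05-05 13:15", "2014-05-05 14:40",
                                 "2014-05-05 17:05", "2015-09-20 09:12"]),
    "feature": ["glucose", "glucose", "albumin", "glucose"],
    "value": [5.4, 5.8, 4.1, 6.0],
})

binned = (
    df.groupby(["encounter_id", "feature",
                pd.Grouper(key="timestamp", freq="4h")])["value"]
      .mean()                                    # average repeated observations
      .unstack("feature")                        # rows: 4-hour windows, cols: features
      .fillna(0.0)                               # unobserved entries set to zero
)
binned = binned.loc[(binned != 0).any(axis=1)]   # drop all-zero rows
```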
Figure 4 demonstrates an example patient's three clinic encounters. Note that not all data modalities were observed in each encounter.
### An encoder-decoder architecture for data fusion
We propose a multimodal neural network that takes data in various modalities (structured EHR, clinical notes, MRIs) as input and is trained to predict a patient's EDSS score. The proposed neural network adopts an encoder-decoder schema in a sequential structure with the self-attention module.
#### Encoder Network
The goal of the encoder network is to process data of different modalities and transform them into dense embeddings belonging to the same high-dimensional
Table 2: The laboratory test, vital sign, and medication variables used in the dataset.
latent space. For different modalities, the neural network employs different encoder neural network architectures that are suitable for the learning task (CNN for image processing and structured EHR, Graph Neural Network or BERT for clinical notes), see Figure 5.
**Structured EHR.** The encoder network for structured EHR is composed of multiple parallel channels; the channels share a homogeneous network structure but use different hyper-parameters to fit the patient's structured EHR tables of different sizes, see Figure 6.
The number of rows can differ across patients and across different tables of the same patient (if a certain table has no observations in a certain 4-hour window). Such an irregular-sampling format causes heterogeneity along the time axis. Traditional imputation-based methods usually define a shared, regularly spaced time axis for all patients and all tables and try to impute the missing values at the unobserved time points using various techniques, such as filling with zero, the average, or the majority value, forward or backward filling, or advanced techniques such as multiple imputation [30] or the Gaussian process [3]. Imputation-based methods can be computationally expensive and time-consuming, and, most importantly, they increase the time dimension, especially when the time between two clinic encounters is long.
We introduce a self-attention module into the encoder network to handle the irregular-sampling issue, by prompting the neural network to automatically pay distinctive attention (by assigning different attention weights) to different time points of the patient history, aggregating them, and producing a vectored representation (embedding) of each table. An attention weight is computed for each row (4-hour time window) by applying multiple layers of 1-dimensional convolutional neural networks (CNNs) on the feature dimension and outputting an attention weight for each time stamp. The attention vector can be seen as attention weights over different time stamps, and after it is applied to the original input data, the network generates an embedding of a fixed dimension that is consistent for all patients.
Figure 5: The encoder network for our proposed deep neural network.
To be specific, all channels consist of multiple stacked 1D convolution layers followed by the ReLU activation layer and dropout layers. The number of layers is set up differently for different channels according to the number of features in the input tables. For the \(i\)-th patient, the \(k\)-th data table \(\mathbf{D}_{k}^{i}\) of dimension \(t_{k}^{i}\times f_{k}\) is fed into the \(k\)-th channel, where \(t_{k}\) rows represent the time stamps of clinic visits and \(f_{k}\) columns represent variables. Note that different EHR tables (laboratory tests, vital signs, medications, etc.) have different \(f_{k}\) and different patients have different numbers of clinic visits \(t_{k}^{i}\). Each row of the table is processed through a stack of multiple 1D CNNs (see Figure 6) and is reduced to a single value (attention weight). The entire table will generate an attention weight vector \(\mathbf{\alpha}_{k}^{i}\) of size \(t_{k}^{i}\times 1\). The attention weights can be viewed as the weight factor of all \(f_{k}\) features at different time points. In the following, we omit the patient index \(i\).
We multiply the attention vector \(\mathbf{\alpha}_{k}\) with the input matrix \(\mathbf{D}_{k}\) to get the feature map \(\mathbf{e}_{k}\) for each table,
\[\mathbf{e}_{k}=\mathbf{\alpha}_{k}^{T}\cdot\mathbf{D}_{k}. \tag{1}\]
where \(\mathbf{e}_{k}\) is of size \(1\times f_{k}\). Specifically, each element in \(\mathbf{e}_{k}\) is calculated as
\[\mathbf{e}_{k}[j]=\sum_{m=1}^{t_{k}}\mathbf{\alpha}_{k}[m]\mathbf{D}_{k}[m,j],\text{ for }j=1,\ldots,f_{k}, \tag{2}\]
and \(\mathbf{e}_{k}\) is the embedding vector of the \(k\)-th table for a certain patient.
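A minimal PyTorch sketch of this attention pooling (Eqs. (1)-(2)) is shown below; the softmax normalization of the attention logits and the layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TableAttentionPool(nn.Module):
    """Eqs. (1)-(2): stacked 1D CNNs score each 4-hour row, and the
    attention-weighted sum yields a fixed-size embedding per table."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.score = nn.Sequential(              # maps each row to one logit
            nn.Conv1d(n_features, hidden, kernel_size=1), nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),
        )

    def forward(self, D):                        # D: (t_k, f_k), t_k varies
        logits = self.score(D.T.unsqueeze(0))    # (1, 1, t_k)
        alpha = torch.softmax(logits.flatten(), dim=0)  # weights over time
        return alpha @ D                         # Eq. (1): e_k = alpha^T D_k

pool = TableAttentionPool(n_features=24)
e_k = pool(torch.randn(57, 24))                  # 57 windows -> (24,) embedding
```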
**Image Embedding.** The encoder channel for patient MRI images takes a different network structure from that for structured EHR. We leverage ResNet [31] to process the MRI images. Each MRI sequence (T1-pre, T1-post, T2, PD, FLAIR) is fed into a respective ResNet model. The output is a fixed-length latent representation of each MRI sequence as an embedding vector of a preset dimension. Alternatively, other ResNet variants [32] could also serve as the embedding learning network in our task. Our experiments show that adopting different network structures for the MRI sequences brings only trivial accuracy improvements in the final prediction performance, because 1) the ResNet model itself is powerful enough to capture the key features in the MRI images and generate diverse embeddings
Figure 6: The detailed architecture of one of the encoder channels for processing structured EHR data. The figure shows the lab test channel as an example.
for positive and negative patients; 2) the MRI data accounts for only a portion of the multimodal input, so the effect of ResNet variations on the final outcome is diluted by the other data modalities.
**Clinical Notes Embedding.** The encoder channel for patient clinical notes uses a graph attention convolution model, which takes text as input and outputs an embedding for each document [33]. The medical word embeddings come from a pre-trained database trained on PubMed+MIMIC-III [34]. The graph attention model treats the entire document as a word co-occurrence network, representing words in the corpus of all patients' documents as graph nodes. In addition, we add a "document node" that represents the entire document and connects to all other nodes. The model maintains a sliding window to capture word co-occurrences, which are represented as edges of the graph. The edges are directed and weighted in order to represent the correct word order within the sliding window and to retain meaningful semantics and word co-occurrence counts. The entire network is trained through message passing. We define \(G(V,E)\) as the graphical network and denote the neighbors of a node \(v\in V\) as \(\mathcal{N}(v)\). A node \(v\) constructs a broadcasting message by aggregating (using a multi-layer perceptron) its neighbors' node embeddings,
\[\mathbf{m}_{v}^{t+1}=\text{ AGGREGATE }^{t+1}\left(\left\{\mathbf{h}_{w}^{t}\mid w\in \mathcal{N}(v)\right\}\right), \tag{3}\]
which can proceed in a parallel manner using matrix format,
\[\mathbf{M}^{t+1}=\text{MLP}^{t+1}\left(\mathbf{D}^{-1}\mathbf{A}\mathbf{H}^{t}\right), \tag{4}\]
where \(\mathbf{H}^{t}\in\mathbf{R}^{n\times d}\) contains the \(d\)-dimensional node features of the \(n\) nodes, \(\mathbf{A}\in\mathbf{R}^{n\times n}\) is the adjacency matrix, \(\mathbf{D}\) here denotes the degree matrix, and MLP is a multi-layer perceptron.
All nodes update themselves by their own embedding and all messages from their neighbors using a Gated Recurrent Unit (GRU) network,
\[\mathbf{h}_{v}^{t+1}=\text{ COMBINE }^{t+1}\left(\mathbf{h}_{v}^{t},\mathbf{m}_{v}^{t+1} \right), \tag{5}\]
again in matrix format,
\[\mathbf{H}^{t+1}=\text{GRU}\left(\mathbf{H}^{t},\mathbf{M}^{t+1}\right). \tag{6}\]
After \(T\) steps, a final self-attention read-out layer is used to aggregate all nodes embeddings and output a latent vector to represent the entire document,
\[\mathbf{Y}^{T} =\tanh\left(\hat{\mathbf{H}}^{T}\mathbf{W}_{A}^{T}\right) \tag{7}\] \[\mathbf{\beta}_{i}^{T} =\frac{\exp\left(\mathbf{Y}_{i}^{T}\cdot\mathbf{v}^{T}\right)}{\sum_{j=1} ^{n-1}\exp\left(\mathbf{Y}_{j}^{T}\cdot\mathbf{v}^{T}\right)}\] (8) \[\mathbf{u}^{T} =\sum_{i=1}^{n-1}\mathbf{\beta}_{i}^{T}\hat{\mathbf{H}}_{i}^{T} \tag{9}\]
where \(\hat{\mathbf{H}}^{T}\in\mathbf{R}^{(n-1)\times d}\) is the final representation of the \(n-1\) word nodes (the document node removed) after \(T\) time steps, and \(\mathbf{W}_{A}^{T}\) and \(\mathbf{v}^{T}\) are the parameters of a dense attention layer. Therefore, \(\mathbf{u}^{T}\in\mathbf{R}^{d}\) is the final representation of the document, i.e., the aggregation of all node features, which is fed into a classification layer for document classification.
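The message passing of Eqs. (4)-(9) can be sketched in a few lines of PyTorch. The per-step MLPs, the shared GRU cell, the learned context vector \(\mathbf{v}\), and placing the document node last are our assumptions where the description leaves details open.

```python
import torch
import torch.nn as nn

class DocGraphEncoder(nn.Module):
    """Message passing over a word co-occurrence graph, following Eqs. (4)-(9)."""
    def __init__(self, d, steps=2):
        super().__init__()
        self.mlps = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, d), nn.ReLU()) for _ in range(steps)])
        self.gru = nn.GRUCell(d, d)            # COMBINE step, Eq. (6)
        self.W_A = nn.Linear(d, d)             # read-out dense layer, Eq. (7)
        self.v = nn.Parameter(torch.randn(d))  # read-out context vector

    def forward(self, H, A):                   # H: (n, d) node features, A: (n, n) adjacency
        Dinv = torch.diag(1.0 / A.sum(dim=1).clamp(min=1e-8))  # inverse degree matrix
        for mlp in self.mlps:
            M = mlp(Dinv @ A @ H)              # Eq. (4): aggregate neighbor embeddings
            H = self.gru(M, H)                 # Eq. (6): update all nodes in parallel
        Hw = H[:-1]                            # drop the document node (assumed last)
        Y = torch.tanh(self.W_A(Hw))           # Eq. (7)
        beta = torch.softmax(Y @ self.v, dim=0)  # Eq. (8): attention over word nodes
        return beta @ Hw                       # Eq. (9): document embedding of size d
```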
### Multi-modality Fusion
Medical data often contain multiple types of information (demographics, vitals, labs, diagnoses, procedures, medications, etc.), and there is intrinsic logic behind them. For example, vitals and labs contribute to the diagnosis, and the diagnosis determines the procedure and medication. Some of this information is time-invariant (e.g., demographics), while the rest changes over time, so they need to be handled differently. Based on the causal relationships (vitals, labs, MRI scan) \(\rightarrow\) (diagnosis) \(\rightarrow\) (prescription, procedure) \(\rightarrow\) (medicine administration), we build our data-fusion pipeline for time-variant information through the aforementioned bidirectional GRU-based decoder. The inputs (see the left part of Figure 7) are ordered so as to learn the intrinsic relationships among this information. The latent representation vectors from each encoder channel are stacked into a regular matrix \(\mathbf{E}\) (zero-padded if not of the same length), where each row represents a modality,
\[\mathbf{E}=[\text{ZeroPadding}(\mathbf{e}_{1})^{T},\ldots,\text{ZeroPadding}(\mathbf{e}_ {K})^{T}]^{T}, \tag{10}\]
where \(\mathbf{E}\) is of dimension \(K\times d,d=\max(f_{1},\ldots,f_{K})\).
We integrate the time-invariant demographics at the end of the layer as a late fusion step (see the right part of Figure 7) to combine both pieces of information holistically.
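A minimal sketch of the stacking step of Eq. (10) (the function name is ours):

```python
import torch

def stack_embeddings(embeddings):
    """Zero-pad the per-modality embeddings to a common length d and stack them
    into the K x d fusion matrix E of Eq. (10)."""
    d = max(e.numel() for e in embeddings)
    rows = [torch.cat([e.flatten(), e.new_zeros(d - e.numel())]) for e in embeddings]
    return torch.stack(rows)  # E: (K, d)
```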
### Decoder Network
We propose a decoder network composed of a stacked bidirectional GRU (Bi-GRU) network with a self-attention module, taking the feature matrix \(\mathbf{E}\) as input. The self-attention serves to learn importance weights on the state vectors from the different data modalities. The Bi-GRU network takes \(K\) as the sequence length and \(d\) as the input size. We use \(\mathbf{C}\) to denote the stack of hidden states over all time points, which is of dimension \(K\times h\), \(h=2\times\text{hidden size}\) (the factor 2 comes from the bidirectional network).
Each state of the bidirectional GRU network is fed into an attention module, which is a 1D convolution layer with multiple output channels. The attention module outputs a vector of attention weights \(\mathbf{\gamma}\) of length \(g\) (a hyper-parameter, determined by the output channels of the convolution layer), and
\[\mathbf{B}=[\mathbf{\gamma}_{1}^{T},\ldots,\mathbf{\gamma}_{K}^{T}]^{T}, \tag{11}\]
where \(\mathbf{B}\) is of dimension \(K\times g\) denoting the attention matrix. The attention matrix is multiplied with the GRU output,
\[\mathbf{O}=\mathbf{B}^{T}\cdot\mathbf{C}. \tag{12}\]
where \(\mathbf{O}\) is of dimension \(g\times h\). The purpose of this attention layer is to enforce a feature reduction from the high-dimensional GRU outputs to a smaller, more informative lower-dimensional embedding, both to reduce noise and to increase the efficiency of neural-network training.
The output matrix \(\mathbf{O}\) is flattened, concatenated with the patient demographic vector \(\mathbf{d}\), and fed into a fully-connected (FC) layer for prediction,
\[o=\text{FC}(\text{Concat}(\text{Flatten}(\mathbf{O}),\mathbf{d})). \tag{13}\]
see Figure 7.
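The decoder of Eqs. (11)-(13) can be sketched as follows. The softmax over the attention logits is our assumption, and the sketch stores the attention matrix already transposed, so that `B @ C` realizes \(\mathbf{O}=\mathbf{B}^{T}\mathbf{C}\) of Eq. (12).

```python
import torch
import torch.nn as nn

class FusionDecoder(nn.Module):
    """Bi-GRU over the K modality embeddings + conv attention + FC head, Eqs. (11)-(13)."""
    def __init__(self, d, hidden=512, layers=4, g=8, demo_dim=4):
        super().__init__()
        self.gru = nn.GRU(d, hidden, num_layers=layers,
                          bidirectional=True, batch_first=True)
        self.attn = nn.Conv1d(2 * hidden, g, kernel_size=1)  # g attention channels
        self.fc = nn.Linear(g * 2 * hidden + demo_dim, 1)

    def forward(self, E, demo):              # E: (K, d), demo: (demo_dim,)
        C, _ = self.gru(E.unsqueeze(0))      # C: (1, K, h) with h = 2 * hidden
        B = torch.softmax(self.attn(C.transpose(1, 2)), dim=-1)  # (1, g, K)
        O = B @ C                            # Eq. (12): (1, g, h)
        return self.fc(torch.cat([O.flatten(), demo]))  # Eq. (13): late fusion + FC
```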
## Experiments
### Images
We introduce five channels to process the MRI sequences, where each channel employs a ResNet structure. The five channels are independent, and each is trained to learn from one sequence (T1-pre, T1-post, T2, FLAIR, and PD). All MRI images are bias-corrected, skull-stripped, registered, and intensity-normalized [35]. If a patient underwent MRI scans in more than one hospital visit, we use the last scan, as it represents the patient's most recent disease status. Due to the relatively high imbalance between positive and negative samples, we performed 10-fold re-sampling of the negative training samples during model training. To add robustness to the learning of the ResNet, all input MRI images are randomly rotated with probability 0.5 by a maximum of \(\pm 0.02\) degrees along all three dimensions. For each channel, a ResNet model is trained on the training dataset, and we select the trained model with the best performance on the validation dataset. Our goal is to learn a latent vector representation rather than to perform disease classification; therefore, the training process is formulated as a metric-learning task in which each channel's ResNet learns an embedding for each MRI sequence of a patient. The triplet margin loss [36] operates directly on embedding distances by
Figure 7: The decoder network for our proposed deep neural network.
pulling the matching (positive) point toward the reference (anchor) point and pushing the non-matching (negative) point away from the anchor. By using a triplet margin loss, the network learns well-separated embedding vectors for positive and negative patients, allowing the downstream decoding network to perform classification. The triplet margin loss is defined as
\[loss=\sum_{a_{i},n_{i},p_{i}\in\mathrm{batch}}\max\left(d\left(a_{i},p_{i} \right)-d\left(a_{i},n_{i}\right)+\mathrm{margin},0\right), \tag{14}\]
where \(a_{i}\), \(p_{i}\), and \(n_{i}\) are an anchor, a positive, and a negative sample in the batch, respectively. We set the anchor point in our model to a fixed point in the embedding space; therefore, the distance from the positive samples to the anchor is minimized and the distance from the negative samples to the anchor is maximized.
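A minimal sketch of this fixed-anchor variant of Eq. (14); pairing each positive with one negative in the batch is our simplification.

```python
import torch

def fixed_anchor_triplet_loss(pos_emb, neg_emb, anchor, margin=1.5):
    """Triplet margin loss of Eq. (14) with a fixed anchor: positive embeddings
    are pulled toward the anchor and negative embeddings pushed away from it."""
    d_pos = torch.norm(pos_emb - anchor, dim=1)   # distances d(a, p_i)
    d_neg = torch.norm(neg_emb - anchor, dim=1)   # distances d(a, n_i)
    return torch.clamp(d_pos - d_neg + margin, min=0).sum()
```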
The margin in the triplet margin loss is set to 1.5, the learning rate to \(10^{-5}\), and the batch size to 10. The ResNet in each encoder channel is trained for 500 epochs, with an early-stopping criterion of no improvement on the validation dataset for 50 consecutive epochs.
We leverage the gradient-weighted class activation mapping (Grad-CAM) [37] method to locate and visualize the important regions the ResNet is attending to when predicting the target. Grad-CAM uses the gradients of the prediction target flowing into the last convolutional layer of the ResNet to produce a heatmap of regions according to their contribution to the prediction; see Figure 8.
Figure 8: Attention maps for MRI sequences of a sample patient.
### Clinical Notes
We preprocess patients' clinical notes by identifying and removing all sensitive patient health information irrelevant to our prediction task, including patient and physician names, addresses, phone numbers, and email addresses. As with the MRI image data, we formulate embedding generation from clinical notes as a metric-learning problem, in which the message-passing graph neural network is trained to learn meaningful embeddings and distances between positive and negative samples. Hence, the same loss function (14) is used for this encoder channel.
We set the window size to 10 (covering 10 consecutive words) and the number of message-passing layers to 2. The hidden size of the GRU network is 64. We train the graph network for 500 epochs with a batch size of 128, a learning rate of \(10^{-3}\), and an early-stopping criterion of 50 epochs without improvement on the validation dataset. We choose the best-performing model on the validation dataset and run it on the test dataset to obtain the model's final performance.
### Structured EHR
The patient's structured EHR consists of tables in 4 categories: the laboratory tests table, the vital signs table, the medications table, and the demographics table. The first 3 categories are in the format _number of timestamps \(\times\) number of features_, containing the laboratory test results (float), vital sign measurements (float), and medications (0/1 indicators), respectively. Table 2 shows a pre-selected subset of the variables from these 3 categories used in our model, chosen based on their observation frequency. The demographics table contains race (0/1, one-hot encoded), ethnicity (0/1, one-hot encoded), sex (0/1, male/female), and age (float, min-max normalized). The encoder network consists of 3 channels, one for each of the first 3 categories; the network parameters are described in Table 3.
A patient's three structured-EHR embeddings produced by the encoder network are concatenated with the five MRI image embeddings produced by the ResNets and with the clinical note embedding, and fed into the decoder network. For the small number of patients without MRI or clinical notes, the corresponding embedding is set to an all-zero vector. In the decoder network, the bidirectional GRU has 4 layers and a hidden size of 512.
### Results
We used 5-fold cross-validation, randomly splitting the 300 patients into five folds and iteratively using each fold as the hold-out test set (20%) and the remainder as the training set (80%). The model's performance in predicting EDSS \(>\) 4.0 using different data modalities and their combinations is presented in Table 4. The prediction goal is whether the patient's EDSS \(>\) 4 at the current clinic visit, using longitudinal data from the current and all previous visits. Multimodal inputs in general perform much better than single-modality inputs, and the top-3
\begin{table}
(Table body not recoverable from the source; it listed the stacked 1D-CNN layer configurations for the laboratory test, vital sign, and medication encoder channels.)
\end{table}
Table 3: Encoder network parameters (I: input channel size, O: output channel size, K: kernel size, S: stride size, P: padding size, R: (dropout) rate).
AUROC performances are achieved when all data (0.9301), EHR & Notes (0.9196), and MRIs & Notes (0.9201) are used. The degradation in performance from removing MRI or EHR information from the input is very limited. However, if the clinical notes are removed, the performance drops to 0.8499. Table 5 shows the model's performance in predicting other EDSS milestones (EDSS\(>\)6 and EDSS\(>\)7) using all data modalities.
The encoder channels (laboratory tests, vital signs, medications) for the structured EHR, being self-attention networks themselves, can also generate feature importances. Note that feature importance can be generated at both the individual and global levels; the latter is evaluated as the average over the feature importances of all individuals. Figure 9 shows the global importance of all laboratory features evaluated on all patients in the test set, where a larger value corresponds to higher importance. As can be seen from the figure, the top 3 most important features across all patients are "Absolute Neutrophils", "Absolute Lymphocytes", and "Absolute Monocytes". Figure 10 shows the global feature importance for all vital signs and medications; medications such as "Baclofen 10 MG Oral Tablet", "Gabapentin 300 MG Oral Capsule", and "predniSONE 50 MG Oral Tablet", which are commonly used to treat MS symptoms, were identified and assigned high importance by our algorithm. Among the attention weights on vital signs, "Temperature", "Respiration", "Pulse Quality", and "Respiration Quality" are reasonably assigned the least importance for the prediction of MS.
Overall, the most informative modalities for this prediction goal are the brain MRI images and the clinical notes, while the structured EHR contributes the least.
The results show that, despite the many publications on the topic, conventional MRI contains relatively little information about MS severity compared to the other data modalities, although the T2 and FLAIR MRI sequences performed relatively better than the other sequences. Clinical notes are well documented as useful for predicting EDSS, which is re-verified in our experiments: the relatively weak performance of using MRIs, EHR, or MRIs & EHR alone was improved in every case once clinical notes were added to the input. A re-examination of the data offers a reasonable explanation: the clinical notes contain rich general disease information, including patient status, medical procedures, and treatment information, which implicitly and partially embeds information from the EHR data and MRI images.
## Future research directions and limitations
The study focuses on predicting a patient's MS severity at the current clinic visit using current and historical medical information, with the goal of developing an AI-
Figure 9: The attention weights for laboratory tests.
based patient disease-status evaluation tool to replace the human expert. A more interesting research question is to predict a patient's future MS disease progression. This necessarily requires considering the EDSS change "rate" together with the disease duration. For example, an EDSS of 4.0 at age 65 after a disease duration of 40 years indicates a relatively benign disease, whereas an EDSS of 4.0 only 5 years after MS diagnosis is considered "aggressive" MS. Moreover, as one of the reviewers who helped improve this paper pointed out, the severity of MS can be seen as a relative concept rather than an absolute one: it should be studied against an understanding of the "natural" disease progression, and it varies with many factors (e.g., sex, disease duration, lesion load, atrophy, etc.). Limited by the data size and the lack of commonly agreed-upon criteria to distinguish the "aggressive" cases from the rest, we focus on developing a tool to predict EDSS milestones for now and leave the determination of MS severity to MS specialists, who can jointly consider all the above factors. This question is an interesting research problem in its own right and could potentially be studied using survival methods; the results would have a high impact on preventing rapid disease progression through early intervention.
The second limitation concerns the imaging data. While random rotation of the MRI scans (a data-augmentation technique used to train the ResNets on the MRI sequences) helps generalizability, the use of a single scanner for the entire dataset makes it difficult to infer whether the model would behave the same way on new images from a different scanner. Our work therefore serves as a proof of concept regarding this question. Ideally, more data (especially data from external sources) should be collaboratively collected to verify that the inclusion of MRI has a positive impact on a multimodal model.
Thirdly, the study was conducted on a cohort of 300 MS patients from a single local academic medical center. An important future research direction is to evaluate the generalizability of the proposed model to other institutions. Replicability should be checked from two perspectives: first, the prediction accuracy with or without model retraining; and second, whether the ranking of importance across data modalities holds in general, for example, whether MRI images and clinical notes contain more signal than the structured EHR. If the results in this study are verified, they may serve as a cost-effective guide recommending which electronic health information should be collected to reach maximum prediction accuracy.
Figure 10: The attention weights for (a) medications and (b) vital signs.
## Abbreviations
MS: Multiple sclerosis; MSSS: Multiple sclerosis severity scores; P-MSSS: Patient-derived MSSS; EDSS: Expanded disability status scale; EHR: Electronic health records; AUROC: Area under the receiver operating characteristic curve; AUPRC: Area under the precision-recall curve; MRI: Magnetic resonance imaging; SD: Standard deviation; PD: Proton density; GRU: Gated recurrent unit; Grad-CAM: Gradient-weighted class activation mapping; BCE: Binary cross entropy.
## Supplementary Information
### Acknowledgements
We thank the reviewers for proposing a critical perspective of viewing this research problem which greatly helped improve the quality of this manuscript.
### Authors' contributions
KZ, SS, and XJ conceived the original idea and designed the model. KZ implemented the model and conducted the experiments. EB and JL contributed to the conception, data acquisition, and interpretation. KZ drafted the manuscript. SS, EB, JL, and XJ critically revised the manuscript. EB and JL were in charge of overall direction and planning and helped supervise the project.
### Funding
XJ is a CPRIT Scholar in Cancer Research (RR180012) and was supported in part by the Christopher Sarofim Family Professorship, a UT Stars award, UTHealth startup funds, and the National Institutes of Health (NIH) under award numbers R01AG066749 and U01TR002062.
### Availability of data and materials
The data that support the findings of this study are available on request from the corresponding author SS. The data are not publicly available due to their containing information that could compromise the privacy of research participants. Code is publicly available on Github: [https://github.com/anotherkaihang/MS](https://github.com/anotherkaihang/MS).
### Declarations
**Ethical approval and consent to participate**
The study protocol was approved before the initiation of this study by the Committee for the Protection of Human Subjects of the University of Texas Health Science Center at Houston under IRB: HSC-MS-02-090. All recruited patients provided written informed consent upon enrollment. All methods were performed in accordance with the Declaration of Helsinki.
### Consent for publication
Not applicable.
### Competing interests
The authors declare that there is no conflict of interest.
### Author details
1School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States. 2Department of Neurology, University of Texas Health Science Center, McGovern Medical School, Houston, TX, United States. 3Division of General Internal Medicine, Department of Internal Medicine, University of Texas Health Science Center, McGovern Medical School, Houston, TX, United States. 4Department of Applied Data Science, San Jose State University, San Jose, CA, United States.
|
2305.15508 | How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods
for Selective Classification with Deep Neural Networks | This paper addresses the problem of selective classification for deep neural
networks, where a model is allowed to abstain from low-confidence predictions
to avoid potential errors. We focus on so-called post-hoc methods, which
replace the confidence estimator of a given classifier without modifying or
retraining it, thus being practically appealing. Considering neural networks
with softmax outputs, our goal is to identify the best confidence estimator
that can be computed directly from the unnormalized logits. This problem is
motivated by the intriguing observation in recent work that many classifiers
appear to have a "broken" confidence estimator, in the sense that their
selective classification performance is much worse than what could be expected
by their corresponding accuracies. We perform an extensive experimental study
of many existing and proposed confidence estimators applied to 84 pretrained
ImageNet classifiers available from popular repositories. Our results show that
a simple $p$-norm normalization of the logits, followed by taking the maximum
logit as the confidence estimator, can lead to considerable gains in selective
classification performance, completely fixing the pathological behavior
observed in many classifiers. As a consequence, the selective classification
performance of any classifier becomes almost entirely determined by its
corresponding accuracy. Moreover, these results are shown to be consistent
under distribution shift. Our code is available at
https://github.com/lfpc/FixSelectiveClassification. | Luís Felipe P. Cattelan, Danilo Silva | 2023-05-24T18:56:55Z | http://arxiv.org/abs/2305.15508v4 | Improving selective classification performance of deep neural networks through post-hoc logit normalization and temperature scaling
###### Abstract
This paper addresses the problem of selective classification for deep neural networks, where a model is allowed to abstain from low-confidence predictions to avoid potential errors. Specifically, we tackle the problem of optimizing the confidence estimator of a fixed classifier, aiming to enhance its misclassification detection performance, i.e., its ability to discriminate between correct and incorrect predictions by assigning higher confidence values to the correct ones. Previous work has found that different classifiers exhibit varying levels of misclassification detection performance, particularly when using the maximum softmax probability (MSP) as a measure of confidence. However, we argue that these findings are mainly due to a sub-optimal confidence estimator being used for each model. To overcome this issue, we propose a simple and efficient post-hoc confidence estimator, named \(p\)-NormSoftmax, which consists of transforming the logits through \(p\)-norm normalization and temperature scaling, followed by taking the MSP, where \(p\) and the temperature are optimized based on a hold-out set. This estimator can be easily applied on top of an already trained model and, in many cases, can significantly improve its selective classification performance. When applied to 84 pretrained Imagenet classifiers, our method yields an average improvement of 16% in the area under the risk-coverage curve (AURC), exceeding 40% for some models. Furthermore, after applying \(p\)-NormSoftmax, we observe that these models exhibit approximately the same level of misclassification detection performance, implying that a model's selective classification performance is almost entirely determined by its accuracy at full coverage.
## 1 Introduction
A reliable model must be able to identify cases where it is likely to make an incorrect prediction and withhold the output to prevent a wrong decision. This ability is essential in many real-world applications, such as in finance, medical diagnosis, and autonomous driving, where the consequences of erroneous decisions can be severe (Zou et al., 2023; Neumann et al., 2018). However, it is well-known that modern deep neural networks, which have increasingly been considered for such applications, often exhibit overconfidence in their predictions (Guo et al., 2017; Goodfellow et al., 2014). This issue has motivated a lot of recent research in the general subject of uncertainty estimation in deep learning (Gawlikowski et al., 2022; Zhang et al., 2023; Abdar et al., 2021).
The task of enhancing a classifier's accuracy by abstaining from low-confidence predictions is known as _selective classification_(Geifman and El-Yaniv, 2017), which is essentially equivalent to the task of misclassification detection (Hendrycks and Gimpel, 2016). In the case of neural networks
with softmax outputs, the natural baseline is to take the maximum softmax probability (MSP) as a confidence estimator (Geifman and El-Yaniv, 2017; Hendrycks and Gimpel, 2016). The vast majority of papers in the area attempt to improve upon this baseline by either modifying the training procedure or designing a specific architecture to provide better uncertainty quantification. While such an approach is potentially optimal, the fact that it requires retraining a model is a significant practical drawback.
An alternative, less explored approach is that of post-hoc learning a confidence estimator, which does not require retraining. Papers that follow this approach typically construct a _meta-model_ that feeds on intermediate features of the base model and is trained to predict whether or not the base model is correct on hold-out samples (Corbiere et al., 2022; Shen et al., 2022). However, depending on the size of such a meta-model, its training may still be computationally demanding. Another approach that may be considered post-hoc is the use of certain ensembles that do not require retraining, such as Monte-Carlo dropout (Gal and Ghahramani, 2016). However, the fact that multiple inference passes need to be performed significantly increases the computational burden at test time.
In this paper, we focus on simple post-hoc methods for confidence estimation that can be computed directly from the network's unnormalized _logits_ (pre-softmax output). This approach is practically appealing as it can be directly applied to any pre-trained model. Such post-hoc methods are common in the related (but fundamentally different) task of probability calibration, which aims to provide probability estimates representative of the true likelihood of correctness. The most prominent example is temperature scaling (TS) (Guo et al., 2017), a simple and efficient logit-based method that requires tuning a single parameter and is shown to be remarkably effective for calibration. The usefulness of TS for selective classification has been investigated by Galil et al. (2023), who observed that, depending on the model, TS may improve or harm selective classification performance. To the best of our knowledge, designing and optimizing logit-based confidence estimators for selective classification has not been attempted before.
Inspired by (Galil et al., 2023), we propose a post-hoc confidence estimator that combines three previous ideas: temperature scaling, logit normalization (Wei et al., 2022) and logit centralization (Jiang et al., 2023), the latter two originally proposed as training techniques. We further extend these ideas by considering a more general \(p\)-norm normalization and by optimizing the temperature (as well as \(p\)) directly to improve selective classification performance. In addition, we propose a simple heuristic to choose the temperature so that only \(p\) needs to be optimized. Our approach, named \(p\)-NormSoftmax, is practically appealing as it can be applied to any existing model without retraining, requires tuning a single parameter, is straightforward to implement, and is very data-efficient. Moreover, it can provide significant gains in selective classification performance for a variety of existing models.
Figure 1: A comparison of RC curves made by three models selected in (Galil et al., 2023), including examples of highest (ViT-L/16-384) and lowest (EfficientNet-V2-XL) AUROC. After the application of our post-hoc method, the apparent pathology in EfficientNet-V2-XL completely disappears, resulting in significantly improved selective classification performance.
Our method apparently solves an intriguing problem reported in (Galil et al., 2023) and illustrated in Fig. 1: some state-of-the-art ImageNet classifiers, despite attaining excellent predictive performance, nevertheless exhibit appallingly poor performance at misclassification detection. After applying our method, this issue completely disappears, suggesting that such pathologies are fixable.
The contributions of this work are:
* We investigate the trade-off between calibration and selective classification metrics for temperature scaling;
* We propose a simple and efficient post-hoc confidence estimator optimized for selective classification;
* An experimental study of the selective classification performance of 84 ImageNet classifiers, showing an average gain of 16% in AURC after the application of our method;
* A comparison of these models showing that, after \(p\)-NormSoftmax, all models exhibit approximately the same level of misclassification detection performance.
## 2 Related Work
Selective prediction is also known as learning with a reject option (see (Zhang et al., 2023; Hendrickx et al., 2021) and references therein), where the rejector is usually a thresholded confidence estimator. Essentially the same problem is studied under the equivalent terms misclassification detection (Hendrycks and Gimpel, 2016), failure prediction (Corbiere et al., 2022; Zhu et al., 2022), and (ordinal) ranking (Moon et al., 2020; Galil et al., 2023). Uncertainty estimation is a more general term that encompasses these tasks (where confidence may be taken as negative uncertainty) as well as other tasks where uncertainty might be useful, such as calibration and out-of-distribution (OOD) detection, among others (Gawlikowski et al., 2022; Abdar et al., 2021). These tasks are generally not aligned: for instance, optimizing for calibration may harm selective classification performance (Ding et al., 2020; Zhu et al., 2022; Galil et al., 2023). Our focus here is on in-distribution selective classification, although we also study robustness to distribution shift. While most approaches consider the base model as part of the learning problem (Geifman and El-Yaniv, 2019; Huang et al., 2020; Liu et al., 2019), we focus on simple post-hoc estimators that can be computed from the logits.1. Note that, from a post-hoc perspective, other tasks can be treated as independent problems.
Footnote 1: Interestingly, Feng et al. (2023) has found that, for some of these approaches, MSP is still the best selective mechanism after the base model is trained.
A popular tool in the uncertainty literature is the use of ensembles (Lakshminarayanan et al., 2017; Gal and Ghahramani, 2016; Teye et al., 2018; Ayhan and Berens, 2018). While constructing a confidence estimator from ensemble component outputs may be considered post-hoc if the ensemble is already trained, recent work has found evidence that ensembles may not be fundamental for uncertainty but simply better predictive models (Abe et al., 2022; Cattelan and Silva, 2022; Xia and Bouganis, 2022). Thus, we do not consider ensembles here.
Applying TS to improve calibration (of the MSP confidence estimator) was proposed in (Guo et al., 2017) based on the negative log-likelihood. Optimizing TS for other metrics has been explored in (Mukhoti et al., 2020; Karandikar et al., 2021; Clarke et al., 2023) for calibration and in (Liang et al., 2023) for OOD detection, but had not been proposed for selective classification. A generalization of TS is adaptive TS (ATS) (A. Balanya et al., 2023), which uses an input-dependent temperature based on logits. Our approach can be seen as a special case of ATS, as logit norms may be seen as an input-dependent temperature; however A. Balanya et al. (2023) investigate a different temperature function than ours and focuses on calibration. Other logit-based confidence estimators proposed for calibration and OOD detection include (Liu et al., 2020; Tomani et al., 2022; Rahimi et al., 2022; Neumann et al., 2018; Gonsior et al., 2022).
Normalizing the logits with the \(L_{2}\) norm before applying the softmax function was used in (Kornblith et al., 2021) and later proposed and studied in (Wei et al., 2022) as a training technique (combined with TS) to improve OOD detection and calibration. A variation where the logits are normalized to unit variance was proposed in (Jiang et al., 2023) to accelerate training.
Benchmarking of models in their performance at selective classification/misclassification detection has been done in (Galil et al., 2023; Ding et al., 2020); however, these works mostly consider the
MSP as the confidence estimator. In the context of calibration, Wang et al. (2021) and Ashukha et al. (2020) have argued that models should be compared after simple post-hoc optimizations, since models that appear worse than others can sometimes easily be improved by methods such as TS. Here we advocate and provide further evidence for this approach in the context of selective classification.
## 3 Problem Formulation and Background
### Selective classification
Let \(P\) be an unknown distribution over \(\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\) is the input space and \(\mathcal{Y}=\{1,\ldots,C\}\) is the label space, and \(C\) is the number of classes. A _classifier_ is a prediction function \(h:\mathcal{X}\rightarrow\mathcal{Y}\). The classifier's (true) _risk_ is \(R(h)=E_{P}[\ell(h(x),y)]\), where \(\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}^{+}\) is a given loss function, for instance, the 0/1 loss \(\ell(\hat{y},y)=\mathds{1}[\hat{y}\neq y]\), where \(\mathds{1}[\cdot]\) denotes the indicator function.
A _selective classifier_(Geifman and El-Yaniv, 2017) is a pair \((h,g)\), where \(h\) is a classifier and \(g:\mathcal{X}\rightarrow\mathbb{R}\) is a _confidence estimator_ (also known as _confidence score function_ or _confidence-rate function_), which quantifies the model's confidence in its prediction for a given input. For some fixed threshold \(t\), given an input \(x\), the selective model makes a prediction \(h(x)\) if \(g(x)\geq t\); otherwise it abstains from making a prediction. We say that \(x\) is _selected_ in the former case and _rejected_ in the latter. A selective model's _coverage_\(\phi(h,g)=P[g(x)\geq t]\) is the probability mass of the selected samples in \(\mathcal{X}\), while its _selective risk_\(R(h,g)=E_{P}[\ell(h(x),y)\mid g(x)\geq t]\) is its risk restricted to the selected samples. In particular, a model's risk equals its selective risk at _full coverage_ (i.e., for \(t\) such that \(\phi(h,g)=1\)). These quantities can be evaluated empirically given a test dataset \(\{(x_{i},y_{i})\}_{i=1}^{N}\) drawn i.i.d. from \(P\), yielding the _empirical coverage_\(\hat{\phi}(h,g)=(1/N)\sum_{i=1}^{N}\mathds{1}[g(x_{i})\geq t]\) and the _empirical selective risk_
\[\hat{R}(h,g)=\frac{\sum_{i=1}^{N}\ell(h(x_{i}),y_{i})\mathds{1}[g(x_{i})\geq t ]}{\sum_{i=1}^{N}\mathds{1}[g(x_{i})\geq t]}. \tag{1}\]
Note that, by varying \(t\), it is generally possible to trade off coverage for selective risk, i.e., a lower selective risk can usually (but not necessarily always) be achieved if more samples are rejected. This tradeoff is captured by the _risk-coverage (RC) curve_(Geifman and El-Yaniv, 2017), a plot of \(\hat{R}(h,g)\) as a function of \(\hat{\phi}(h,g)\).
While the RC curve provides a full picture of the performance of a selective classifier, it is convenient to have a scalar metric that summarizes this curve. A commonly used metric is the _area under the RC curve_(AURC) (Ding et al., 2020; Geifman et al., 2019). However, when comparing selective models, if two RC curves cross, then each model may have a better selective performance than the other depending on the operating point chosen, which cannot be captured by the AURC. Another interesting metric, which forces the choice of an operating point, is the _selective accuracy constraint_(SAC) (Galil et al., 2023), defined as the minimum coverage required for a model to achieve a specified accuracy.
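For concreteness, the empirical RC curve of Eq. (1) and the AURC can be computed from per-sample confidences and 0/1 losses as in the sketch below; averaging the selective risk over the discrete coverage points is one common way to approximate the area.

```python
import numpy as np

def rc_curve(confidences, errors):
    """Empirical risk-coverage curve and AURC from confidences and 0/1 losses
    (errors[i] = 1 if sample i is misclassified)."""
    order = np.argsort(-confidences)                 # most confident first
    errors = np.asarray(errors, dtype=float)[order]
    n = len(errors)
    coverage = np.arange(1, n + 1) / n
    selective_risk = np.cumsum(errors) / np.arange(1, n + 1)
    aurc = selective_risk.mean()                     # area under the RC curve
    return coverage, selective_risk, aurc
```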
Misclassification detection (Hendrycks and Gimpel, 2016), which refers to the problem of discriminating between correct and incorrect predictions made by a classifier, is closely related to selective classification. Both tasks rely on ranking predictions according to their confidence estimates, where correct predictions should be ideally separated from incorrect ones. More precisely, if \((x_{1},y_{1}),(x_{2},y_{2})\in\mathcal{X}\times\mathcal{Y}\) are such that \(\ell(h(x_{1}),y_{1})>\ell(h(x_{2}),y_{2})\), then we would like to have \(g(x_{1})<g(x_{2})\), i.e., an optimal \(g\) orders samples in decreasing order of their losses. In the case of the 0/1 loss, a natural metric of ranking performance (Galil et al., 2023) is the area under the ROC curve (AUROC) (Fawcett, 2006) for misclassification detection. Note that this metric is blind to the classifier performance and focuses exclusively on the quality of the confidence estimates, i.e., given a fixed classifier \(h\), different confidence estimators \(g\) can be compared in their ranking performance. Thus, misclassification detection can also be seen as a proxy problem on which to evaluate confidence estimators for selective classification.
### Calibration
Consider a classifier \(h:\mathcal{X}\rightarrow\mathcal{Y}\) and a confidence estimator \(\pi:\mathcal{X}\rightarrow[0,1]\) (which need not be the same function as the confidence estimator \(g\) used for selective classification). We say that \(\pi\) is
perfectly calibrated_(Guo et al., 2017; Gawlikowski et al., 2022) if
\[P[h(x)=y\mid\pi(x)=p]=p,\quad\forall p\in[0,1],\quad(x,y)\sim P. \tag{2}\]
In practice, empirical measures of calibration are used, based on a test dataset \(\{(x_{i},y_{i})\}_{i=1}^{N}\) drawn i.i.d. from \(P\). The most popular one is arguably the _expected calibration error_(ECE) (Naeini et al., 2015), which is computed by grouping predictions into \(M\) equal-sized interval bins \(B_{m}=\{i\in\{1,\ldots,N\}:\pi(x_{i})\in(\frac{m-1}{M},\frac{m}{M}]\}\), \(m=1,\ldots,M\), and then taking a weighted average of the difference between accuracy and confidence in each bin:
\[\text{ECE}=\sum_{m=1}^{M}\frac{|B_{m}|}{N}\left|\text{acc}(B_{m})-\text{conf}( B_{m})\right| \tag{3}\]
where \(\text{acc}(B_{m})=\frac{1}{|B_{m}|}\sum_{i\in B_{m}}\mathds{1}[h(x_{i})=y_{i}]\) and \(\text{conf}(B_{m})=\frac{1}{|B_{m}|}\sum_{i\in B_{m}}\pi(x_{i})\).
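A direct implementation of Eq. (3) with \(M\) equal-width bins might look as follows (array-based, for clarity):

```python
import numpy as np

def expected_calibration_error(confidences, correct, M=15):
    """ECE of Eq. (3): confidences in [0, 1], correct[i] = 1 if prediction i is right."""
    ece, n = 0.0, len(confidences)
    edges = np.linspace(0.0, 1.0, M + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)   # bin B_m
        if mask.any():
            acc = correct[mask].mean()                    # acc(B_m)
            conf = confidences[mask].mean()               # conf(B_m)
            ece += mask.sum() / n * abs(acc - conf)
    return ece
```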
### Confidence estimation
From now on we restrict attention to classifiers that can be decomposed as \(h(x)=\arg\max_{k\in\mathcal{Y}}z_{k}\), where \(\mathbf{z}=f(x)\) and \(f:\mathcal{X}\rightarrow\mathbb{R}^{C}\) is a neural network. The network output \(\mathbf{z}\) is referred to as the (vector of) _logits_ or _logit vector_, due to the fact that it is typically applied to a softmax function
\[\sigma:\mathbb{R}^{C}\rightarrow[0,1]^{C},\qquad(\sigma(\mathbf{z}))_{k}= \frac{e^{z_{k}}}{\sum_{j=1}^{C}e^{z_{j}}},\quad k\in\{1,\ldots,C\} \tag{4}\]
to obtain an estimate of the posterior distribution \(P[y|x]\).
The most popular confidence estimator is arguably the _maximum softmax probability_(MSP) (Ding et al., 2020), also known as _maximum class probability_(Corbiere et al., 2022) or _softmax response_(Geifman and El-Yaniv, 2017)
\[g(x)=\text{MSP}(\mathbf{z})\triangleq\max_{k\in\mathcal{Y}}{(\sigma(\mathbf{ z}))_{k}}. \tag{5}\]
Other representative examples are given in (Belghazi and Lopez-Paz, 2021).
### Temperature Scaling
Temperature scaling (TS) (Guo et al., 2017) is a post-processing method that consists in, for a fixed trained classifier, transforming the logits as \(\mathbf{z}^{\prime}=\mathbf{z}/T\), before applying the softmax function. The parameter \(T\), called the temperature, is then optimized over a hold-out dataset \(\{(x_{i},y_{i})\}_{i=1}^{N}\) (not used during training of the classifier). An important property of this method is that it does not change the model's predictions. The conventional way of applying TS, as proposed in (Guo et al., 2017) for calibration and referred to here as _standard_ TS, consists in optimizing \(T\) with respect to the negative log-likelihood (NLL) (Murphy, 2022)
\[\mathcal{L}=-\sum_{i=1}^{N}\log{((\sigma(\mathbf{z}_{i}/T))_{y_{i}})} \tag{6}\]
where \(\mathbf{z}_{i}=f(x_{i})\).
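Standard TS can be implemented in a few lines; here we use a grid search over \(T\) for clarity, whereas Guo et al. (2017) use gradient-based optimization of the NLL.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, grid=torch.linspace(0.1, 5.0, 200)):
    """Pick the temperature T minimizing the hold-out NLL of Eq. (6).
    cross_entropy(logits / T, labels) equals the NLL of softmax(z / T)."""
    nlls = torch.stack([F.cross_entropy(logits / T, labels) for T in grid])
    return grid[int(nlls.argmin())]
```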
## 4 Post-hoc Confidence Estimation for Selective Classification
### Proposed Method
Refer to the notation of section 3. Our proposed method for post-hoc confidence estimation is given by
\[g(x)=\text{MSP}\left(\beta\frac{\mathbf{z}-\mu(\mathbf{z})}{\|\mathbf{z}-\mu (\mathbf{z})\|_{p}}\right) \tag{7}\]
where \(\beta>0\), \(\mu(\mathbf{z})\triangleq(z_{1}+\cdots+z_{C})/C\), and \(\|\mathbf{z}\|_{p}\triangleq(|z_{1}|^{p}+\cdots+|z_{C}|^{p})^{1/p}\) is the \(p\)-norm of \(\mathbf{z}\). Thus, our method consists of transforming the logits through centralization (\(\mathbf{z}\leftarrow\mathbf{z}-\mu(\mathbf{z})\)),
\(p\)-normalization (\(\mathbf{z}\leftarrow\mathbf{z}/\|\mathbf{z}\|_{p}\)) and temperature scaling (\(\mathbf{z}\leftarrow\beta\mathbf{z}\)), followed by taking the MSP. To ensure that our method can never cause harm, we augment the definition of \(p\)-norm with
\[\|\mathbf{z}\|_{\emptyset}=1 \tag{8}\]
so that the allowed range for \(p\) is \(\mathbb{R}\cup\{\emptyset\}\).
The hyperparameters \(p\) and \(\beta\) are optimized (e.g., via grid search) on a hold-out set \(\{(x_{i},y_{i})\}_{i=1}^{N}\), using the AURC (or the AUROC) directly as the objective. In practice, the logits \(\mathbf{z}_{i}=f(x_{i})\) of all hold-out samples are pre-computed and stored, so that any metric based on them can be computed very quickly.
We also propose a simple heuristic for choosing \(\beta\) (given \(p\)):
\[\beta=\frac{1}{N}\sum_{i=1}^{N}\,\|\mathbf{z}_{i}\|_{p}. \tag{9}\]
With this choice, we observe that only \(p\) needs to be tuned. In our experiments, we noticed that it suffices to evaluate a few values of \(p\), such as \(p\in\{\emptyset,2,3,4,5,6\}\), to obtain similar results as with the full optimization of \(\beta\) and \(p\).
Our proposed method with the heuristic choice of \(\beta\) is named \(p\)-NormSoftmax, while the method with full optimization of \(\beta\) is denoted \(p\)-NormSoftmax*. Variations with the MSP replaced by other confidence estimators that take logits as input are discussed in Appendix A.
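A sketch of both variants is given below: `p_norm_softmax_conf` implements Eq. (7) (with `p=None` realizing the \(\|\mathbf{z}\|_{\emptyset}=1\) convention of Eq. (8)), and `fit_p` tunes \(p\) with the heuristic \(\beta\) of Eq. (9), using an AURC routine such as the `rc_curve` sketched in Section 3.1.

```python
import torch

def p_norm_softmax_conf(logits, p, beta):
    """Confidence of Eq. (7): centralize, p-normalize, scale by beta, take the MSP."""
    z = logits - logits.mean(dim=1, keepdim=True)
    norm = torch.norm(z, p=p, dim=1, keepdim=True) if p is not None else 1.0
    return torch.softmax(beta * z / norm, dim=1).max(dim=1).values

def fit_p(logits, labels, ps=(None, 2, 3, 4, 5, 6)):
    """Grid search over p on hold-out logits, with beta set by the heuristic Eq. (9)."""
    errors = (logits.argmax(dim=1) != labels).float().numpy()
    best = None
    for p in ps:
        z = logits - logits.mean(dim=1, keepdim=True)
        beta = torch.norm(z, p=p, dim=1).mean().item() if p is not None else 1.0
        conf = p_norm_softmax_conf(logits, p, beta).numpy()
        _, _, aurc = rc_curve(conf, errors)   # AURC objective (see Section 3.1 sketch)
        if best is None or aurc < best[0]:
            best = (aurc, p, beta)
    return best  # (aurc, p, beta)
```

Note that with the heuristic, `p=None` yields \(\beta=E[\|\mathbf{z}\|_{\emptyset}]=1\) and hence reduces to the MSP baseline.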
The rationale for each component of our method is given in the following subsections.
### Temperature Scaling
As mentioned before, standard TS can harm selective classification performance in some cases (Galil et al., 2023; Zhu et al., 2022); thus, we propose to optimize selective classification metrics directly. Since a single parameter needs to be tuned, this can easily be done via grid search.
Figure 2 shows the behavior of different metrics as a function of the temperature \(T\) for a ViT-H-14 (Dosovitskiy et al., 2021) model evaluated on ImageNet. In this case, optimizing the NLL can lead to better, but not optimal, selective classification performance, measured in terms of AURC and AUROC. It can also be seen that optimizing the ECE does not necessarily help, illustrating our point that these two problems should be treated independently. Finally, AURC and AUROC exhibit practically identical behavior with temperature, suggesting that either is equally suitable as the objective.
### \(p\)-Normalization
A natural way to extend TS is to allow for an input-dependent temperature (A. Balanya et al., 2023):
\[\mathbf{z}^{\prime}=\frac{\mathbf{z}}{T(\mathbf{z})}. \tag{10}\]
We propose to use \(T(\mathbf{z})=\|\mathbf{z}-\mu(\mathbf{z})\|_{p}/\beta\), so that high-norm inputs are penalized, reducing the confidence of the corresponding predictions. This idea is inspired by (Wei et al., 2022), which uses \(p=2\) in the context of model training.
Wei et al. (2022) argued that, as training progresses, a model will tend to become overconfident on correctly classified training samples by increasing \(\|\mathbf{z}\|_{2}\), a phenomenon that they confirmed
experimentally. Here we remark that their argument holds unchanged for any \(p\), as nothing in their analysis requires \(p=2\). On the other hand, as discussed in Jiang et al. (2023), this argument is more compelling when the logit vector \(\mathbf{z}\) is centralized, since a non-zero mean has no impact on the training loss but affects the \(p\)-norm.
When applying \(p\)-normalization as a post-hoc method, we expect a similar effect: if the model has become too overconfident (through high \(p\)-norm) on input regions that appear as incorrect predictions on the test set, then \(p\)-normalization may reduce this overconfidence, improving selective classification performance. Otherwise, \(p\)-normalization may not help, so we keep the option of \(p=\emptyset\) and recover TS as a special case.
### A Heuristic for Choosing \(\beta\)
Intuitively, we should penalize predictions whose \(p\)-norm is too high. But high compared to what? We propose to use the expected \(p\)-norm of logits as a reference point, choosing
\[\beta=E[\|\mathbf{z}\|_{p}]. \tag{11}\]
This implies that \(T(\mathbf{z})=\|\mathbf{z}\|_{p}/E[\|\mathbf{z}\|_{p}]\) and thus \(\|\mathbf{z}^{\prime}\|_{p}=\beta=E[\|\mathbf{z}\|_{p}]\), so the expected \(p\)-norm of logits does not change after \(p\)-normalization. Moreover, we have \(E[T(\mathbf{z})]=1\). This can be interpreted as trying to change the logits as little as possible, since most of the logits will have their temperature approximately unchanged.
More details and results of the heuristic are presented in Appendix B.
## 5 Experiments
All the experiments2 regarding the proposed method and the subsequent investigations were conducted using the open-source library PyTorch (Paszke et al., 2019) and all of its provided pre-trained classifiers on ImageNet (Deng et al., 2009). Additionally, some models from the Wightman (2019) repository were utilized, particularly the ones highlighted by Galil et al. (2023). The list of models, together with all per-model results, is presented in Appendix E. In total, 84 ImageNet models were used in the experiments. The ImageNet validation set was randomly split into 5000 hold-out images for post-hoc optimization and 45000 for tests and comparisons. Investigations of the stability of this split are presented in Appendix C.
Footnote 2: Code can be found at [https://github.com/lfpc/pNormSoftmax](https://github.com/lfpc/pNormSoftmax)
Unless specified otherwise, we always use AURC as the objective when optimizing \(p\)-NormSoftmax.
### Comparison of methods
In Figure 3, we show an example of the RC curves of the proposed methods for a ResNext101-32x8d (Xie et al., 2017). While standard TS (TS optimizing the NLL) outperforms the baseline, optimizing the AURC directly (TS-AURC) achieves better results. Moreover, \(p\)-NormSoftmax leads to even better performance, practically identical to that of \(p\)-NormSoftmax*.
Indeed, the conclusion that full optimization of \(\beta\) is unnecessary when using the proposed heuristic (while always optimizing \(p\)) held for all considered models. The average AUROC gain of \(\beta\) optimization with respect to the heuristic is 0.0002, and the maximum across all analyzed models is 0.0019. These gains are imperceptible in the RC curve and may be considered negligible. The results for all evaluated models are summarized in Table 1 and presented in more detail in Appendix E.
Figure 3: RC curves of the proposed methods, the baseline and standard TS for a ResNext101-32x8d
Similar results are observed for CIFAR-100 (Krizhevsky, 2009) and are presented in Appendix F.
One important aspect of post-hoc methods is its data efficiency (Zhang et al., 2020), i.e., the efficiency of the method in learning with few data. Appendix C presents experiments when a fraction of the hold-out set is used, and lead us to conclude that the proposed \(p\)-NormSoftmax is extremely data efficient, converging to the optimal with few samples (\(<2000\) for ImageNet).
### Comparison of models
Galil et al. (2023) showed that some models are much better than others at selective classification. An example is shown in the left plot of Figure 1: although EfficientNet-V2-XL (Tan and Le, 2021) has better accuracy than ViT-B/32 SAM (Chen et al., 2022), the latter is better at identifying misclassifications and thus better over most of the RC curve. However, the right plot shows that, after \(p\)-NormSoftmax optimization, the ViTs gain little while the EfficientNet gains substantially, becoming better than ViT-B/32 SAM along the RC curve.
In Figures 4a and 4b, the AURC and AUROC are plotted against accuracy for each model. While for the baseline there are models with higher accuracy but worse (higher) AURC than others, this no longer happens after the models are optimized with \(p\)-NormSoftmax. The Spearman correlation between the AURC and the accuracy goes from 0.9169 to 0.9992, indicating that, unlike for the baseline, the selective classification performance of the optimized models is almost entirely determined by their accuracy at full coverage.
We can also observe that, after the optimization, the AUROC for all models lies within the range \([0.8421,0.8859]\). This small range suggests that all models are at roughly the same level of misclassification detection, although we can still see some dependency on accuracy (better predictive models are slightly better at predicting their own failures).
### Robustness to distribution shift
Up to this point, results have been evaluated on the ImageNet validation set, whose data distribution is generally considered similar to that of the training set. However, a reliable model must also be robust to dataset shift (Ovadia et al., 2019). To evaluate a model's performance under data shift, we test our methods on ImageNet-C (Hendrycks and Dietterich, 2018), which consists of 15 different corruptions of ImageNet's validation set. We follow the standard approach of using this dataset only for inference;
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{AURC gain [\%]} & \multicolumn{2}{c}{AUROC gain [\(\times\)100]} \\ \hline Method & Mean & Max & Mean & Max \\ \hline TS-AURC & 12.65 & 44.32 & 1.70 & 9.32 \\ \(p\)-NormSoftmax & 15.90 & 48.46 & 2.61 & 10.60 \\ \(p\)-NormSoftmax* & 16.02 & 48.52 & 2.63 & 10.65 \\ \hline \hline \end{tabular}
\end{table}
Table 1: AURC and AUROC gains for ImageNet. AURC gains are calculated as the reduction of AURC relative to the baseline. For both, higher is better.
Figure 4: AURC and AUROC of all ImageNet models with respect to their accuracy. \(\rho\) is the Spearman correlation between the metric and the corresponding accuracy, and the color indicates the value of \(p\) that optimizes each model.
thus, the post-hoc methods are optimized using only the 5000 hold-out images from the uncorrupted ImageNet validation set.
Generally, classifiers lose accuracy in the presence of data shift. Hence, we use SAC as the performance metric, with the target accuracy chosen as the model's accuracy on the ImageNet validation data at full coverage. Table 2 shows the results when \(p\)-NormSoftmax is applied to a ResNet-50 (He et al., 2016): \(p\)-NormSoftmax enhances the model's selective classification performance under data shift at all corruption levels.
### When--and why--is \(p\)-NormSoftmax beneficial?
From the evaluated results (see Figure 4b), it can be noticed that while for some models the MSP baseline is a poor confidence estimator and the \(p\)-NormSoftmax method yields exceptional AUROC gains, for others the baseline is already a seemingly optimal selective mechanism. This raises the question: what makes some models have inferior baselines? Experiments on the nature of these models are presented in Appendix D, along with possible explanations. In summary, models generating logits with high average norms tend to be the best at misclassification detection, while those with low average norms exhibit the largest gains when \(p\)-NormSoftmax is applied.
## 6 Conclusion
We considered the problem of selective classification for deep neural networks. To improve the selection mechanism of a given trained model, we proposed \(p\)-NormSoftmax, a post-hoc method for enhancing the misclassification detection of neural network classifiers. Our method achieves an improvement in AURC of 16% on average over the baseline for the evaluated classifiers trained on ImageNet, reaching almost 50% for some specific models.
Furthermore, our analysis revealed that, after applying \(p\)-NormSoftmax, the models exhibit similar levels of misclassification detection performance. Consequently, a model's selective classification performance is almost completely determined by its accuracy at full coverage, which suggests that previously observed differences in selective performance across models are mostly due to the use of sub-optimal confidence estimators. Additionally, \(p\)-NormSoftmax exhibits impressive data efficiency, owing to the fact that a single parameter needs to be tuned, and it achieves satisfactory gains for selective classification under data shift. It is also worth mentioning that our method is compatible with classifiers constructed specifically to improve confidence estimation, including ensembles, specialized architectures, and models with specific training routines.
Finally, we point out some possible reasons, and report initial investigations, as to why and in which circumstances \(p\)-NormSoftmax achieves gains. For future work, we intend to explore more deeply why post-hoc normalization can lead to improved selective mechanisms and to evaluate our method on different tasks.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{6}{c}{Corruption level} \\ \cline{3-8} & Method & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline Accuracy [\%] & - & 80.86 & 68.56 & 60.03 & 51.85 & 39.44 & 27.09 \\ \hline Coverage & Baseline & 100 & 75.97 & 56.79 & 41.43 & 21.65 & 9.09 \\ (SAC) [\%] & TS-AURC & 100 & 77.13 & 60.51 & 45.49 & 27.41 & 13.32 \\ & \(p\)-NormSoftmax & 100 & 78.49 & 62.35 & 47.63 & 29.59 & 15.62 \\ & \(p\)-NormSoftmax* & 100 & 78.52 & 62.39 & 47.76 & 29.67 & 15.66 \\ \hline \hline \end{tabular}
\end{table}
Table 2: \(p\)-NormSoftmax applied to a ResNet-50 under dataset shift. The target accuracy is the one achieved for corruption level 0 (i.e., 80.86%). |
2305.00663 | Activation Functions Not To Active: A Plausible Theory on Interpreting
Neural Networks | Researchers commonly believe that neural networks model a high-dimensional
space but cannot give a clear definition of this space. What is this space?
What is its dimension? And does it have finite dimensions? In this paper, we
develop a plausible theory on interpreting neural networks in terms of the role
of activation functions in neural networks and define a high-dimensional (more
precisely, an infinite-dimensional) space that neural networks including
deep-learning networks could create. We show that the activation function acts
as a magnifying function that maps the low-dimensional linear space into an
infinite-dimensional space, which can distinctly identify the polynomial
approximation of any multivariate continuous function of the variable values
being the same features of the given dataset. Given a dataset with each example
of $d$ features $f_1$, $f_2$, $\cdots$, $f_d$, we believe that neural networks
model a special space with infinite dimensions, each of which is a monomial
$$\prod_{i_1, i_2, \cdots, i_d} f_1^{i_1} f_2^{i_2} \cdots f_d^{i_d}$$ for some
non-negative integers ${i_1, i_2, \cdots, i_d} \in
\mathbb{Z}_{0}^{+}=\{0,1,2,3,\ldots\} $. We term such an infinite-dimensional
space a $\textit{ Super Space (SS)}$. We see such a dimension as the minimum
information unit. Every neuron node previously through an activation layer in
neural networks is a $\textit{ Super Plane (SP) }$, which is actually a
polynomial of infinite degree. This $\textit{ Super Space }$ is something like
a coordinate system, in which every multivalue function can be represented by a
$\textit{ Super Plane }$. We also show that training NNs could at least be
reduced to solving a system of nonlinear equations. | John Chiang | 2023-05-01T05:23:58Z | http://arxiv.org/abs/2305.00663v2 | # Activation Functions Not To Active: A Plausible Theory on Interpreting Neural Networks
###### Abstract
Researchers commonly believe that neural networks model a high-dimensional space but cannot give a clear definition of this space. What is this space? What is its dimension? And does it have finite dimensions? In this paper, we develop a plausible theory on interpreting neural networks in terms of the role of activation functions in neural networks and define a high-dimensional (more precisely, an infinite-dimensional) space that neural networks including deep-learning networks could create. We show that the activation function acts as a magnifying function that maps the low-dimensional linear space into an infinite-dimensional space, which can distinctly identify the polynomial approximation of any multivariate continuous function of the variable values being the same features of the given dataset.
Given a dataset with each example of \(d\) features \(f_{1}\), \(f_{2}\), \(\cdots\), \(f_{d}\), we believe that neural networks model a special space with infinite dimensions, each of which is a monomial
\[\prod_{i_{1},i_{2},\cdots,i_{d}}f_{1}^{i_{1}}f_{2}^{i_{2}}\cdots f_{d}^{i_{d}}\]
for some non-negative integers \(i_{1},i_{2},\cdots,i_{d}\in\mathbb{Z}_{0}^{+}=\{0,1,2,3,\ldots\}\). We term such an infinite-dimensional space a _Super Space (SS)_. We see such a dimension as the minimum information unit. Every neuron node that has passed through an activation layer in a neural network is a _Super Plane (SP)_, which is actually a polynomial of infinite degree.
This _Super Space_ is something like a coordinate system, in which every multivalue function can be represented by a _Super Plane_. From this perspective, a neural network for regression tasks can be seen as an extension of linear regression, i.e. an advanced variant of linear regression with infinite-dimensional features.
We also show that training NNs could at least be reduced to solving a system of nonlinear equations.
## 1 Introduction
### Background
Neural Networks (NN), including the currently popular deep-learning ones, are playing a more and more significant role in important real-world domains, such as image classification, face recognition, and machine translation. Despite the great success of neural networks over the past decade, NNs are routinely considered a "black box" due to the lack of interpretability, leaving open the big questions of how they arrive at predictions or decisions and why they perform so well.
### Previous Work
Universal Approximation Theorem [2] and Stone-Weierstrass Theorem [3] show that any continuous and bounded function can be approximated by NNs and polynomials, respectively. It might at first seem that our work here merely confirms this fact. However, we demonstrate a more subtle but much closer connection than that. We believe that the NN actually models an infinite-dimensional space, which we term _Super Space (SS)_. Any multivalue polynomial approximating some multivalue function can be represented in this special space by a hyperplane with infinite dimensions, which we name _Super Plane (SP)_. We are interested in both the NN training and inference processes themselves; we show that training an NN amounts to searching for a proper SP to represent a polynomial approximation, and that NN inference simply computes the outputs of several finite-dimensional hyperplanes approximating some SPs. Just as one calculates a limited sum of a Taylor series in practice, we can approximate this infinite-dimensional space and its planes by a hyperspace and hyperplanes of finite dimensions in order to analyze and use them. This special space can and should be approximated in real-world applications by a hyperspace with finite dimensions, whose number depends on how many outputs there are in the last layer. We point out here that such an SS approximation is something like a Cartesian coordinate system and that such an SP approximation is a multivalue polynomial.
Our work is primarily on general feedforward NNs. However, we believe our ideas could be adapted to specialized networks such as convolutional NNs (CNNs), recurrent NNs, and so on. For instance, we view the convolutional and pooling layers in CNNs as largely playing the role of manipulating several SPs together, performing the feature selection to be carried into the polynomial approximations of some planes.
It may well be that such an infinite-dimensional space has relationships with polynomial regression [4, 1], support vector machines, and so on. However, this work aims not to explore the possible relationship between this special space and other methods, nor does it aim to compare other methods to SS and NN in various performances. There is no implied claim that SS outperforms other machine learning methods. Instead, we claim that NN \(\leftrightarrow\) SS implies a very close direct relationship between SS and NNs, and explore the consequences and possible applications based on this close relationship.
### Contributions
The present work will make the following contributions:
1. We will show that in any NN, at each neuron with an activation function, there is a close correspondence to an infinite-dimensional hyperspace (a Super Space); in essence, NNs, including the popular form of many-layered deep learning networks, model a unique SS that can represent every multivariate polynomial of some certain variable values, and the output of such a neuron represents a Super Plane (SP). We refer to this clear correspondence here as NN \(\leftrightarrow\) SS.
2. An important aspect of NN \(\leftrightarrow\) SS is that the Super Space has infinite dimensions, each of which is a monomial. The SP in this space is like the line \(y=x+2\) in the _X-Y_ Cartesian coordinate system. In other words, our findings could be interpreted as saying that the end result of an NN can be represented by some coordinate system, except that this system has infinite dimensions.
3. We exploit NN \(\leftrightarrow\) SS in order to learn about the general properties of NNs via the basic knowledge of the properties of ordinary coordinate systems, such as the two-dimensional Cartesian system and the three-dimensional Cartesian system. This would turn out to provide new insights into aspects such as how to deal with categorical data.
4. Property (1) suggests that in many applications, one might simply fit a super-plane approximation (namely, a polynomial approximation) in the first place, bypassing NNs. This would avoid the difficulties of selecting among various architectures, choosing numerous hyper-parameters, and so on.
Point (1) is the core idea behind this work, as it shows a much tighter connection of NNs to SS than previously reported. The output of every last-layer neuron, and of every hidden-layer neuron that has passed through an activation layer, is a Super Plane, something like a polynomial but not exactly a polynomial due to its
infinite degree. Prior literature [2] has noted some theoretical connections between NNs and polynomials, but our contributions go much deeper: we show that, in essence, conventional NNs and other deep NNs actually model an infinite-dimensional hyperspace, and that the outputs of NNs are multiple infinite-dimensional hyperplanes that can be approximated by finite ones, namely polynomials of a certain degree. Our focus will be on the activation function. Using a formal mathematical analysis of arbitrary activation functions, we show _why NNs are essentially a form of Super Space_.
Our approach applies not only to general feedforward NNs but can also involve specialized networks such as convolutional NNs (CNNs), recurrent NNs, and so on. We see NNs as a special space with infinite dimensions and their outputs as special planes with infinite dimensions, rather than regarding NNs as polynomial regression. In real-world applications, such spaces and planes will be respectively replaced by hyperspaces and hyperplanes with finite dimensions. In these cases, we have: if the activation function is a polynomial of infinite degree (an unlimited series) or is implemented by such one, an NN exactly models an infinite-dimensional space; otherwise, if the activation function is a polynomial of a certain degree, an NN still models an infinite-dimensional space but with only finitely many non-zero-coefficient dimensions--the other dimensions simply all have zero coefficients.
## 2 Methodology
In this paper, "activation function" refers only to the modern popular nonlinear functions, such as the sigmoid, ReLU, and tanh functions, which cannot be represented as \(\Phi(x)=a\cdot x+b\) for any floating-point numbers \(a\) and \(b\). Such activation functions exclude polynomial activations of a fixed finite degree, but may be polynomials of infinite degree.
### Polynomial Approximation of Any Activation Function
Taylor polynomials used to be commonly adopted to approximate activation functions, such as the sigmoid function, at some points by calculating the function's derivatives. However, they are not a good candidate for approximating a whole function over a large range because they only provide a local approximation near a certain point. Taking into account the overall behavior of the function and not just the behavior near a specific point, the least-squares approximation, as a global approximation method minimizing the mean squared error, is more suitable for approximating a function over a large range.
We develop a new approach to approximate any activation function, which can be briefly described in two steps. Given an activation function \(\Phi(x)\): (1) we use the Fourier series to fit the function \(\Phi(x)\) over the prescribed range, resulting in a Fourier expansion consisting of only sine and cosine functions; and (2) we adopt the Taylor series to approximate the trigonometric functions in the Fourier expansion from the first step, obtaining the desired polynomial approximation.
Fourier series is an expansion of a periodic function \(f(x)\) in terms of an infinite sum of sines and cosines. A function with a finite number of jump discontinuities that is smooth between them is called piecewise smooth [5], and every piecewise smooth \(f(x)\) has a Fourier series. For a function \(f(x)\) of period \(2l\), we can form its Fourier series \(f(x)\sim\frac{1}{2}a_{0}+\sum_{n=1}^{\infty}(a_{n}\cos\frac{n\pi}{l}x+b_{n}\sin\frac{n\pi}{l}x)\), where \(a_{n}=\frac{1}{l}\int_{-l}^{l}f(x)\cos\frac{n\pi}{l}x\,dx,(n=0,1,2,\ldots)\) and \(b_{n}=\frac{1}{l}\int_{-l}^{l}f(x)\sin\frac{n\pi}{l}x\,dx,(n=1,2,\ldots)\). The sum of the Fourier series converges to \(f(x)\) at points of continuity and to the arithmetic mean of the right-hand and left-hand limits at points of discontinuity. In real-world applications, the infinite series \(\sum_{n=1}^{\infty}\) must usually be replaced by a finite one \(\sum_{n=1}^{N}\) to approximate the original function, where \(N\) is a constant.
For a function defined only over a limited interval, like the sigmoid function over the domain \([-8,8]\), it can be extended from its original domain onto the whole X-axis as a periodic function. We then expand this periodic function in a trigonometric series, which converges to \(f(x)\) as long as \(f(x)\) is piecewise smooth. In this case, the sigmoid function over the domain \([-l,l]\) has the Fourier series \(f(x)\sim\frac{1}{2}a_{0}+\sum_{n=1}^{\infty}(a_{n}\cos\frac{n\pi}{l}x+b_{n}\sin\frac{n\pi}{l}x)=\frac{1}{2}+\sum_{n=1}^{k}(b_{n}\sin\frac{n\pi}{l}x)\), where \(k\) is the truncation degree of the trigonometric polynomial, \(\frac{1}{2}a_{0}=0.5\), and \(a_{n}=0\) for \(n\geq 1\).
As we mentioned before, Taylor series is suitable for finding the value of a function \(f(x)\) at a certain point \(x_{0}\), but not for fitting the function itself over a wide range. For an infinite differentiable function \(f(x)\), the Taylor series of the function \(f(x)\) at \(a\) (or centered at \(a\)) is: \(f(x)=\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}{(x-a)}^{n}\)
where \(f^{(n)}(a)\) is the \(n\)-th derivative of \(f(x)\) evaluated at the point \(a\). The Taylor series in the special case \(a=0\) is given the name Maclaurin series. The Maclaurin series for \(\sin x\) and \(\cos x\) are respectively \(\sin x=\sum_{n=0}^{\infty}\left(-1\right)^{n}\frac{x^{2n+1}}{\left(2n+1\right)!}\) and \(\cos x=\sum_{n=0}^{\infty}\left(-1\right)^{n}\frac{x^{2n}}{\left(2n\right)!}\) for all real \(x\). For practical use, a limited sum \(\sum_{n=0}^{K}\) is enough to calculate the value of the function at some point, where \(K\) is a constant that depends on how much accuracy we want. We then substitute these two polynomials into the finite Fourier series expansion from the first step, obtaining the final polynomial approximation of the activation function over a certain domain.
It is straightforward to come to the conclusion that **to approximate any activation function over the whole real domain we need an infinite series polynomial** and that **any (multivariate) function with a Fourier series can be approximated to arbitrary precision by a (multivariate) polynomial**.
Based on the above observation, we make an assumption: **a polynomial of infinite degree that properly approximates an activation function is equal to the activation function itself**, since the difference at any point between the activation function and the polynomial approximation can be made arbitrarily small.
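As a rough numerical sketch of the two-step recipe above (our own illustration, with the truncation orders \(N\) and \(K\) chosen ad hoc), the following approximates the sigmoid on \([-8,8]\) by first fitting a truncated Fourier series, then replacing each sine by its truncated Maclaurin series, yielding an ordinary polynomial in \(x\):

```python
import math
import numpy as np

l, N, K = 8.0, 5, 30                      # half-period, Fourier and Taylor orders
x = np.linspace(-l, l, 4001)
f = 1.0 / (1.0 + np.exp(-x))              # sigmoid, the activation to approximate

# Step 1: numerical Fourier coefficients (a_n = 0 for n >= 1, as noted above)
a0 = np.trapz(f, x) / l
bn = [np.trapz(f * np.sin(n * np.pi * x / l), x) / l for n in range(1, N + 1)]

# Step 2: sin(w x) = sum_k (-1)^k (w x)^(2k+1) / (2k+1)!, collected into one polynomial
coef = np.zeros(2 * K + 2)                # coef[i] multiplies x**i
coef[0] = a0 / 2
for n, b in enumerate(bn, start=1):
    w = n * np.pi / l
    for k in range(K + 1):
        coef[2 * k + 1] += b * (-1) ** k * w ** (2 * k + 1) / math.factorial(2 * k + 1)

approx = sum(c * x ** i for i, c in enumerate(coef))
print(np.max(np.abs(approx - f)))         # ripple remains (Gibbs effect near +-l)
```

Larger \(N\) and \(K\) shrink the interior error, at the cost of float64 cancellation for the higher harmonics.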
### The Role of Activation Functions
Activation functions play a crucial role in the success of neural networks and are an essential component of neural network design. Without activation functions, neural networks are just linear transformations of input data, which would limit their ability to model complex phenomena.
The traditional view of the role of an activation function in a neural network is that it determines whether a neuron should be activated or not. The main purpose of activation functions is to introduce nonlinearity into the output of neurons in neural networks. This nonlinearity is crucial for networks to model complex phenomena and learn nonlinear relationships between inputs and outputs.
We present a new point of view to enhance the traditional one: the activation function merely appears to "activate" a function or "fire a cell", but it actually acts as a magnifying function that elevates the original linear space onto a high-dimensional space with infinite dimensions, thereby introducing non-linearity. It behaves like a nozzle squeezing a stream of water into a wide open space. The type of nozzle and the direction it points decide which open space the water sprays into. In a similar way, the activation function projects the input data, concatenated together in a linear combination of several super planes, into an infinite-dimensional space, each dimension of which is a minimum information unit. The weight of each dimension on the input SP of the activation function affects that of each dimension on the output SP. The role of an activation function whose input range is the whole real line is shown in Figure 1.
### The Sampling Procedure
To observe the behavior of a multivalue function, we need a sampling procedure to produce a data stream or dataset. This involves precision loss and would turn functions that cannot have a Fourier series into functions that do. The sampling procedure thus fails to capture the true function and instead returns a close approximation that has a Fourier series and therefore a polynomial approximation.
### A Plausible Theory
Consider a sample dataset \(X\in\mathbb{R}^{n\times(1+d)}\) consisting of \(n\) observations of \(d\) features \(f_{0}(=1)\), \(f_{1}\), \(\cdots\), \(f_{d}\):
\[X=\begin{bmatrix}x_{10}&x_{11}&\cdots&x_{1d}\\ x_{20}&x_{21}&\cdots&x_{2d}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n0}&x_{n1}&\cdots&x_{nd}\end{bmatrix},Y=\begin{bmatrix}y_{1}\\ y_{2}\\ \vdots\\ y_{n}\end{bmatrix}.\]
Let the activation function \(\Phi(x)\) be any modern one that can be approximated by, or in other words is equivalent to, a polynomial of infinite degree: \(\Phi(x)=\sum_{i=0}^{\infty}c_{i}\cdot x^{i}\) for some real numbers \(c_{i}\).
Take the simple case as in Figure 1 where there is only one neuron node. The input to the node, including from the "1" node (\(f_{0}\)), will then be of the form \(a_{0}+a_{01}f_{1}+\cdots+a_{0d}f_{d}\) and the output to this node would be \(\sum_{i=0}^{\infty}c_{i}\cdot(a_{0}+a_{01}f_{1}+\cdots+a_{0d}f_{d})^{i}\), which is a polynomial of infinite degree.
We here develop a new theory that NNs model an infinite-dimensional space with each dimension as a monomial
\[\prod_{i_{1},i_{2},\cdots,i_{d}}f_{1}^{i_{1}}f_{2}^{i_{2}}\cdots f_{d}^{i_{d}}\]
for some non-negative integers \(i_{1},i_{2},\cdots,i_{d}\in\mathbb{Z}_{0}^{+}=\{0,1,2,3,\ldots\}\). The output of each node that passes through an activation function in any of its previous layers is an infinite-dimensional plane, namely a polynomial of infinite degree. We term such an infinite-dimensional space a _Super Space (SS)_ and such an infinite-dimensional plane a _Super Plane (SP)_.
Since the real-world dataset has limited precision and a certain size, this SS can be approximated by a high-dimensional hyperspace with a limited number of dimensions.
Based on this theory, neural network inference is to compute the output of one or multiple such SPs, just like evaluating \(y=a\cdot x+b\) at some constant value \(x=x_{0}\) in the two-dimensional coordinate system. To be specific, NNs compute the coordinates in the SS and calculate the output of the corresponding SP. In this sense, NNs can be seen as an extension of linear regression, just an advanced one. It seems that the training process aims to search for important dimensions (minimum
Figure 1: The function of an activation function is just like that of a nozzle
information units) in the whole SS and to decide which SP best represents the multivalue function to be fit in this mission.
According to this theory, NNs are not like any machine learning algorithms that already exist.
## 3 Applications
We could analyze this infinite-dimensional space by studying ordinary coordinate systems like the two-dimensional coordinate system and the three-dimensional coordinate system.
### One-Dot Neural Networks
From the perspective of this infinite-dimensional space, we could design a new NN architecture to address the difficulty of handling categorical data in NNs. The current popular method is mainly one-hot encoding, which introduces more parameters into the NN. We here present a novel NN architecture that might crack this issue, which we term the one-dot NN. We leave experiments verifying this new architecture as open future work.
### Activation Functions Make Count
The activation function actually plays an important role in NNs, with the degree of importance depending on the specific task. A well-known example is the invention of the activation function ReLU, which is commonly the first go-to choice of most researchers. Also, in the above example with a one-node NN, various activation functions behave significantly differently. For a classification problem involving a periodic pattern, like judging whether the input number is odd or even, a non-periodic activation like ReLU or the sigmoid function would perform badly. A one-hidden-layer NN with non-periodic activation functions might only be able to approximate the odd-or-even problem over a certain range. On the other hand, an NN containing one periodic function like a sine or cosine can perfectly represent this problem over the whole integer domain. Complex NN architectures and numerous neuron nodes reduce the influence of any single activation function, but that doesn't mean activation functions don't matter.
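A tiny illustration of this point (our own toy, not from the paper): a single sine "neuron" separates odd from even integers over the whole integer domain, exactly the behavior the paper argues non-periodic activations can only approximate on a bounded range.

```python
import numpy as np

def parity(n):
    # one node: Phi(w * n) with Phi = sin and w = pi/2, followed by a squared readout
    return np.sin(np.pi * np.asarray(n) / 2.0) ** 2   # 1 if n is odd, 0 if even

n = np.arange(-6, 7)
print(dict(zip(n.tolist(), parity(n).round().astype(int).tolist())))
```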
### Both Classification and Regression
Unlike most machine learning algorithms, NNs can be applied to both classification and regression problems, with competitive performance in both cases. For simplicity, we assume that the regression problem involves making a single prediction. In this case, we can see the output of the SP as the only prediction. Newly arriving examples are either mapped onto or close to this SP, resulting in a good prediction, or far away from it, in which case we wouldn't expect good results.
For classification tasks, if the NN has multiple outputs in the last layer, then there are multiple class labels to consider. We can treat each output as a separate SP that measures the negative distance, or difference, between the position of the new example in the SS and the corresponding SP. The decision of which class to assign the new example, made from the NN's outputs via the softmax function, amounts to selecting the SP nearest to the new example's coordinates in the SS.
### Neural Network Model Compression
Suppose there is an already well-trained NN, or even a CNN, with a large number of parameters. We can calculate its SPs by replacing the activation functions with a close polynomial approximation, and then obtain the polynomial approximation \(s\) for each SP. We then select a simple NN architecture with only one or two hidden layers and a smaller number of neuron nodes, and calculate its polynomial approximation \(t\) for each output SP. Make sure the two polynomial approximations \(s\) and \(t\) have the same degree for convenience in the calculation. The method of undetermined coefficients can then be used to determine each coefficient of the polynomial approximation \(t\). Once we get the polynomial \(t\), we get a simpler NN with fewer parameters that shares the same calculation circuit. In this way, we can compress a well-trained CNN into any NN architecture we desire.
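A hedged sketch of the first step of this recipe follows, with toy weights of our own choosing and the exact square activation standing in for a polynomial approximation of a real activation:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# hypothetical "well-trained" weights: two hidden nodes, Phi(x) = x**2
A = [(0.5, 1.0, -1.0), (0.2, 0.3, 0.7)]       # rows: (a_i0, a_i1, a_i2)
B = [0.1, -2.0, 1.5]                           # output bias, then one weight per node

hidden = [(a0 + a1 * x1 + a2 * x2) ** 2 for a0, a1, a2 in A]
sp_out = sp.expand(B[0] + sum(b * h for b, h in zip(B[1:], hidden)))
print(sp.Poly(sp_out, x1, x2).as_dict())       # the SP's coefficient per monomial
```

Matching these monomial coefficients against those of a smaller candidate network's SP by the method of undetermined coefficients then yields the nonlinear system described above.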
Note that in the inference phase, the drop-out technique usually isn't applied. Other techniques such as batch normalization and max pooling layers are still manipulating polynomials to generate a new polynomial. For example, max pooling in a CNN still outputs an SP (polynomial approximation): we should see the inputs of a max function in NNs as the outputs of several (such as two or four) polynomial functions and regard the output of the max function as another polynomial (SP). A toy example is that the max of the two functions \(y=0\times x+0\) and \(y=1\times x+0\) is still a function, \(y=\max(0,x)\) (ReLU), that can be approximated by a polynomial.
### An Alternative to NNs
Suppose we have a dataset \(X\) with its observations \(Y\). Depending on whether the mission is regression or classification, we can first select any neural network, including deep-learning ones, and then obtain the SPs (polynomials) by replacing all the activation functions with polynomial approximations. We can actually obtain several SPs based on the given information (\(X\) and \(Y\)). Then, by using again the method of undetermined coefficients from mathematics, we can obtain an NN that perfectly completes the mission, whether it is a regression one or a classification one.
Thus, **the computational complexity of the training algorithm for NNs can, at least, be reduced to that of solving a set of multivariate higher-degree equations**. The fsolve function in MATLAB can help solve such problems by using a combination of numerical methods, including the Newton-Raphson method.
## 4 Experiments
In this section, we show how to provide an alternative to NNs for both classification and regression by using the method of undetermined coefficients in mathematics. For simplicity, we use a NN with only one hidden layer of four nodes and select the square function \(\Phi(x)=x^{2}\) as the activation function.
### Experiment 1
Suppose we have two classes with labels \(c_{0}=0\) and \(c_{1}=1\), generated from \(x_{1}-x_{2}(=0)\) and \(x_{1}+x_{2}(=1)\), respectively, as shown in Figure 2.
We can first build two polynomials (SPs) for these two classes: \(\bar{c}_{0}=-[(x_{1}-x_{2})-0]^{2}\) and \(\bar{c}_{1}=-[(x_{1}+x_{2})-1]^{2}\), respectively:
\[c_{0}(=0)\mapsto\bar{c}_{0} =-(x_{1}-x_{2})^{2}=-x_{1}^{2}-x_{2}^{2}+2x_{1}x_{2},\] \[c_{1}(=1)\mapsto\bar{c}_{1} =-[(x_{1}+x_{2})-1]^{2}=-x_{1}^{2}-x_{2}^{2}-1-2x_{1}x_{2}+2x_{1}+ 2x_{2}.\]
Figure 2: An Alternative to NNs for classification
\[\begin{bmatrix}1\\ x_{1}\\ x_{2}\end{bmatrix}^{\intercal}\times\begin{bmatrix}a_{00}&a_{01}&a_{02}\\ a_{10}&a_{11}&a_{12}\\ a_{20}&a_{21}&a_{22}\\ a_{30}&a_{31}&a_{32}\end{bmatrix}^{\intercal}=\begin{bmatrix}a_{00}+a_{01}x_{1}+a_{02}x_{2}\\ a_{10}+a_{11}x_{1}+a_{12}x_{2}\\ a_{20}+a_{21}x_{1}+a_{22}x_{2}\\ a_{30}+a_{31}x_{1}+a_{32}x_{2}\end{bmatrix}^{\intercal}\xrightarrow{\Phi(x)=x^{2}}\begin{bmatrix}1\\ (a_{00}+a_{01}x_{1}+a_{02}x_{2})^{2}\\ (a_{10}+a_{11}x_{1}+a_{12}x_{2})^{2}\\ (a_{20}+a_{21}x_{1}+a_{22}x_{2})^{2}\\ (a_{30}+a_{31}x_{1}+a_{32}x_{2})^{2}\end{bmatrix}^{\intercal}\times\begin{bmatrix}b_{c0}&b_{c1}\\ b_{00}&b_{10}\\ b_{01}&b_{11}\\ b_{02}&b_{12}\\ b_{03}&b_{13}\end{bmatrix}=[\bar{y}_{0}\quad\bar{y}_{1}]\]
We deliberately select a simple NN with only four hidden-layer nodes so as to easily calculate its SPs \(\bar{y}_{0}\) and \(\bar{y}_{1}\):
\[\bar{y}_{0}=b_{c0} +b_{00}(a_{00}+a_{01}x_{1}+a_{02}x_{2})^{2}+b_{01}(a_{10}+a_{11}x_ {1}+a_{12}x_{2})^{2}\] \[+b_{02}(a_{20}+a_{21}x_{1}+a_{22}x_{2})^{2}+b_{03}(a_{30}+a_{31}x_ {1}+a_{32}x_{2})^{2},\] \[\bar{y}_{1}=b_{c1} +b_{10}(a_{00}+a_{01}x_{1}+a_{02}x_{2})^{2}+b_{11}(a_{10}+a_{11}x_ {1}+a_{12}x_{2})^{2}\] \[+b_{12}(a_{20}+a_{21}x_{1}+a_{22}x_{2})^{2}+b_{13}(a_{30}+a_{31}x_ {1}+a_{32}x_{2})^{2}.\]
Let \(\bar{y}_{0}=\bar{c}_{0}\) and \(\bar{y}_{1}=\bar{c}_{1}\). After applying the method of undetermined coefficients from mathematics to these two polynomial equations, we have the following 12 non-linear equations:
\[\begin{cases}&b_{c0}+b_{00}a_{00}a_{00}+b_{01}a_{10}a_{10}+b_{02}a_{20}a_{20}+b_{03}a_{30}a_{30}=0,\\ &b_{00}a_{01}a_{01}+b_{01}a_{11}a_{11}+b_{02}a_{21}a_{21}+b_{03}a_{31}a_{31}=-1,\\ &b_{00}a_{02}a_{02}+b_{01}a_{12}a_{12}+b_{02}a_{22}a_{22}+b_{03}a_{32}a_{32}=-1,\\ &b_{00}a_{00}a_{01}+b_{01}a_{10}a_{11}+b_{02}a_{20}a_{21}+b_{03}a_{30}a_{31}=0,\\ &b_{00}a_{00}a_{02}+b_{01}a_{10}a_{12}+b_{02}a_{20}a_{22}+b_{03}a_{30}a_{32}=0,\\ &b_{00}a_{01}a_{02}+b_{01}a_{11}a_{12}+b_{02}a_{21}a_{22}+b_{03}a_{31}a_{32}=1,\\ &b_{c1}+b_{10}a_{00}a_{00}+b_{11}a_{10}a_{10}+b_{12}a_{20}a_{20}+b_{13}a_{30}a_{30}=-1,\\ &b_{10}a_{01}a_{01}+b_{11}a_{11}a_{11}+b_{12}a_{21}a_{21}+b_{13}a_{31}a_{31}=-1,\\ &b_{10}a_{02}a_{02}+b_{11}a_{12}a_{12}+b_{12}a_{22}a_{22}+b_{13}a_{32}a_{32}=-1,\\ &b_{10}a_{00}a_{01}+b_{11}a_{10}a_{11}+b_{12}a_{20}a_{21}+b_{13}a_{30}a_{31}=1,\\ &b_{10}a_{00}a_{02}+b_{11}a_{10}a_{12}+b_{12}a_{20}a_{22}+b_{13}a_{30}a_{32}=1,\\ &b_{10}a_{01}a_{02}+b_{11}a_{11}a_{12}+b_{12}a_{21}a_{22}+b_{13}a_{31}a_{32}=-1.\end{cases}\]
Finally, we use the fsolve function in Octave to solve this set of nonlinear equations, selecting the all-ones row vector as the initial guess for the variable values, and obtain the desired result:
\[\begin{bmatrix}a_{00}&a_{01}&a_{02}\\ a_{10}&a_{11}&a_{12}\\ a_{20}&a_{21}&a_{22}\\ a_{30}&a_{31}&a_{32}\end{bmatrix}=\begin{bmatrix}-0.9642548&0.9650999&0.96351 86\\ 0.0311467&-0.1118354&0.0675761\\ 0.0024421&-1.0605552&1.0563675\\ 0.0170079&0.0285231&-0.0109594\end{bmatrix},\] \[\begin{bmatrix}b_{c0}&b_{c1}\\ b_{00}&b_{10}\\ b_{01}&b_{11}\\ b_{02}&b_{12}\\ b_{03}&b_{13}\end{bmatrix}=\begin{bmatrix}-0.00062347&-0.00041223\\ -0.00055342&-1.07563264\\ 0.93191963&0.36561330\\ -0.89956744&-0.00282642\\ 0.82785662&0.57792885\end{bmatrix}.\]
Further testing of this result, applying the ordinary method (log-likelihood loss with the softmax function) to this classification task, shows that the simple NN with these weights predicts the right class.
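As a cross-check, the same system can be solved in Python; the sketch below is our own port (not the paper's code), using SciPy's least-squares solver because the system has more unknowns (22) than equations (12):

```python
import numpy as np
from scipy.optimize import least_squares

# right-hand sides of the twelve equations above, one row per class
T = np.array([[0.0, -1.0, -1.0, 0.0, 0.0, 1.0],     # c0_bar
              [-1.0, -1.0, -1.0, 1.0, 1.0, -1.0]])  # c1_bar

def residuals(v):
    A = v[:12].reshape(4, 3)        # hidden weights a_k0, a_k1, a_k2
    bc = v[12:14]                   # b_c0, b_c1
    B = v[14:22].reshape(2, 4)      # b_00..b_03 and b_10..b_13
    a0, a1, a2 = A[:, 0], A[:, 1], A[:, 2]
    out = []
    for c in range(2):
        b = B[c]
        out += [bc[c] + b @ (a0 * a0) - T[c, 0],
                b @ (a1 * a1) - T[c, 1],
                b @ (a2 * a2) - T[c, 2],
                b @ (a0 * a1) - T[c, 3],
                b @ (a0 * a2) - T[c, 4],
                b @ (a1 * a2) - T[c, 5]]
    return np.array(out)

sol = least_squares(residuals, np.ones(22))          # all-ones initial guess
print(np.abs(residuals(sol.x)).max())                # ~0 when a root is found
```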
In this way, it is possible to find an NN that perfectly classifies the MNIST dataset. We can select the SP
\[P1=-\prod_{i\;\text{\emph{where}}\;y_{i}=\bar{y}}[\sum_{j=0}^{d}(f_{j}-x_{ij})^{ 2}]\]
for each image class \(\bar{y}\) and then follow the above method. In this case, the squared activation function would have to be replaced with a polynomial of high enough degree that approximates some popular activation function well over a large range, say \([-1e8,+1e8]\), in order to elevate the input linear space of the NN to the SP \(P1\).
### Experiment 2
Suppose we have a regression task: predicting the output of the function \(r=2x_{1}+2x_{1}x_{2}+x_{2}^{2}\), as shown in Figure 3.
We can also build an SP ( a polynomial) for this regression:
\[\bar{r}\;(=2x_{1}+2x_{1}x_{2}+x_{2}^{2})\overset{SP}{\longmapsto}2x_{1}+2x_{1}x_{2}+x_{2}^{2}\]
As in Experiment 1, we have the following set of six nonlinear equations:
\[\left\{\begin{array}{l}b_{c0}+b_{00}a_{00}a_{00}+b_{01}a_{10}a_{10}+b_{02}a_{ 20}a_{20}+b_{03}a_{30}a_{30}=0,\\ b_{00}a_{01}a_{01}+b_{01}a_{11}a_{11}+b_{02}a_{21}a_{21}+b_{03}a_{31}a_{31}=0, \\ b_{00}a_{02}a_{02}+b_{01}a_{12}a_{12}+b_{02}a_{22}a_{22}+b_{03}a_{32}a_{32}=1, \\ b_{00}a_{00}a_{01}+b_{01}a_{10}a_{11}+b_{02}a_{20}a_{21}+b_{03}a_{30}a_{31}=1, \\ b_{00}a_{00}a_{02}+b_{01}a_{10}a_{12}+b_{02}a_{20}a_{22}+b_{03}a_{30}a_{32}=0, \\ b_{00}a_{01}a_{02}+b_{01}a_{11}a_{12}+b_{02}a_{21}a_{22}+b_{03}a_{31}a_{32}=1. \end{array}\right.\]
and obtain the following result:
\[\begin{bmatrix}a_{00}&a_{01}&a_{02}\\ a_{10}&a_{11}&a_{12}\\ a_{20}&a_{21}&a_{22}\\ a_{30}&a_{31}&a_{32}\end{bmatrix}=\begin{bmatrix}-1.54767&0.99491&3.02228\\ 0.40528&2.19680&0.52124\\ 0.14781&2.20493&0.14896\\ 0.95912&1.64751&0.50298\end{bmatrix},\quad\begin{bmatrix}b_{c0}\\ b_{00}\\ b_{01}\\ b_{02}\\ b_{03}\end{bmatrix}=\begin{bmatrix}-0.830416\\ 0.081054\\ 0.430706\\ -0.797845\\ 0.633714\end{bmatrix}.\]
The Octave code to verify the validity of this result is as follows:
Figure 3: An Alternative to the NN for regression
```
>> # [1 (X'*A').*(X'*A')]*(B')
>> A = [-1.547668, 0.994909, 3.022282; 0.405276, 2.196802, ...
0.521244; 0.147810, 2.204926, 0.148959; 0.959119, ...
1.647513, 0.502981];
>> B = [-0.830416, 0.081054, 0.430706, -0.797845, 0.633714];
>> X = [1; 1; 1];
>> XA = X'*A'
XA =

   2.4695   3.1233   2.5017   3.1096

>> XA = XA.*XA
XA =

   6.0985   9.7551   6.2585   9.6697

>> XA = [1 XA]
XA =

   1.0000   6.0985   9.7551   6.2585   9.6697

>> XA*(B')
ans = 5.0000
>>
>> X = [1; 1; 1];
>> [1 (X'*A').*(X'*A')]*(B')
ans = 5.0000
>> X = [1; 2; 1];
>> 2*2 + 2*2*1 + 1*1
ans = 9
>> [1 (X'*A').*(X'*A')]*(B')
ans = 9.0000
>>
```

### Experiment 3

Suppose we have two classes with labels \(c_{0}=0\) and \(c_{1}=1\), corresponding to the image classes \(y=3\) and \(y=8\), where class 3 consists of the two examples \((0.1,0.6)\) and \((0.2,0.7)\) and class 8 consists of the two examples \((0.3,0.8)\) and \((0.4,0.9)\). As in the MNIST discussion above, we build one SP per class from its training examples:
\[\bar{c}_{0}(=0)\mapsto SP_{0} =-\prod_{i\:\text{where}\:y_{i}=3}[\sum_{j=0}^{d}(f_{j}-x_{ij})^{2}]\] \[=-[(x_{1}-0.1)^{2}+(x_{2}-0.6)^{2}]\cdot[(x_{1}-0.2)^{2}+(x_{2}-0.7 )^{2}]\] \[=-x_{1}^{4}+0.6\cdot x_{1}^{3}-2\cdot x_{1}^{2}\cdot x_{2}^{2}+2.6 \cdot x_{1}^{2}\cdot x_{2}-0.98\cdot x_{1}^{2}+0.6\cdot x_{1}\cdot x_{2}^{2}\] \[\quad-0.76\cdot x_{1}\cdot x_{2}+0.254\cdot x_{1}-x_{2}^{4}+2.6 \cdot x_{2}^{3}-2.58\cdot x_{2}^{2}+1.154\cdot x_{2}-0.1961,\]
\[\bar{c}_{1}(=1)\mapsto SP_{1} =-\prod_{i\:\text{where}\:y_{i}=8}[\sum_{j=0}^{d}(f_{j}-x_{ij})^{2}]\] \[=-[(x_{1}-0.3)^{2}+(x_{2}-0.8)^{2}]\cdot[(x_{1}-0.4)^{2}+(x_{2}-0. 9)^{2}]\] \[=-x_{1}^{4}+1.4\cdot x_{1}^{3}-2\cdot x_{1}^{2}\cdot x_{2}^{2}+3.4 \cdot x_{1}^{2}\cdot x_{2}-2.18\cdot x_{1}^{2}+1.4\cdot x_{1}\cdot x_{2}^{2}\] \[\quad-2.36\cdot x_{1}\cdot x_{2}+1.166\cdot x_{1}-x_{2}^{4}+3.4 \cdot x_{2}^{3}-4.58\cdot x_{2}^{2}+2.866\cdot x_{2}-0.7081.\]
Since \(SP_{0}\) and \(SP_{1}\) are both polynomials of degree 4, we have to replace the degree-2 polynomial activation function with a polynomial of degree 4. We also use an NN with only one hidden layer of 8 nodes:
\[\begin{bmatrix}1\\ x_{1}\\ x_{2}\end{bmatrix}^{\intercal}\times\begin{bmatrix}a_{00}&a_{01}&a_{02}\\ a_{10}&a_{11}&a_{12}\\ a_{20}&a_{21}&a_{22}\\ a_{30}&a_{31}&a_{32}\\ a_{40}&a_{41}&a_{42}\\ a_{50}&a_{51}&a_{52}\\ a_{60}&a_{61}&a_{62}\\ a_{70}&a_{71}&a_{72}\end{bmatrix}^{\intercal}=\begin{bmatrix}a_{00}+a_{01}x_{1}+a_{02}x_{2}\\ a_{10}+a_{11}x_{1}+a_{12}x_{2}\\ \vdots\\ a_{70}+a_{71}x_{1}+a_{72}x_{2}\end{bmatrix}^{\intercal}\xrightarrow{\Phi}\begin{bmatrix}1\\ \Phi(a_{00}+a_{01}x_{1}+a_{02}x_{2})\\ \Phi(a_{10}+a_{11}x_{1}+a_{12}x_{2})\\ \vdots\\ \Phi(a_{70}+a_{71}x_{1}+a_{72}x_{2})\end{bmatrix}^{\intercal}\times\begin{bmatrix}b_{c0}&b_{c1}\\ b_{00}&b_{10}\\ b_{01}&b_{11}\\ b_{02}&b_{12}\\ b_{03}&b_{13}\\ b_{04}&b_{14}\\ b_{05}&b_{15}\\ b_{06}&b_{16}\\ b_{07}&b_{17}\end{bmatrix}=[\bar{y}_{0}\quad\bar{y}_{1}]\]
The resulting system of equations can be solved and verified in Octave following the same procedure as in Experiments 1 and 2.
### Experiment 4
Given a dataset \(X\in\mathbb{R}^{n\times(1+d)}\) with its targets \(Y\in\mathbb{R}^{n\times 1}\) for a toy regression example, we can posit a polynomial approximation of the single SP for this dataset:
\[c_{0}\cdot\left(a_{0}+a_{01}x_{1}+\cdots+a_{0d}x_{d}\right)^{0}+\cdots+c_{N} \cdot\left(a_{0}+a_{01}x_{1}+\cdots+a_{0d}x_{d}\right)^{N},\]
which could be used to generate \(n\) nonlinear equations:
\[\left\{\begin{array}{l}c_{0}\cdot\left(a_{0}+a_{01}x_{11}+\cdots+a_{0d}x_{1d }\right)^{0}+\cdots+c_{N}\cdot\left(a_{0}+a_{01}x_{11}+\cdots+a_{0d}x_{1d} \right)^{N}=y_{1},\\ \vdots\\ c_{0}\cdot\left(a_{0}+a_{01}x_{n1}+\cdots+a_{0d}x_{nd}\right)^{0}+\cdots+c_{N} \cdot\left(a_{0}+a_{01}x_{n1}+\cdots+a_{0d}x_{nd}\right)^{N}=y_{n}.\end{array}\right.\]
Thus, we can also obtain the weights of a specialized NN for a regression mission by solving this set of \(n\) nonlinear equations.
## 5 Conclusion
In this paper, we presented a new perspective on NNs, viewing them as essentially a form of SS. We have shown that modern activation functions, equivalent to some infinite-degree polynomials, act as magnifying functions mapping the raw linear space to an infinite-dimensional space that we term Super Space (SS). The present theory can explain why NNs work for both regression and classification tasks: they apply various Super Planes.
Most importantly, we have shown that the complexity of training NNs including deep-learning ones can be reduced to that of solving a system of non-linear equations via the method of undetermined coefficients from mathematics to obtain the weights of a pre-selected NN architecture. Given a dataset, whether it is for regression or classification, we can determine the weights of any specialized NN even without applying the ordinary solving method used in the training phase.
There is still a lot of work to be done. We hope to include more experiments to prove the theory proposed in this paper. However, the programming task is too heavy for the current author. In addition, Chiang is currently looking for a PhD position to restart his research and studies, and he believes that his best chance is to find a position in privacy-preserving machine learning, rather than in explaining neural networks. Therefore, this work, now and in the future, is of no help to him. Even so, future work on this paper may still be done. |
2307.09994 | Impact of Disentanglement on Pruning Neural Networks | Deploying deep learning neural networks on edge devices, to accomplish task
specific objectives in the real-world, requires a reduction in their memory
footprint, power consumption, and latency. This can be realized via efficient
model compression. Disentangled latent representations produced by variational
autoencoder (VAE) networks are a promising approach for achieving model
compression because they mainly retain task-specific information, discarding
useless information for the task at hand. We make use of the Beta-VAE framework
combined with a standard criterion for pruning to investigate the impact of
forcing the network to learn disentangled representations on the pruning
process for the task of classification. In particular, we perform experiments
on MNIST and CIFAR10 datasets, examine disentanglement challenges, and propose
a path forward for future works. | Carl Shneider, Peyman Rostami, Anis Kacem, Nilotpal Sinha, Abd El Rahman Shabayek, Djamila Aouada | 2023-07-19T13:58:01Z | http://arxiv.org/abs/2307.09994v1 | # Impact of Disentanglement on Pruning Neural Networks
###### Abstract
Deploying deep learning neural networks on edge devices, to accomplish task specific objectives in the real-world, requires a reduction in their memory footprint, power consumption, and latency. This can be realized via efficient model compression. Disentangled latent representations produced by variational autoencoder (VAE) networks are a promising approach for achieving model compression because they mainly retain task-specific information, discarding useless information for the task at hand. We make use of the Beta-VAE framework combined with a standard criterion for pruning to investigate the impact of forcing the network to learn disentangled representations on the pruning process for the task of classification. In particular, we perform experiments on MNIST and CIFAR10 datasets, examine disentanglement challenges, and propose a path forward for future works.
## I Introduction
Advances in deep learning have accelerated the state-of-the-art in computational sensing across research domains, finding wide application in tasks encompassing object detection [1, 2], pose estimation [3, 4, 5], classification [6], reconstruction [7], segmentation [8, 9], time series prediction [10, 11], and 3D modelling [12, 13]. Surpassing a top performing deep neural network (DNN) on image-level and pixel-level tasks on a benchmark dataset often comes at the cost of an increased number of model parameters, floating point operations (FLOPS), volume of training data, and graphical processing unit (GPU) resources. This trend poses a challenge for the deployment of these pre-trained DL models for real-time inference on resource constrained edge devices, with low computing power and memory, where a reduction in the model's memory footprint (i.e., RAM and storage requirements), power consumption, and latency, is essential. With the objective of reducing the model size of an existing pre-trained base model, while maintaining a comparable level of metric performance (e.g., accuracy, precision, recall, etc.) to that of the original model, different techniques of neural network compression have been proposed. These include quantization [14], knowledge distillation [15], low-rank matrix factorization [16], and pruning [17].
Quantization compresses the original network by reducing the number of bits required to represent each weight [18]. This reduces the range of data values while retaining most of the information for comparable model accuracy but under certain conditions can lead to a drop in performance. In knowledge distillation, the knowledge from a pre-trained, larger and more complex model, known as the teacher model, is transferred to a smaller network, known as the student network. Limitations of this approach arise from the side of the student network whose architecture needs to be designed and usually remains fixed. Low-rank factorization can be applied to both convolutional and fully connected layers. Applying tensor decomposition to the convolutional layers makes the inference process faster while the factorization of the matrices in the dense layer makes the model's memory footprint lighter. A limitation of this approach is that the proper factorization and rank selection process is challenging to implement in practice and is computationally more intensive. Pruning is used for network sparsification by removing redundant and non-essential parameters, contributing to reduced model size (i.e., smaller memory footprint) and improved latency (i.e., lower bandwidth). Pruning can produce either unstructured or structured network sparsification patterns. Unstructured pruning involves setting weight values to zero, as in convolutional parameters in a convolutional neural network (CNN) or connections between neurons in a fully convolutional neural network (FCNN). A drawback of unstructured pruning is that most machine learning frameworks and hardware cannot accelerate sparse matrices' computation. Structured pruning includes kernel/channel pruning, filter pruning (i.e., neurons in FCNNs), and layer pruning. Although a more structured sparsification pattern leads to an accelerated neural network, the accuracy drops [18]. Pruning has proven successful at maintaining the level of performance of the original pre-trained network while discarding superfluous information contained in the specified network structures described above. Nevertheless, given a specific task, the challenge of selecting which parts of the network to prune remains open, with several criteria proposed to rank the relative importance of the respective components. These criteria include pruning the weight magnitude or the gradient magnitude, and whether to prune globally or locally.
Neural network compression can be assisted by disentangling latent representations because a representation which is disentangled for a particular dataset is one which is sparse over the transformations present in that data [19]. Disentangled representations are representations that capture the underlying factors of variation that generated the data, especially those explanatory factors that are relevant to the task. Disentangling as many factors as possible while discarding as little information about the data as is practical for the task at hand is a robust approach to feature learning [20]. In this work, instead of proposing a new pruning criterion, we use a standard low-magnitude criterion for local, unstructured pruning and investigate the impact of enforcing the network to learn disentangled representations on the pruning process. The intuition here is that if the network is learning disentangled representations, then this would be reflected in the network's weights, which would make pruning more effective. Specifically, we make use of the Beta-VAE [21] framework combined with the aforementioned standard criterion for pruning to investigate the impact of forcing the network to learn disentangled representations on the pruning process for the task of classification. Our contributions are two-fold:
* we provide preliminary analysis on the impact of disentanglement on pruning neural networks for the task of classification,
* we perform experiments on MNIST and CIFAR10 datasets for this objective.
## II Proposed Method
Consider a dataset \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{N}\), where \(x\) is the 2D input data and \(y\) is the corresponding label. We use the Beta-VAE framework to enforce a disentangled latent representation [21], given by
\[\begin{split}\mathcal{L}_{\text{Beta-VAE}}(\theta,\phi;x,z,\beta)=& -\beta\ \mathcal{D}_{KL}\left(q_{\phi}\left(z|x\right)\parallel p\left(z\right)\right) \\ &+\mathbb{E}_{q_{\phi}\left(z|x\right)}[\log p_{\theta}\left(x|z \right)]\end{split} \tag{1}\]
where \(p(z)\) is a prior distribution of the latent variables \(z\), \(p_{\theta}(x|z)\) is a conditional probability of \(x\) that is parametrized by a neural
network \(\theta\), and \(q_{\phi}(z|x)\) approximates the posterior with neural network parameters \(\phi\).
\[\mathcal{L}_{\text{CE}}(y)=-\sum_{i=1}^{N}y_{i}\log y_{i}^{\prime} \tag{2}\]
\[\mathcal{L}_{\text{Beta-VAE-Classif}}(\theta,\phi;x,y,z,\beta)=\mathcal{L}_{ \text{Beta-VAE}}+\mathcal{L}_{\text{CE}} \tag{3}\]
The Beta-VAE objective in Eq. 1 consists of a Kullback-Leibler (KL) divergence term (encoder part) and a pixel-wise reconstruction loss (decoder part). The penalty \(\beta\) imposes a limit on the capacity of the latent information and the degree to which statistically independent latent factors are learned. Our training strategy first involves appending a classifier head to the Beta-VAE during training, as shown in Fig. 1, and augmenting the loss function in Eq. 1 with the sparse categorical cross-entropy term given by Eq. 2, yielding Eq. 3. In Eq. 2, \(y^{\prime}\) is the predicted label. During inference, the reconstruction head is removed, as well as its corresponding loss term, leaving the pre-trained encoder and classification head.
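A hedged TensorFlow sketch of the combined objective in Eq. 3 is given below; the tensor shapes, the Gaussian form of the reconstruction term, and the function name are our assumptions rather than the paper's released code:

```python
import tensorflow as tf

def beta_vae_classif_loss(x, y, x_recon, logits, z_mean, z_logvar, beta):
    # KL( q_phi(z|x) || N(0, I) ), closed form for a diagonal Gaussian posterior
    kl = -0.5 * tf.reduce_sum(
        1.0 + z_logvar - tf.square(z_mean) - tf.exp(z_logvar), axis=-1)
    # pixel-wise reconstruction term (Gaussian log-likelihood up to constants)
    recon = tf.reduce_sum(tf.square(x - x_recon), axis=[1, 2, 3])
    # sparse categorical cross-entropy for the classifier head (Eq. 2)
    ce = tf.keras.losses.sparse_categorical_crossentropy(y, logits,
                                                         from_logits=True)
    # Eq. 1 is a maximization objective; negate it so everything is minimized
    return tf.reduce_mean(beta * kl + recon + ce)
```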
## III Experiments and Discussions
Our experiments are implemented in the TensorFlow framework using the MNIST and CIFAR10 datasets. The MNIST dataset consists of 70k 28x28 grayscale images of the 10 digits, of which 60k are used for the training set and 10k for the test set. The CIFAR10 dataset consists of 60k 32x32 color images, labeled across 10 categories, of which 50k are used for training and 10k for testing. The Adam optimizer is used with a learning rate of \(0.001\), and data are shuffled at each epoch during training. The \(\beta\) parameter takes the values \(\beta\in\{1,3,5,10\}\), and experiments are performed with and without pruning for each of these \(\beta\) values. Initially, all models are trained for 30 epochs, with 2 further epochs once the reconstruction head is removed. At this stage, if pruning is applied, an additional 2 epochs are used for this purpose, followed by evaluation of the classification accuracy. We use the 'prune_low_magnitude' function in Keras with a 50% constant sparsity, which is a standard low-magnitude criterion for local, unstructured pruning. The neural network architectures used for the MNIST and CIFAR10 datasets are parameterized by 69k and 2.9M parameters, respectively.
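The pruning step can be sketched with the TensorFlow Model Optimization toolkit as below; the surrounding training objects (`model`, `x_train`, `y_train`) are assumptions, and only the 50% constant-sparsity setting mirrors the text:

```python
import tensorflow_model_optimization as tfmot

# wrap the trained encoder + classifier with low-magnitude pruning at 50% sparsity
schedule = tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0)
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)

pruned.compile(optimizer='adam',
               loss='sparse_categorical_crossentropy',
               metrics=['accuracy'])
# the pruning wrapper needs this callback to update its masks each step
pruned.fit(x_train, y_train, epochs=2,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
```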
The MNIST dataset is far easier than CIFAR10 and, consequently, easier to over-train on, as seen from the high accuracies attained both with the pure CNN model and the Beta-VAE-Classif model up to \(\beta=10\) in Table I. The pure CNN-Classif baseline outperforms the Beta-VAE-Classif models on both datasets because of its singular objective, compared to having to optimize multiple objectives coming from several tasks. Preliminary results of several individual runs on MNIST initially hinted that at a certain degree of disentanglement \(\beta>1\) the accuracy of the pruned and unpruned models increased over the \(\beta=1\) entanglement baseline. This finding would have supported the intuition that applying disentanglement to a model initially guided by the presence of a pixel-level task, such as reconstruction, would facilitate discarding information that is not relevant for the relatively simpler, image-level task of classification. This observation may be captured by the \(\beta=5\) Beta-VAE-Classif model on the CIFAR10 dataset, since the upper range of accuracy exceeds that of the \(\beta=1\) model. The compression ratios of pruned to unpruned models are essentially the same among all models examined for each of the two datasets. The severe drop in accuracy for \(\beta=10\) on the CIFAR10 dataset appears to be indicative of the information preference problem (i.e., posterior collapse, KL vanishing), where the disentangled representations have become independent of the observation space.
## IV Conclusion and Future Works
The preliminary results presented in this work are inconclusive and only roughly suggest that for a certain value of \(\beta\) in the Beta-VAE framework, the latent space implicitly becomes sufficiently disentangled to allow for pruning to more easily discard useless information for the task of classification. Increasing the number of epochs over which pruning is performed as well as the number of latent dimensions may improve these results. Furthermore, having ground truth disentanglement labels together with an appropriately selected metric to directly measure the degree of disentanglement for the task of classification is expected to yield more robust results. We propose to use the dSprites [22] dataset along with the mutual information gap (MIG) [23] metric for this purpose.
Fig. 1: The Beta-VAE model augmented by the addition of a classifier head. The combined model, Beta-VAE-Classif, is trained with all three loss terms given by the KL divergence, reconstruction loss, and classification loss. During inference, the reconstruction head is removed, leaving the shaded in blocks of the diagram. The notations used are described in Section II.
## V Acknowledgement
This work is supported by the Luxembourg National Research Fund (FNR), under the project reference C21/IS/15965298/ELITE.
|
2308.09644 | A Potts model approach to unsupervised graph clustering with Graph
Neural Networks | Numerous approaches have been explored for graph clustering, including those
which optimize a global criteria such as modularity. More recently, Graph
Neural Networks (GNNs), which have produced state-of-the-art results in graph
analysis tasks such as node classification and link prediction, have been
applied for unsupervised graph clustering using these modularity-based metrics.
Modularity, though robust for many practical applications, suffers from the
resolution limit problem, in which optimization may fail to identify clusters
smaller than a scale that is dependent on properties of the network. In this
paper, we propose a new GNN framework which draws from the Potts model in
physics to overcome this limitation. Experiments on a variety of real world
datasets show that this model achieves state-of-the-art clustering results. | Co Tran, Mo Badawy, Tyler McDonnell | 2023-08-18T16:02:15Z | http://arxiv.org/abs/2308.09644v1 | # A Potts model approach to unsupervised graph clustering with Graph Neural Networks
###### Abstract
Numerous approaches have been explored for graph clustering, including those which optimize a global criteria such as modularity. More recently, Graph Neural Networks (GNNs), which have produced state-of-the-art results in graph analysis tasks such as node classification and link prediction, have been applied for unsupervised graph clustering using these modularity-based metrics. Modularity, though robust for many practical applications, suffers from the resolution limit problem, in which optimization may fail to identify clusters smaller than a certain scale that is dependent on properties of the network. In this paper, we propose a new GNN framework which draws from the Potts model in physics to overcome this limitation. Experiments on a variety of real world datasets show that this model achieves state-of-the-art clustering results.
## 1 Introduction
Graph clustering, also referred to as community detection, has been utilized in a multitude of practical applications in biology [25], social networks [8], and neuroscience [33]. According to [43], there are, generally speaking, three identifiable themes in graph clustering.
One approach utilizes classical, greedy algorithms, e.g., hierarchical clustering; see [17]. A second class of methods incorporates a properly selected global function or criterion; communities are then identified via an optimization of this function over the set of all possible partitions of the network graph, e.g., graph cuts [31, 39], spectral clustering [21, 2], modularity-based methods [19, 18], etc. The third approach makes use of probabilistic modeling of the network, where a probabilistic model is learned or approximated to maximize the likelihood of specific graph labeling configurations that yield the desired clustering results. Examples of this include stochastic block models [22], along with their degree-corrected version [10], and random cluster models [9].
Modularity-based methods, e.g. Louvain, see [3], have gained popularity due to the relative simplicity of the algorithm, ease of scalability to large graphs, and robustness of results. The Newman-Girvan modularity is easy to define for a given graph labeling, measuring how likely the labeling would have occurred by chance. Identifying an optimal labeling (node partition) that maximizes modularity typically yields good clustering results. Modularity approaches do suffer from some drawbacks, most notably the resolution limit issue, see [6], which makes it harder to identify relatively small clusters in larger graphs.
In this paper, we present two main contributions. The first is achieving new SoTA results compared to [36]. Our approach utilizes a GNN that is trained to optimize a criterion derived from a Potts model of the given graph. Potts models have already been used as random graph models for graph clustering purposes, see [16]. Moreover, previous work [27] shows that they are not prone to the resolution-limit issues that affect the traditional Girvan-Newman modularity-based approaches. The second contribution is a by-product of utilizing the Potts model approach: it allows us to manipulate, and potentially optimize, certain model parameters, e.g. the temperature as in [16], to control the level of granularity or resolution of the resulting clustering. Such parameters lend themselves in a natural way to the clustering problem, enabling the researcher to identify "reasonable" values and thus allowing a completely unsupervised solution to the clustering problem. This can be useful in many applications where knowing the number of clusters a priori is not feasible.
This paper is structured as follows: in section one...
## 2 Motivation
In recent years, graph neural networks have been utilized to perform various tasks on graph data structures, e.g. node classification and link/edge prediction. The message passing framework for GNNs, see for example [1], has been successful in generalizing earlier architectures such as convolutional and attention GNNs. This message-passing paradigm allows node and edge properties to be locally pooled within localized node neighborhoods, thus enabling the GNN to learn representations of the local and higher-order structure of the graph. Intuitively, one might think that pooling alone could generate representations adequate for solving graph clustering problems, but this was shown to be false in the recent paper [36], where the DMoN GNN was introduced as a (partially) unsupervised approach to graph clustering in which the total number of clusters is known. The DMoN approach has been shown to achieve SoTA results on standardized graph clustering datasets where the ground truth is known or easily inferred.
Our work in this paper has been motivated so as to contribute two improvements to the previous results achieved in [36]. More specifically:
* The loss function utilized in [36] was derived from the Girvan-Newman modularity criterion. This could potentially result in issues regarding modularity maximization, e.g. resolution-limit problems (see [6]), where relatively small clusters become harder to identify via a modularity-optimization approach.
* The unsupervised approach proposed in [36] still requires knowledge of the number of clusters a priori. This can prove to be a challenge in certain practical applications where there is a lack of problem-intrinsic or heuristic knowledge to hint towards a reasonable level of cluster granularity.
Our research into other possible approaches led us to [26], where the Potts model was discussed. In [26] the authors show that the Girvan-Newman modularity occurs naturally as a global criterion corresponding to a particular intuitive choice of a Hamiltonian (energy) function of the system (node labels/configurations). It turns out that minimizing the Hamiltonian function (which is equivalent to maximizing the corresponding global criterion) over the set of all node labels (configurations) of the graph yields optimal clustering results.
The Potts model has long been used in physics (statistical mechanics), see [24], [4]. It is a spin glass model that generalizes the Ising model, which has been used to model ferromagnetic materials, where molecules of the magnetic material are modeled as nodes with 2 possible spin values \(\{-1,1\}\). The Potts model generalizes this to a finite number, \(q\geq 2\), of spin states. The Potts model was generalized further by K. Fortuin and P. Kasteleyn in 1969 when they introduced the random cluster model, see [9] for more history and details. The Potts model has been studied for its own peculiar and interesting dynamics, for instance the abrupt phase transitions that are observed for large values of \(q\), contrasting the smooth transitions exhibited by the Ising model.
The Potts model can be used, like other similar energy-based approaches, to identify optimal graph clusterings, see [16]. Furthermore, and what makes the Potts model more appealing, [27] shows that the absolute Potts model does not suffer from the resolution-limit issue that is associated with the Girvan-Newman modularity approaches.
Finally, experiments conducted by the authors of [16] showed that by tweaking the temperature parameter of the Potts model, one can control the level of granularity of the clustering results and achieve accurate approximations of the true labels. This provides the researcher with a natural, intrinsic tool to help identify the optimal number of clusters for a given application. Further work could explore potential avenues to identify optimal values for such parameters.
## 3 Preliminaries
First, we lay out the notation for the graph clustering problem. The graph is represented as a tuple of vertices and edges, \(G=(V,E)\), with the numbers of nodes and edges being \(n=|V|\) and \(m=|E|\), respectively.
The adjacency matrix is a square matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\) with \(\mathbf{A}_{ij}=1\) if there is a link between nodes \(i\) and \(j\), and \(0\) otherwise. The node feature matrix is \(\mathbf{X}\in\mathbb{R}^{n\times l}\), with \(l\) being the number of features.
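For concreteness, here is a small sketch (our own, not from the paper) of building \(\mathbf{A}\) from an edge list, together with the symmetrically normalized adjacency \(\mathbf{\bar{A}}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\) used by the GCN layers in Section 5:

```python
import torch

# A small sketch (ours): adjacency matrix A from an edge list, plus the
# symmetrically normalized adjacency A_bar = D^{-1/2} A D^{-1/2}.
def normalized_adjacency(edges, n):
    A = torch.zeros(n, n)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0        # undirected graph
    d = A.sum(dim=1)                   # node degrees
    d_inv_sqrt = torch.where(d > 0, d.pow(-0.5), torch.zeros_like(d))
    return d_inv_sqrt.view(-1, 1) * A * d_inv_sqrt.view(1, -1)

A_bar = normalized_adjacency([(0, 1), (1, 2), (2, 0), (2, 3)], n=4)
```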
## 4 Potts Model
Following the works of FK **[ref]**, random cluster models were established as a generalization of the percolation, Ising, and Potts models, all of which had been introduced earlier. The Ising model was introduced in statistical mechanics to model ferromagnetic materials, featuring two spin states per node. The Potts model generalizes this to \(q\) spin states per node. These models were utilized to model different materials, and studied further for their own interesting characteristics, e.g. the abrupt phase transitions exhibited by the Potts model for large values of \(q\), contrasting the smooth transitions of the Ising model.
The Hamiltonian, to be minimized, for the Potts model is defined by penalizing two connected nodes if they do not belong to the same cluster. More precisely, see **[ref]**:
\[H(\{z_{ki}\})=\frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{q}\sum_{j=1}^{n}z_{ki}(1-z_{ kj})k_{ij}\alpha_{ij}=\sum_{(i,j)\text{ neighbors}}k_{ij}(1-\delta_{ij}),\]
where \(\{z_{ki}\}\) represents a given label assignment, with \(z_{ki}=1\) if the \(i\)-th node belongs to the \(k\)-th cluster, and zero otherwise, \(\delta_{ij}=\sum_{k}z_{ki}z_{kj}\), \(k_{ij}=k(x_{i},x_{j})=k(x_{i}-x_{j})\) is a kernel function, and \(\alpha_{ij}=1\) if \(i\neq j\) and the \(i\)-th and \(j\)-th nodes are connected, and zero otherwise.
_Equivalently_, see **[ref]**, let \(q\) be an integer satisfying \(q\geq 2\), and take as sample space the set of vectors \(\Sigma=\{1,2,\ldots,q\}^{V}\), where \(V\) is the set of nodes of the graph \(G\). So, each vertex of \(G\) may be in any of \(q\) states. For an edge \(e=\langle x,y\rangle\) and a configuration \(\sigma=(\sigma_{x}:x\in V)\in\Sigma\), we write \(\delta_{e}(\sigma)=\delta_{\sigma_{x},\sigma_{y}}\), where \(\delta_{i,j}\) is the Kronecker delta. The relevant probability measure is given by:
\[\pi_{\beta,q}(\sigma)=\frac{1}{Z_{P}}\exp(-\beta H^{\prime}(\sigma)),\]
for \(\sigma\in\Sigma\), where \(Z_{P}=Z_{P}(\beta,q)\) is the appropriate normalizing constant and the Hamiltonian \(H^{\prime}\) is given by:
\[H^{\prime}(\sigma)=-\sum_{e=\langle x,y\rangle}\delta_{e}(\sigma)\]
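As a toy illustration (our own), \(H^{\prime}(\sigma)\) and the unnormalized Gibbs weight can be computed directly for a small graph:

```python
import math

# A toy sketch (ours): H'(sigma) = -sum over edges of delta_e(sigma), and the
# unnormalized Gibbs weight exp(-beta * H'(sigma)); sigma[i] is in {1, ..., q}.
def hamiltonian(edges, sigma):
    return -sum(1 for x, y in edges if sigma[x] == sigma[y])

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
sigma = [1, 1, 1, 2]                   # nodes 0-2 share a state, node 3 differs
beta = 0.5
weight = math.exp(-beta * hamiltonian(edges, sigma))  # proportional to pi(sigma)
```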
### Adaptation as a graph clustering quality measuring function
Communities or clusters [5] are generally accepted to be groups of nodes that are close in distance or densely interconnected internally, and far in distance from or sparsely connected with other communities. Hence, a quality measuring function should:
* reward internal links within the same community and non-links between different communities (in the same spin state)
* conversely, penalize non-links between nodes in the same community and links between communities
Connecting the analogy of spin states and clusters, with \(\delta_{ij}\) being the Kronecker delta, the Potts model quality function is derived naturally as (a negative sign is added to obtain a conventional loss function to be minimized)
\[\mathcal{H}(\{\sigma\}) =-\sum_{i\neq j}a_{ij}\underbrace{A_{ij}\delta(\sigma_{i},\sigma _{j})}_{\text{internal links}}+\sum_{i\neq j}b_{ij}\underbrace{(1-A_{ij}) \delta(\sigma_{i},\sigma_{j})}_{\text{internal non-links}}\] \[+\sum_{i\neq j}c_{ij}\underbrace{A_{ij}(1-\delta(\sigma_{i}, \sigma_{j}))}_{\text{external links}}-\sum_{i\neq j}d_{ij}\underbrace{(1-A_{ ij})(1-\delta(\sigma_{i},\sigma_{j}))}_{\text{external non-links}}\]
with \(\sigma_{i}\in\{1,2,...,q\}\) denotes the spin state (or group index) of node \(i\) in the graph \(G(V,E)\).
With the general assumption that links and non-links are each weighted equally regardless of whether they are external or internal, i.e. \(a_{ij}=c_{ij}\) and \(b_{ij}=d_{ij}\), we can rewrite \(\mathcal{H}(\{\sigma\})\) as the Hamiltonian
\[\mathcal{H}(\{\sigma\})=-\sum_{i\neq j}\{a_{ij}A_{ij}-b_{ij}(1-A_{ij})\} \delta(\sigma_{i},\sigma_{j}) \tag{1}\]
A choice of weights \(a_{ij},b_{ij}\) makes adjusting the contributions of links and non-links easier by a change of parameter, and helps formulate the null model against which the partition is compared. A common choice is the random null configuration model [9] with a non-negative resolution parameter \(\gamma\), given by [26] as \(a_{ij}=1-\gamma p_{ij}\) and \(b_{ij}=\gamma p_{ij}\), where \(p_{ij}\) denotes the probability that a link exists between nodes \(i\) and \(j\), normalized such that \(\sum_{i\neq j}p_{ij}=2m\). In the case of \(\gamma=1\), which means the total amount of energy that can possibly be contributed by links and non-links is equal, equation 1 reduces to
\[\mathcal{H}(\{\sigma\}) =-\sum_{i\neq j}\{A_{ij}-\gamma p_{ij}\}\delta(\sigma_{i},\sigma_ {j})\] \[=-\sum_{c}\sum_{i\neq j}\{A_{ij}-\gamma p_{ij}\}\delta(\sigma_{i },c)\delta(\sigma_{j},c)\]
Rewriting the Hamiltonian using the expected number of edges \(\{e_{c}\}_{p_{ij}}=\sum_{i\neq j}p_{ij}\) and the probability \(p_{ij}=\frac{k_{i}k_{j}}{2m}\) of the null configuration model [20][23]:
\[\mathcal{H}(\{\sigma\}) =\frac{1}{2m}\sum_{c}[e_{c}-\gamma\frac{k_{c}^{2}}{2m}] \tag{2}\] \[=\frac{1}{2m}\sum_{i\neq j}\{A_{ij}-\gamma\frac{k_{i}k_{j}}{2m} \}\delta(\sigma_{i},\sigma_{j}) \tag{3}\]
Minimizing the Hamiltonian corresponds to a partition with desirable characteristics. However, a minimum is not necessarily unique, nor is it necessarily better in a general sense than a non-minimum. Additionally, the choice of \(\gamma\) has a definite impact on the detected community structure and has to be chosen carefully. The degree \(d_{i}\) of node \(i\) is the number of connections from \(V\) to \(i\); the vector \(\mathbf{d}=[d]_{i}\) contains the degrees of all the nodes in the graph.
### Spectral Form Optimization
In the discussion of the complexity of optimizing the spectral form of modularity in [18] and [36], the problem is proven to be **NP-hard**, and a relaxed version is empirically shown to be solvable efficiently. Let \(\mathbf{C}\in\mathbb{R}^{n\times k}\) be the soft cluster assignment matrix and \(\mathbf{d}\) be the degree vector. Then, with matrix \(\mathbf{P}\) defined as
\[\mathbf{P} =\mathbf{A}-\gamma\frac{\mathbf{d}\mathbf{d}^{T}}{2m} \tag{4}\] \[\mathbf{P}\mathbf{x} =\mathbf{A}\mathbf{x}-\gamma\frac{\mathbf{d}(\mathbf{d}^{T}\mathbf{x})}{2m} \tag{5}\]
the Hamiltonian can be reformulated in matrix form
\[\mathcal{H}=\frac{1}{2m}Tr(\mathbf{C}^{T}\mathbf{P}\mathbf{C}) \tag{6}\]
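A direct sketch (our naming, illustrative graph) of Eqs. (4) and (6), forming \(\mathbf{P}\) and evaluating the relaxed Hamiltonian for a soft assignment matrix \(\mathbf{C}\):

```python
import torch

# A direct translation of Eqs. (4) and (6): H = (1/2m) Tr(C^T P C).
def relaxed_hamiltonian(A, C, gamma=1.0):
    d = A.sum(dim=1)                        # degree vector
    two_m = d.sum()                         # 2m
    P = A - gamma * torch.outer(d, d) / two_m
    return torch.trace(C.t() @ P @ C) / two_m

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5)]
A = torch.zeros(6, 6)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
C = torch.softmax(torch.randn(6, 2), dim=1)  # soft assignment to k = 2 clusters
H = relaxed_hamiltonian(A, C)
```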
## 5 GNNs for graph clustering
### Previous work
Recently, several architecture settings have been proposed in the literature:
* Adaptive Graph Convolution (AGC) and Deep Attentional Embedded Graph Clustering (DAEGC) [38; 42]: learn the embedding of node features based on convolutional and attentional GNN structures, respectively.
* Deep Graph Infomax (DGI) [37]: learns the embedding by maximizing mutual information
* Neural Overlapping Community Detection (NOCD) [29]: combines the power of GNNs and the Bernoulli-Poisson probabilistic model under a reconstruction loss.
* Differential Pooling (DiffPool) [41]: one of the first attempts at developing a pooling layer that uses message passing to learn an end-to-end unsupervised cluster matrix, relying on minimizing the entropy of the assignment and a link prediction loss.
* MinCutPool [2]: utilizes the K-way normalized min cut problem as an optimization objective to find the partitions of interest
* DMoN [36]: learns the cluster assignment by optimizing the modularity function
* Top-k [7]: learns an embedding vector to obtain a score for each node. The nodes with the k highest scores are kept and the rest are dropped from the graph.
We can divide the recent GNN clustering algorithms into 2 classes. The first consists of algorithms that generate embeddings based on the node features and adjacency matrix; the embeddings are then clustered with the k-means algorithm. The candidates for this class are AGC, DAEGC, and DGI. The second class learns the assignment matrix end-to-end, either by a gate-keeping strategy (Top-k) or by optimizing a global function (DMoN, MinCutPool, DiffPool).
Additionally, there have been efforts to develop unsupervised algorithms for community detection that greedily optimize a Potts model loss function or, more commonly, modularity:
* Louvain [3]: optimizing modularity will, in a theoretical sense, result in the best possible partition. But going through all the combinations to find the best modularity is expensive and impractical. Louvain is a heuristic approach to solve that problem.
* Leiden [35]: a recent attempt to solve the issue of finding disconnected communities. Leiden introduces one more phase into the system, the refinement of the partition. Communities detected in the first modularity-optimizing phase may be split into smaller partitions in the second phase, inherently solving the problem of finding small communities (the resolution limit).
* Constant Potts Model [34]: the layout of the algorithm is the same as Louvain, but it optimizes the constant Potts model function instead of modularity. All the above methods are extremely efficient and well studied. However, they are limited to the graph structure only and do not take the node features into account for partitioning.
### Graph Neural Networks
The recent advance of adapting neural networks to graph-structured data is built on the message passing paradigm, which extracts feature information from the local neighborhood. The aggregated features are passed through a nonlinear transformation. In particular, the message passing architecture is described as
\[\mathbf{X}^{t+1}=\mathbf{MP}(\mathbf{A},\mathbf{X}^{t}) \tag{7}\]
with \(\mathbf{X}^{t+1},\mathbf{X}^{t}\) being the output and input node features of the \(t\)-th message passing layer, respectively.
There are several modifications in the realm of GNNs. A popular variant, Graph Convolutional Networks (GCN) [11], implements the message passing function as a combination of linear transformations and the ReLU function applied to the normalized adjacency matrix \(\mathbf{\bar{A}}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\)
\[\mathbf{X}^{t+1}=ReLU(\mathbf{\bar{A}}\mathbf{X}^{t}\mathbf{W}) \tag{8}\]
In this work, we employ the GCN layer [11] as the feature embedding encoder, with a skip layer passing through the SELU [12] activation function for better convergence, identical to the DMoN settings
\[\mathbf{X}^{t+1}=\text{SELU}(\mathbf{\bar{A}}\mathbf{X}^{t}\mathbf{W}+\mathbf{X}^{t}\mathbf{W}_{skip}) \tag{9}\]
## 6 Potts Model Networks (PMN)
In this section, we discuss the Potts Model Network (PMN), inspired by the modularity optimization of [36] and by the analysis of, and effort to overcome, the resolution limit [34]. Moreover, the introduction of the resolution parameter in the pooling layer and loss function provides the flexibility to adapt the Potts model loss function (6) to the graph structure inherent in the training data.
The Potts Model Network (PMN) is a GNN layer that takes the normalized adjacency matrix \(\mathbf{\bar{A}}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\) and obtains the soft cluster matrix \(\mathbf{C}\), with \(k\) being the maximum number of clusters
\[\mathbf{C}=\text{softmax}(MP(\mathbf{\bar{A}},\mathbf{X})),\quad\mathbf{C}\in\mathbb{R}^{n\times k} \tag{10}\]
The choice of message passing paradigm can be any suitable differentiable function; the use of GCN (possibly multi-layer) in this work is a setup to compare directly with DMoN, analyzing the difference between optimizing the Potts model function and modularity.
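A compact sketch of this encoder and cluster head, combining Eqs. (9) and (10); the class and variable names are ours:

```python
import torch
import torch.nn as nn

# One GCN layer with a skip connection and SELU (Eq. (9)), followed by a
# softmax head producing the soft cluster matrix C of Eq. (10).
class PottsPool(nn.Module):
    def __init__(self, in_dim, k):
        super().__init__()
        self.W = nn.Linear(in_dim, k, bias=False)       # message-passing weights
        self.W_skip = nn.Linear(in_dim, k, bias=False)  # skip-connection weights

    def forward(self, A_bar, X):
        H = torch.selu(A_bar @ self.W(X) + self.W_skip(X))
        return torch.softmax(H, dim=1)                  # rows of C sum to one

n, in_dim, k = 6, 16, 4
C = PottsPool(in_dim, k)(torch.eye(n), torch.randn(n, in_dim))
```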
### Loss function
The proposed loss function is an aggregation of the Hamiltonian loss (6) \(\mathcal{L}_{\mathcal{H}}\), the collapse regularization [36] \(\mathcal{L}_{c}\), and the resolution parameter normalization \(\mathcal{L}_{\gamma}\)
\[\mathcal{L}_{PMN} =\mathcal{L}_{\mathcal{H}}+\mathcal{L}_{c}+\mathcal{L}_{\gamma}\] \[\mathcal{L}_{\mathcal{H}} =\frac{1}{2m}Tr(\mathbf{C}^{T}\mathbf{P}\mathbf{C})\] \[\mathcal{L}_{c} =\frac{k}{\sqrt{n}}\left\|\sum_{i}\mathbf{C}_{i}^{T}\right\|_{F}-1\] \[\mathcal{L}_{\gamma} =\left\|\gamma-\gamma_{max}\right\|\]
The introduction of \(\gamma\) as a trained parameter is to motivate the PMN to learn the best resolution given the null configuration model. As discussed in [35], the influence of \(\gamma\) is interpreted as a threshold for detected communities. The intra-cluster density is required to be at least \(\gamma\), while the inter-cluster density should be lower than \(\gamma\). The higher the resolution parameter, the larger the number of communities detected [26]. The resolution parameter \(\gamma\) was first introduced in [13] to address a major problem with using modularity as a global optimization function, called the resolution limit [6].
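A sketch of the combined objective (our naming; the regularization weights of 1 and 0.01 follow the configuration reported in Section 7, and sign/scaling conventions follow the equations as printed):

```python
import torch

# Full PMN objective: the Hamiltonian trace of Eq. (6) computed without
# materializing P, the collapse regularizer, and the gamma penalty. gamma is
# a trainable scalar pulled away from gamma_max by the last term.
def pmn_loss(A, C, gamma, gamma_max=5.0, w_collapse=1.0, w_gamma=0.01):
    n, k = C.shape
    d = A.sum(dim=1)
    two_m = d.sum()
    dC = d @ C                                   # d^T C, shape (k,)
    L_H = (torch.trace(C.t() @ (A @ C)) - gamma * (dC @ dC) / two_m) / two_m
    L_c = (k / n**0.5) * torch.linalg.norm(C.sum(dim=0)) - 1.0
    L_g = torch.abs(gamma - gamma_max)
    return L_H + w_collapse * L_c + w_gamma * L_g

gamma = torch.nn.Parameter(torch.tensor(1.0))    # optimized jointly with the GNN
```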
### Resolution-limit
[6] shows that modularity as a quality measuring function inherently has a filter scale that depends on the size of the network. Communities that are smaller than this filter scale may not be detected even if they are complete and fully connected. The reason lies in the use of modularity, a sum over modules, as a global metric for optimization. Finding the best modularity is a trade-off between the number of communities and the modular value of each term. An increase in the number of communities does not necessarily yield an increase in the global modularity, because the modular value of each community will be smaller.
### Optimizing resolution
A limitation of frameworks that coarsen node features and edge links into a cluster assignment matrix is that the maximum number of clusters has to be defined before the training process. There is no indication of how to choose a suitable number of clusters \(k\). Modularity, as described above, suffers from the resolution limit and is not necessarily a good guide for tuning the number of clusters. For the Potts model, the resolution parameter \(\gamma\) plays the role of an indicator of community granularity in a heuristic greedy approach. Applying the Potts model as a loss function and making \(\gamma\) a trainable parameter provides two competitive edges:
* An indication of the suitable number of clusters
* Adaptation of the loss function to the graph sparsity and structure
As shown in Figure 1, the PMN \(\gamma\) parameter stabilizes first during training, leading to the stable convergence of the Potts loss. This demonstrates the ability to adapt the loss function to the graph structure of each dataset.
## 7 Experiments & results
**Benchmark datasets:** we borrow the benchmark results from [36] on 10 real-world datasets, ranging from the citation networks Cora, CiteSeer, and PubMed [28] and coauthorship networks based on the Microsoft Academic Graph (CS, Physics, Med, Chem, Engineering) [29, 30, 32], to the Amazon co-purchase graphs (Photo, Computers) [15]. The node features are bags-of-words of abstracts, paper keywords, or reviews. The corresponding labels are paper topics, fields of study, or product categories.
**Metrics:** we use standard metrics to measure cluster quality. On the graph side, we employ conductance (C) [40], which measures the edge volume that points outside the cluster, and modularity (Q) [19], against existing benchmarks. For correlation with the ground truth labels, we use the normalized mutual information score (NMI) and the F1 score [14]. For conductance, lower is better, while for the rest of the metrics, higher is better. We scale all metrics to the range of \(100\) for easier comparison.
**Model configurations:** in our experiments, the GNN model used for PMN is built on top of the DMoN architecture, and all parameters are kept the same for direct comparison. The baseline results that we compare with are from [36]. The encoder includes \(1\) layer with \(64\) units, and the maximum number of clusters \(k\) equals \(16\). Dropout is kept at \(0.5\) and the maximum \(\gamma\) is fixed at \(5\). The weight matrices are initialized from \(\mathcal{N}(0,1)\) and \(\gamma_{init}=1\). The collapse and \(\gamma\) regularization weights are \(1\) and \(0.01\), respectively. We run the experiments for 4 models: MinCut, MinCut with orthogonal loss (Ortho) [2], DMoN [36], and PMN. All the parameters of MinCut and DMoN are the same as described above; we also borrow the results from [36] to provide a comprehensive comparison between all methods. The results are averaged over 10 runs with random seeds.
**Results:** the performance of Potts and of our runs of DMoN, Ortho, and MinCut is shown in Tables 4, 2, and 3. The results display strong and consistent performance of PMN over the 10 datasets. PMN consistently performs worse in modularity (Q) but better in the other metrics compared to DMoN with the same configuration. Notably, PMN performs extremely well on the coauthorship datasets [30]. PMN achieves state-of-the-art conductance, F1, and NMI on Cora, PubMed, and all Coauthor datasets. The lack of performance on the modularity measure is expected, because PMN does not optimize modularity directly; it finds the best resolution for the graph data while minimizing the Potts loss function (6). Because of that, we achieve significantly better results in F1, NMI, and conductance on Coauthor Phys, improving from the second best of \(42.9\) (F1) and \(23.5\) (C) to \(88.2\) and \(5.5\), respectively. In general, DMoN is better at maximizing modularity, consistently \(10-30\%\) better than PMN. In our runs, we had difficulty reproducing all the results from DMoN [36] when following the stated parameter configuration. Even so, PMN still shows better results most of the time, except on the citation datasets. Overall, PMN performs dominantly better than its counterpart DMoN, except on the modularity measure.
## 8 Conclusions
In this paper, we introduced a generalized framework to address important limitations of modularity optimization and proposed a new trainable measure of clustering quality. The Potts Model Network approach relies on the existing application of the Potts model in statistical mechanics to community detection. Moreover, we explored the performance of the PMN model on 10 real-life datasets and achieved desirable results compared to existing pooling frameworks.
A future research direction could be developing an approach that utilizes the trained \(\gamma\) to tune
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Dataset & \(|V|\) & \(|E|\) & \(|X|\) & \(|Y|\) \\ \hline Cora & 2708 & 5278 & 1433 & 7 \\ \hline CiteSeer & 3327 & 4614 & 3703 & 6 \\ \hline Pubmed & 19717 & 44325 & 500 & 3 \\ \hline Amazon Computers & 13752 & 143604 & 767 & 10 \\ \hline Amazon Photo & 7650 & 71831 & 745 & 8 \\ \hline Coauthor Eng & 14927 & 49305 & 4839 & 16 \\ \hline Coauthor CS & 18333 & 81894 & 6805 & 15 \\ \hline Coauthor Phys & 34493 & 247962 & 8415 & 5 \\ \hline Coauthor Chem & 35409 & 157358 & 4877 & 14 \\ \hline Coauthor Med & 63282 & 810314 & 5538 & 17 \\ \hline \end{tabular}
\end{table}
Table 1: Detail information of the benchmark datasets: number of nodes \(|V|\), number of edges \(|E|\), dimension of the node features \(|X|\), and number of labels \(|Y|\).
Figure 1: The convergence of the Potts loss and \(\gamma\) during optimization; \(\gamma\) converges to a stable value, which leads to the stable convergence of the Potts loss
the number of clusters. Investigating the effect of sparsity on the performance of the Potts model is also promising, based on the empirical results from our experiments.
## 9 Acknowledgement
We would like to thank SailPoint Technologies for the computational support.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{Coauthor Phys} & \multicolumn{4}{c|}{Coauthor Chem} & \multicolumn{4}{c|}{Coauthor Med} \\ \cline{2-13} & \multicolumn{2}{c|}{Graph} & \multicolumn{2}{c|}{Labels} & \multicolumn{2}{c|}{Graph} & \multicolumn{2}{c|}{Labels} & \multicolumn{2}{c|}{Graph} & \multicolumn{2}{c|}{Labels} \\ \hline Metrics & C & Q & NMI & F1 & C & Q & NMI & F1 & C & Q & NMI & F1 \\ \hline k-m(feat) & 61.7 & 19.8 & 18.5 & 27.0 & 60.5 & 30.3 & 24.5 & 29.2 & 55.8 & 33.4 & 19.4 & 24.4 \\ \hline SBM & 15.4 & **77.3** & 36.2 & 30.2 & 14.2 & 78.1 & 15.3 & 19.1 & 39.0 & 53.5 & 16.4 & 16.7 \\ \hline DeepWalk & 62.1 & 30.7 & 24.3 & 24.8 & 68.1 & 24.3 & 27.6 & 24.8 & 16.6 & **75.3** & 22.9 & 17.2 \\ \hline AGC & 48.9 & 43.2 & 34.1 & 28.9 & 41.9 & 50.0 & 25.5 & 27.5 & 44.9 & 46.8 & 18.2 & 18.4 \\ \hline SDCN & 37.5 & 50.8 & 27.9 & 29.9 & 20.0 & 62.3 & **31.4** & **41.9** & 22.4 & 50.3 & 19.5 & 29.9 \\ \hline DAEGC & 56.8 & 33.5 & 8.3 & 13.6 & 47.6 & 36.4 & 4.3 & 18.0 & 53.6 & 37.5 & 4.4 & 11.6 \\ \hline DGI & 28.0 & 64.0 & 52.7 & 40.1 & 17.5 & 73.7 & 40.4 & 39.4 & 82.9 & 9.6 & 22.0 & 26.4 \\ \hline NOCD & 14.7 & 78.3 & 46.3 & 36.7 & 6.8 & 84.4 & 20.0 & 24.1 & 21.7 & 69.6 & 25.5 & 20.8 \\ \hline DiffPool & 26.1 & 66.3 & 32.9 & 34.4 & 26.0 & 63.4 & 20.0 & 23.5 & 32.9 & 56.8 & 20.2 & 26.3 \\ \hline MinCut & 29.3 & 71.5 & 30.1 & 25.0 & 14.1 & 82.2 & 25.9 & 20.1 & 22.3 & 64.5 & 24.1 & 28.5 \\ \hline Ortho & 19.2 & 65.6 & 29.4 & 26.6 & 15.4 & 79.2 & 30.1 & 19.2 & 47.7 & 38.2 & 21.0 & 18.4 \\ \hline DMoN & 11.8 & 75.8 & 45.6 & 35.9 & 6.9 & **83.2** & 27.9 & 31.4 & 17.2 & 68.3 & 30.2 & 40.1 \\ \hline Potts & **5.5** & 57.8 & **49.7** & **54.7** & 6.0 & 81.2 & 29.2 & 36.9 & **7.4** & 59.4 & **32.4** & **56.3** \\ \hline \end{tabular}
\end{table}
Table 2: Performance of benchmark models on medium-size graph datasets
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{Coauthor Phys} & \multicolumn{4}{c|}{Coauthor Chem} & \multicolumn{4}{c|}{Coauthor Med} \\ \cline{2-13} & \multicolumn{2}{c|}{Graph} & \multicolumn{2}{c|}{Labels} & \multicolumn{2}{c|}{Graph} & \multicolumn{2}{c|}{Labels} & \multicolumn{2}{c|}{Graph} & \multicolumn{2}{c|}{Labels} \\ \cline{2-13} & C & Q & NMI & F1 & C & Q & NMI & F1 & C & Q & NMI & F1 \\ \hline k-m(feat) & 57.0 & 19.4 & 30.6 & 42.9 & 42.9 & 18.2 & 13.9 & 35.1 & 54.7 & 19.3 & 11.8 & 31.7 \\ \hline SBM & 25.9 & **66.9** & 45.4 & 30.4 & 18.4 & 74.6 & 25.4 & 25.0 & 21.1 & 72.0 & 36.1 & 31.1 \\ \hline DeepWalk & 44.7 & 47.0 & 43.5 & 24.3 & 14.0 & 74.8 & 36.5 & 33.8 & 16.6 & 72.1 & 43.1 & 39.4 \\ \hline SDCN & 32.1 & 52.8 & 50.4 & 39.9 & 29.9 & 58.7 & 33.3 & 32.8 & 34.8 & 54.2 & 25.2 & 26.5 \\ \hline DGI & 38.6 & 51.2 & 51.0 & 30.6 & 31.6 & 60.6 & 40.8 & 32.9 & 35.7 & 56.5 & 34.8 & 27.7 \\ \hline NOCD & 25.7 & 65.5 & 51.9 & 28.7 & 19.2 & 73.1 & 43.1 & 40.1 & 22.0 & 69.7 & 42.5 & 37.6 \\ \hline MinCut & 29.7 & 63.1 & 42.3 & 32.4 & 21.4 & 70.1 & 41.5 & 33.5 & 24.2 & 70.1 & 43.1 & 33.4 \\ \hline Ortho & 31.1 & 63.5 & 46.4 & 35.8 & 21.6 & 69.2 & 39.1 & 29.3 & 21.5 & 68.3 & 39.5 & 31.5 \\ \hline DMoN & 23.5 & 66.3 & 53.8 & 37.8 & 21.8 & 71.2 & 46.3 & 44.9 & 22.2 & 72.2 & 51.5 & 50.9 \\ \hline Potts & **5.5** & 49.5 & **72.2** & **88.2** & **11.3** & **75.7** & **58.1** & **57.1** & **14.9** & **72.6** & **60.4** & **59.7** \\ \hline \end{tabular}
\end{table}
Table 3: Performance of benchmark models on the Coauthor datasets
## Appendix A Appendix
|
2305.03395 | Sparsifying Bayesian neural networks with latent binary variables and
normalizing flows | Artificial neural networks (ANNs) are powerful machine learning methods used
in many modern applications such as facial recognition, machine translation,
and cancer diagnostics. A common issue with ANNs is that they usually have
millions or billions of trainable parameters, and therefore tend to overfit to
the training data. This is especially problematic in applications where it is
important to have reliable uncertainty estimates. Bayesian neural networks
(BNN) can improve on this, since they incorporate parameter uncertainty. In
addition, latent binary Bayesian neural networks (LBBNN) also take into account
structural uncertainty by allowing the weights to be turned on or off, enabling
inference in the joint space of weights and structures. In this paper, we will
consider two extensions to the LBBNN method: Firstly, by using the local
reparametrization trick (LRT) to sample the hidden units directly, we get a
more computationally efficient algorithm. More importantly, by using
normalizing flows on the variational posterior distribution of the LBBNN
parameters, the network learns a more flexible variational posterior
distribution than the mean field Gaussian. Experimental results show that this
improves predictive power compared to the LBBNN method, while also obtaining
more sparse networks. We perform two simulation studies. In the first study, we
consider variable selection in a logistic regression setting, where the more
flexible variational distribution leads to improved results. In the second
study, we compare predictive uncertainty based on data generated from
two-dimensional Gaussian distributions. Here, we argue that our Bayesian
methods lead to more realistic estimates of predictive uncertainty. | Lars Skaaret-Lund, Geir Storvik, Aliaksandr Hubin | 2023-05-05T09:40:28Z | http://arxiv.org/abs/2305.03395v1 | # Sparsifying Bayesian neural networks with latent binary variables and normalizing flows
###### Abstract
Artificial neural networks (ANNs) are powerful machine learning methods used in many modern applications such as facial recognition, machine translation, and cancer diagnostics. A common issue with ANNs is that they usually have millions or billions of trainable parameters, and therefore tend to overfit to the training data. This is especially problematic in applications where it is important to have reliable uncertainty estimates. Bayesian neural networks (BNN) can improve on this, since they incorporate parameter uncertainty. In addition, latent binary Bayesian neural networks (LBBNN) also take into account structural uncertainty by allowing the weights to be turned on or off, enabling inference in the joint space of weights and structures. In this paper, we will consider two extensions to the LBBNN method: Firstly, by using the local reparametrization trick (LRT) to sample the hidden units directly, we get a more computationally efficient algorithm. More importantly, by using normalizing flows on the variational posterior distribution of the LBBNN parameters, the network learns a more flexible variational posterior distribution than the mean field Gaussian. Experimental results show that this improves predictive power compared to the LBBNN method, while also obtaining more sparse networks. We perform two simulation studies. In the first study, we consider variable selection in a logistic regression setting, where the more flexible variational distribution leads to improved results. In the second study, we compare predictive uncertainty based on data generated from two-dimensional Gaussian distributions. Here, we argue that our Bayesian methods lead to more realistic estimates of predictive uncertainty.
## 1 Introduction
The idea of using a mathematical model to imitate how the brain works was first introduced in McCulloch and Pitts (1943). However, it was not until more recent years that the true power of these models could be harnessed with the idea of using backpropagation Rumelhart et al. (1986) to train the model with gradient descent. With the advent of modern GPU architectures, deep neural networks can be scaled to big data, and have shown to be very successful on a variety of tasks including computer vision Voulodimos et al. (2018), natural language processing Young et al. (2018) and reinforcement learning Li (2017). Modern deep learning architectures can have billions of trainable parameters Khan et al. (2020). Due to the large number of parameters in the model, the network has the capacity to overfit, and therefore may not generalize well to unseen data. Various regularization methods are used to try to deal with this, such as early stopping Prechelt (1998), dropout Srivastava et al. (2014) or data augmentation Shorten and Khoshgoftaar (2019). These techniques are heuristic and therefore it is not always clear how to use them and how well they work in
practice. It is also possible to reduce the number of parameters in the network with pruning. This is typically done with the dense-to-sparse method (Han et al., 2017). Here, a dense network is trained, while the importance of the weights (i.e. their magnitude) is recorded. Then, the weights that fall below the sparsity threshold (a hyperparameter) are removed. In Frankle and Carbin (2018), it is hypothesized that in randomly initialized dense networks, there exists a sparse sub-network (the winning lottery ticket) that can be trained in isolation and obtain the same test accuracy as the original dense network. Instead of training and pruning once, referred to as one-shot pruning, this process is repeated sequentially several times, removing a certain percentage of the remaining weights each time, which then results in networks that have a higher degree of sparsity than the ones found with one-shot pruning. However, this comes at a higher computational cost. Further refinements to this are done in Evci et al. (2020), where the network starts off dense, and dynamically removes the weights with the smallest magnitude, while also adding new connections based on gradient information. Again, these approaches are heuristic and lack a solid theoretical foundation. Another issue with deep learning models is that they often make overconfident predictions. In Szegedy et al. (2013), it was shown that adding a small amount of noise to an image can trick a classifier into making a completely wrong prediction (with high confidence), even though the image looks exactly the same to the human eye. The opposite is also possible, images that are white noise can be classified with almost complete certainty to belong to a specific class (Nguyen et al., 2015).
Bayesian neural networks (BNNs) were presented by Neal (1992), MacKay (1995), and Bishop (1997). They use a rigorous Bayesian methodology to handle parameter and prediction uncertainty and to incorporate prior knowledge. In many cases, this results in more reliable solutions with less overfitting; however, this comes at the expense of extremely high computational costs. Until recently, inference on Bayesian neural networks could not scale to large multivariate data due to limitations of standard Markov chain Monte Carlo (MCMC) approaches, the main quantitative procedure used for complex Bayesian inference. Recent developments of variational Bayesian approaches (Gal, 2016) allow us to approximate the posterior of interest and lead to more scalable methods.
Still, BNNs tend to be heavily over-parameterized and difficult to interpret. It is therefore interesting to consider sparsity-inducing methods from a Bayesian perspective. This is typically done by using sparsity-inducing priors, as in variational dropout (Kingma et al., 2015; Molchanov et al., 2017), which uses the independent log uniform prior on the weights. This is an improper prior, meaning that it is not integrable and thus not a valid probability distribution. As noted in Hron et al. (2017), using this prior, combined with commonly used likelihood functions leads to an improper posterior, meaning that the obtained results can not be explained from a Bayesian modeling perspective. It is argued that variational dropout should instead be interpreted as penalized maximum likelihood estimation of the variational parameters. Additionally, Gale et al. (2019) finds that while variational dropout works well on smaller networks, it gets outperformed by the heuristic (non-Bayesian) methods on bigger networks. Another type of sparsity inducing prior is the independent scale mixture prior, where Blundell et al. (2015) proposed a mixture of two Gaussian densities, where using a small variance for the second mixture component leads to many of the weights having a prior around 0. Another possibility is to use the independent spike-and-slab prior, most commonly used in Bayesian linear regression models. This prior is used in the la
tent binary Bayesian neural networks (LBBNN) introduced by [19, 2023]. The prior was studied from a theoretical perspective in [20]. In [19] it was empirically shown that using this prior will induce a very sparse network (around 90 % of the weights were removed) while maintaining good predictive power. The LBBNN method takes into account uncertainty around whether each weight is included or not (structural uncertainty) and uncertainty in the included weights (parameter uncertainty) given a structure, allowing for a fully Bayesian approach to network sparsification (see Figure 1). In this paper, we show that transforming the variational posterior distribution with normalizing flows can result in even more sparse networks while improving predictive power compared to the mean field approach in [19]. Additionally, we demonstrate that the flow network handles predictive uncertainty well, and performs better than the mean-field methods at variable selection in a logistic regression setting with highly correlated variables.
## 2 The model
Given the explanatory variable \(\mathbf{x}\in\mathbb{R}^{n}\), and the response variable \(\mathbf{y}\in\mathbb{R}^{m}\), a neural network models the function
\[\mathbf{y}\sim f(\mathbf{\eta}(\mathbf{x})).\]
The mean vector \(\mathbf{\eta}\) is obtained through a composition of semi-affine transformations:
\[u_{j}^{(l)}=\sigma^{(l)}\bigg{(}\sum_{i=1}^{n^{(l-1)}}u_{i}^{(l-1)}\gamma_{ij} ^{(l)}w_{ij}^{(l)}+b_{j}^{(l)}\bigg{)},j=1,\ldots,n^{(l)},l=1,\ldots,L, \tag{1}\]
with \(\eta_{j}=u_{j}^{(L)}\). Additionally, \(\mathbf{u}^{(l-1)}\) denotes the inputs from the previous layer (with \(\mathbf{u}^{0}=\mathbf{x}\) corresponding to the explanatory variables), the \(w_{ij}^{(l)}\)'s are the weights, the \(b_{j}^{(l)}\)'s are the bias terms, and \(n^{(l)}\) (and \(n^{(0)}=n\)) the number of inputs at layer \(l\) of a total \(L\) layers. Further, we have the elementwise non-linear activation functions \(\sigma^{(l)}\). The additional parameters
Figure 1: A dense network on the left, a possible sparse structure on the right.
\(\gamma_{ij}^{(l)}\in\{0,1\}\) denote binary inclusion variables for the corresponding weights. From here on, we drop the layer notation for readability, since the layers will always be considered independent.
Following Hubin and Storvik (2019), Polson and Rockova (2018), we consider a structure to be defined by the configuration of the binary vector \(\mathbf{\gamma}\), and the weights of each structure conditional on this configuration. To consider uncertainty in both structures and weights, we use the spike-and-slab prior, where for each (independent) layer of the network, we also consider the weights to be independent, resulting in the LBBNN model:
\[p(w_{ij}|\gamma_{ij}) =\gamma_{ij}\mathcal{N}(0,\sigma^{2})+(1-\gamma_{ij})\delta(w_{ij})\] \[p(\gamma_{ij}) =\text{Bernoulli}(\alpha).\]
Here, \(\delta(\cdot)\) is the Dirac delta function, which is considered to be zero everywhere except for a spike at zero. In addition, \(\sigma^{2}\) and \(\alpha\) denote the prior variance and the prior inclusion probability of the weights, respectively. In practice, we use the same variance and inclusion probability across all the layers and weights, but this is not strictly necessary.
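To make the generative model concrete, here is a toy sketch (our own; sizes are illustrative) that draws one layer's weights from the spike-and-slab prior and applies the gated transformation of Eq. (1):

```python
import torch

# Spike-and-slab draw: gamma_ij ~ Bernoulli(alpha); w_ij ~ N(0, sigma^2) when
# gamma_ij = 1, and w_ij = 0 (the Dirac spike) otherwise.
def sample_spike_and_slab(n_in, n_out, alpha=0.1, sigma=1.0):
    gamma = torch.bernoulli(torch.full((n_in, n_out), alpha))
    w = sigma * torch.randn(n_in, n_out)
    return gamma * w, gamma

u_prev = torch.randn(1, 4)
W, Gamma = sample_spike_and_slab(4, 3)
b = torch.zeros(3)
u_next = torch.sigmoid(u_prev @ W + b)   # the gating is already folded into W
```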
## 3 Bayesian inference
The main motivation behind using LBBNNs is that we are able to take into account both structural and parameter uncertainty, whereas standard BNNs are only concerned with parameter uncertainty. By doing inference through the posterior predictive distribution, we average over all possible structural configurations, and parameters. For a new observation \(\tilde{\mathbf{y}}\) given data, \(\mathcal{D}\), we have:
\[p(\tilde{\mathbf{y}}|\mathcal{D})=\sum_{\mathbf{\gamma}}\int_{\mathbf{w}}p(\tilde{\mathbf{y}}| \mathbf{w},\mathbf{\gamma},\mathcal{D})p(\mathbf{w},\mathbf{\gamma}|\mathcal{D})\,d\mathbf{w}.\]
This expression is intractable due to the ultra-high dimensionality of \(\mathbf{w}\) and \(\mathbf{\gamma}\), and using Monte Carlo sampling as an approximation is also challenging due to the difficulty of obtaining samples from the posterior distribution, \(p(\mathbf{w},\mathbf{\gamma}|\mathcal{D})\). Instead of trying to sample from the true posterior, we turn it into an optimization problem, using variational inference (VI, Blei et al., 2017). The key idea is that we replace the true posterior distribution with an approximation, \(q_{\mathbf{\theta}}(\mathbf{w},\mathbf{\gamma})\), with \(\mathbf{\theta}\) denoting some variational parameters. We learn the variational parameters that make the approximate posterior as close as possible to the true posterior. Closeness is measured through the Kullback-Leibler (KL) divergence,
\[\text{KL}\left[q_{\mathbf{\theta}}(\mathbf{w},\mathbf{\gamma})||p(\mathbf{w},\mathbf{\gamma}| \mathcal{D})\right]=\sum_{\mathbf{\gamma}}\int_{\mathbf{w}}q_{\mathbf{\theta}}(\mathbf{w},\bm {\gamma})\log\frac{q_{\mathbf{\theta}}(\mathbf{w},\mathbf{\gamma})}{p(\mathbf{w},\mathbf{\gamma}| \mathcal{D})}\,d\mathbf{w}.\]
Minimizing the KL-divergence (with respect to \(\mathbf{\theta}\)) is equivalent to maximizing the evidence lower bound (ELBO):
\[\text{ELBO}(q_{\mathbf{\theta}})=\mathbb{E}_{q_{\mathbf{\theta}}(\mathbf{w},\mathbf{\gamma})} \left[\log p(\mathcal{D}|\mathbf{w},\mathbf{\gamma})\right]-\text{KL}\left[q_{\mathbf{ \theta}}(\mathbf{w},\mathbf{\gamma})||p(\mathbf{w},\mathbf{\gamma})\right]. \tag{2}\]
The objective is thus to maximize the expected log-likelihood while penalizing with respect to the KL divergence between the prior and the variational posterior. How good the approximation becomes depends on the family of variational distributions \(\{q_{\mathbf{\theta}},\mathbf{\theta}\in\Theta\}\) that is chosen.
### Choices of variational families
A common choice (Blundell et al., 2015) for the approximate posterior in (dense) Bayesian neural networks is the mean-field Gaussian distribution. For simplicity of notation, denote now by \(\mathbf{W}\) the set of weights corresponding to a specific layer. Then
\[q_{\mathbf{\theta}}(\mathbf{W})=\prod_{i=1}^{n_{in}}\prod_{j=1}^{n_{out}}\mathcal{ N}(\tilde{\mu}_{ij},\tilde{\sigma}_{ij}^{2}),\]
where \(n_{in}\) and \(n_{out}\) denote the number of neurons in the previous and current layer, respectively. Weights corresponding to different layers are assumed independent as well.
In Hubin and Storvik (2019), the mean-field Gaussian distribution for Bayesian neural networks is extended to include the binary inclusion variables following Carbonetto and Stephens (2012):
\[\begin{split} q_{\mathbf{\theta}}(\mathbf{W}|\mathbf{\Gamma})& =\prod_{i=1}^{n_{in}}\prod_{j=1}^{n_{out}}[\gamma_{ij}\mathcal{N} (\tilde{\mu}_{ij},\tilde{\sigma}_{ij}^{2})+(1-\gamma_{ij})\delta(w_{ij})];\\ q_{\tilde{\alpha}_{ij}}(\gamma_{ij})&=\text{ Bernoulli}(\tilde{\alpha}_{ij}).\end{split} \tag{3}\]
However, the mean-field Gaussian distribution is typically too simple to be able to capture the complexity of the true posterior distribution. We follow Ranganath et al. (2016), and introduce a set of latent variables \(\mathbf{z}\) to model dependencies between the weights, and use the following variational posterior distribution:
\[\begin{split} q_{\mathbf{\theta}}(\mathbf{W}|\mathbf{\Gamma},\mathbf{z})& =\prod_{i=1}^{n_{in}}\prod_{j=1}^{n_{out}}[\gamma_{ij}\mathcal{N} (z_{i}\tilde{\mu}_{ij},\tilde{\sigma}_{ij}^{2})+(1-\gamma_{ij})\delta(w_{ij})] ;\\ q_{\tilde{\alpha}_{ij}}(\gamma_{ij})&=\text{ Bernoulli}(\tilde{\alpha}_{ij}),\end{split} \tag{4}\]
where \(\mathbf{z}\) follows a distribution \(q_{\mathbf{\phi}}(\mathbf{z})\). For an illustration of the difference between the two variational distributions (3) and (4), see Figure 2. Our novel variational distribution is thus able to take into account both weight and structural uncertainty, in addition to modeling dependencies between the weights. As for \(\mathbf{W}\), also \(\mathbf{z}\) is a set of variables related to a specific layer, independence between layers is assumed also for \(\mathbf{z}\)'s. To increase the flexibility of the variational posterior, we apply normalizing flows (Rezende and Mohamed, 2015) to \(q_{\mathbf{\phi}}(\mathbf{z})\). In general, a normalizing flow is a composition of invertible transformations of some initial (simple) random variable \(\mathbf{z}_{0}\),
\[\mathbf{z}_{k}=f_{k}(\mathbf{z}_{k-1}),\quad k=1,...,K.\]
The log density of the transformed variable \(\mathbf{z}=\mathbf{z}_{K}\) is given as,
\[\log q_{K}(\mathbf{z}_{K})=\log q_{0}(\mathbf{z}_{0})-\sum_{k=1}^{K}\log\left|\det \frac{\partial\mathbf{z}_{k}}{\partial\mathbf{z}_{k-1}}\right|. \tag{5}\]
We are typically interested in transformations that have a Jacobian determinant that is tractable, and fast to compute, in addition to being highly flexible. Transforming the
variational posterior distribution in a BNN with normalizing flows was first done in Louizos and Welling (2017), who coined the term multiplicative normalizing flows (MNF), with the transformations applied in the activation space instead of the weight space. As the weights are much higher-dimensional, the number of flow parameters, and thus the number of parameters of the variational distribution, would otherwise explode quickly. We will do the same here. The main difference in our work is that by using the variational posterior in (4), we also get sparse networks.
For the normalizing flows, we will use the inverse autoregressive flow (IAF), with numerically stable updates, introduced by Kingma et al. (2016). It works by transforming the input in the following way:
\[\mathbf{z}_{k-1} =\text{input} \tag{6}\] \[\mathbf{m}_{k},\mathbf{s}_{k} =\text{NeuralNetwork}(\mathbf{z}_{k-1})\] \[\mathbf{\kappa}_{k} =\text{sigmoid}(\mathbf{s}_{k})\] \[\mathbf{z}_{k} =\mathbf{\kappa}_{k}\odot\mathbf{z}_{k-1}+(1-\mathbf{\kappa}_{k})\odot\mathbf{m}_{k},\]
where \(\odot\) denotes elementwise multiplication. Assuming the neural network in (6) is autoregressive (i.e. \(m_{k,i}\) and \(s_{k,i}\) can only depend on \(z_{k-1,1:i-1}\)), we get a lower triangular Jacobian and
\[\log\left|\det\frac{\partial\mathbf{z}_{k}}{\partial\mathbf{z}_{k-1}}\right|=\sum_{i= 1}^{n_{in}}\log\kappa_{k,i}. \tag{7}\]
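A condensed sketch of one such numerically stable IAF step follows; the autoregressive network is reduced here to a single strictly lower-triangular masked linear map (Kingma et al. (2016) use a deeper MADE-style network), and the class and parameter names are ours:

```python
import torch
import torch.nn as nn

# One IAF step (Eqs. (6)-(7)): output i depends only on inputs 1..i-1 via the
# strictly lower-triangular mask, so the Jacobian log-determinant is sum(log kappa).
class IAFStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W_m = nn.Parameter(0.01 * torch.randn(dim, dim))
        self.W_s = nn.Parameter(0.01 * torch.randn(dim, dim))
        self.register_buffer("mask", torch.tril(torch.ones(dim, dim), diagonal=-1))

    def forward(self, z):
        m = z @ (self.W_m * self.mask).t()
        s = z @ (self.W_s * self.mask).t()
        kappa = torch.sigmoid(s)
        z_new = kappa * z + (1 - kappa) * m
        log_det = torch.log(kappa).sum(dim=-1)     # Eq. (7)
        return z_new, log_det

z, log_det = IAFStep(5)(torch.randn(1, 5))
```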
### Computing the variational bounds
In practice, we minimize the negative ELBO in (2). In order to compute this upper bound, we need to marginalize out \(\mathbf{z}\) from the joint posterior distribution (still within one layer,
Figure 2: On the left, the mean-field variational posterior where the weights are assumed independent. On the right, the latent variational distribution \(z\) allows for modeling dependencies between the weights.
dropping the layer notation and also, from here on, dropping the subscripts of the \(q(\cdot)\)'s denoting variational parameters, for simplified notation):
\[q(\mathbf{W,\Gamma})=\int q(\mathbf{W,\Gamma},\mathbf{z})\,d\mathbf{z}.\]
This expression is generally not tractable, therefore we must turn to an approximation to learn its parameters. Similarly to Louizos and Welling (2017), we use that
\[\log q(\mathbf{W,\Gamma})=\log q(\mathbf{W,\Gamma}|\mathbf{z})+\log q(\mathbf{z})-\log q(\mathbf{z} |\mathbf{W,\Gamma}).\]
We thus get the following expression for the KL-divergence,
\[\begin{split}&\text{KL}\left[q(\mathbf{W,\Gamma})||p(\mathbf{W,\Gamma}) \right]=\\ &\mathbb{E}_{q(\mathbf{W,\Gamma,z})}\bigg{[}\text{KL}\left[q(\mathbf{W, \Gamma}|\mathbf{z})||p(\mathbf{W,\Gamma})\right]+\log q(\mathbf{z})-\log q(\mathbf{z}|\mathbf{W, \Gamma})\bigg{]}.\end{split} \tag{8}\]
After doing some algebra, we get the following for the first term:
\[\begin{split}&\text{KL}\left[q(\mathbf{W,\Gamma}|\mathbf{z})||p(\mathbf{W,\Gamma})\right]\\ =&\sum_{ij}\left[\tilde{\alpha}_{ij}\bigg{(}\log\tfrac{\sigma_{ij}}{\tilde{\sigma}_{ij}}+\log\tfrac{\tilde{\alpha}_{ij}}{\alpha_{ij}}-\frac{1}{2}+\tfrac{\tilde{\sigma}_{ij}^{2}+(\tilde{\mu}_{ij}z_{i}-\mu_{ij})^{2}}{2\sigma_{ij}^{2}}\bigg{)}+(1-\tilde{\alpha}_{ij})\log\tfrac{1-\tilde{\alpha}_{ij}}{1-\alpha_{ij}}\right].\end{split}\]
The second term is simply
\[\log q_{K}(\mathbf{z})=\log q_{0}(\mathbf{z}_{0})-\sum_{i=1}^{n_{in}}\log\kappa_{k,i}.\]
The third term, \(q(\mathbf{z}|\mathbf{W,\Gamma})\), is in general, intractable and also difficult to compute numerically. To address this, we introduce an additional auxiliary distribution \(r_{\theta}(\mathbf{z}|\mathbf{W,\Gamma})\), parameterized by \(\theta\) and get the upper bound of (8) following Ranganath et al. (2016).
\[\begin{split}&\text{KL}\left[q(\mathbf{W,\Gamma})||p(\mathbf{W,\Gamma}) \right]\leq\\ &\mathbb{E}_{q(\mathbf{W,\Gamma,z})}\bigg{[}\text{KL}\left[q(\mathbf{W, \Gamma}|\mathbf{z})||p(\mathbf{W,\Gamma})\right]+\log q(\mathbf{z})-\log r(\mathbf{z}|\mathbf{W, \Gamma})\bigg{]}.\end{split} \tag{9}\]
This bound is looser than the original upper bound (see Ranganath et al. (2016) for a proof), but the dependence structure in the variational posterior distribution can compensate for this. For the last term in (9), \(\log r(\mathbf{z}|\mathbf{W,\Gamma})\), we follow Louizos and Welling (2017) and use inverse normalizing flows to make this distribution flexible, with
\[r_{B}(\mathbf{z}_{B}|\mathbf{W,\Gamma})=\prod_{i=1}^{n_{in}}\mathcal{N}(\nu_{i},\tau_{ i}^{2}).\]
We define the dependence on \(\mathbf{W}\) and \(\mathbf{\Gamma}\) similar to Louizos and Welling (2017):
\[\mathbf{\nu} =n_{\text{out}}^{-1}(\mathbf{d}_{1}\mathbf{s}^{T})\mathbf{1}, \text{with }\mathbf{s}=\sigma(\mathbf{e}^{T}(\mathbf{W}\odot\mathbf{\Gamma})) \tag{10}\] \[\log\mathbf{\tau}^{2} =n_{\text{out}}^{-1}(\mathbf{d}_{2}\mathbf{s}^{T})\mathbf{1}.\]
Here, \(\mathbf{d}_{1}\), \(\mathbf{d}_{2}\) and \(\mathbf{e}\) are trainable parameters with the same shape as \(\mathbf{z}\). For \(\sigma\), we use hard-tanh, as opposed to tanh (used in Louizos and Welling (2017)) as this works better empirically. For the last term of (9), we thus have:
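For concreteness, a small sketch (our naming) of Eq. (10); since \((\mathbf{d}_{1}\mathbf{s}^{T})\mathbf{1}=\mathbf{d}_{1}\sum_{j}s_{j}\), the averaging over \(n_{\text{out}}\) reduces to multiplying \(\mathbf{d}_{1}\) (and \(\mathbf{d}_{2}\)) by the mean of \(\mathbf{s}\):

```python
import torch
import torch.nn.functional as F

# Mean and log-variance of the auxiliary distribution r_B(z_B | W, Gamma),
# Eq. (10), with hard-tanh as the nonlinearity sigma.
def aux_params(W, Gamma, d1, d2, e):
    s = F.hardtanh(e @ (W * Gamma))   # s = sigma(e^T (W ⊙ Gamma)), shape (n_out,)
    nu = d1 * s.mean()                # (d1 s^T) 1 / n_out
    log_tau2 = d2 * s.mean()          # (d2 s^T) 1 / n_out
    return nu, log_tau2

n_in, n_out = 4, 3
W, Gamma = torch.randn(n_in, n_out), torch.bernoulli(torch.full((n_in, n_out), 0.5))
d1, d2, e = torch.randn(n_in), torch.randn(n_in), torch.randn(n_in)
nu, log_tau2 = aux_params(W, Gamma, d1, d2, e)
```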
\[\log r\left(\mathbf{z}|\mathbf{W},\mathbf{\Gamma}\right)=\log r_{B}\left(\mathbf{z}_{B}| \mathbf{W},\mathbf{\Gamma}\right)+\log\left|\det\frac{\partial\mathbf{z}_{B}}{ \partial\mathbf{z}}\right|.\]
This means that we must use two normalizing flows: one to get from \(\mathbf{z}_{0}\) to \(\mathbf{z}=\mathbf{z}_{K}\), and another from \(\mathbf{z}_{B}\) to \(\mathbf{z}\). Here, we have shown the inverse normalizing flow with only one layer, but it can in general be extended to an arbitrary number of layers, just as in (5).
For the biases, we assume they are independent of the weights, and each other. We use the standard normal prior with the mean-field Gaussian approximate posterior. As we do not use normalizing flows on the biases, we only need to compute the KL-divergence between two Gaussian distributions:
\[\text{KL}\left[q(\mathbf{b})||p(\mathbf{b})\right]=\sum_{ij}\left[\log\frac{\sigma_{b _{ij}}}{\tilde{\sigma}_{b_{ij}}}-\frac{1}{2}+\frac{\tilde{\sigma}_{b_{ij}}^{2} +(\tilde{\mu}_{b_{ij}}-\mu_{b_{ij}})^{2}}{2\sigma_{b_{ij}}^{2}}\right].\]
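As an illustration, the sketch below (our own naming, assuming the zero-mean Gaussian prior of Section 2) evaluates these two KL terms for a single layer; variational parameters carry the suffix `_t`:

```python
import torch

# First KL term: the spike-and-slab variational posterior against the prior,
# decomposed into a Bernoulli part and a Gaussian slab part (prior mean 0).
def kl_weights(alpha_t, mu_t, sigma_t, z, alpha=0.1, sigma=1.0):
    mu_z = z.view(-1, 1) * mu_t                                # z_i * mu_ij
    kl_slab = (torch.log(sigma / sigma_t) - 0.5
               + (sigma_t**2 + mu_z**2) / (2 * sigma**2))
    kl_bern = (alpha_t * torch.log(alpha_t / alpha)
               + (1 - alpha_t) * torch.log((1 - alpha_t) / (1 - alpha)))
    return (alpha_t * kl_slab + kl_bern).sum()

# Second KL term: mean-field Gaussian posterior for the biases vs. N(0, 1).
def kl_bias(mu_bt, sigma_bt):
    return (torch.log(1.0 / sigma_bt) - 0.5
            + (sigma_bt**2 + mu_bt**2) / 2.0).sum()
```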
In practice, the ELBO is optimized through a (stochastic) gradient algorithm where the reparametrization trick (Kingma and Welling, 2013), combined with mini-batching, is applied.
## 4 Extending LBNNNs with the LRT and MNF
One downside to the LBBNN method proposed by Hubin and Storvik (2019, 2023) is that each forward pass during training requires sampling the large \(\mathbf{\Gamma}\) and \(\mathbf{W}\) matrices, consisting of all the \(\gamma_{ij}\)'s and \(w_{ij}\)'s, to compute the activations for each layer of the network in the stochastic variational inference procedure described in detail in Hubin and Storvik (2019, 2023). Additionally, due to the binary nature of the \(\gamma_{ij}\)'s, a continuous relaxation was required in Hubin and Storvik (2019, 2023). Here, we will show how to circumvent both of these issues by sampling the pre-activations \(h_{j}\) given in (1) directly, typically referred to as the local reparametrization trick (LRT) (Kingma et al., 2015). The difference in our case is that we must also take into account the binary inclusion variables. When we refer to \(h_{j}\), we mean the activation before the non-linear activation function is applied. We still use exactly the same stochastic variational inference optimization algorithm as in Hubin and Storvik (2019, 2023), except that no relaxations are required. We can compute the mean and the variance of \(h_{j}\) as:
\[\mathbb{E}(h_{j}) =\mathbb{E}\left[b_{j}+\sum_{i=1}^{N}[o_{i}\gamma_{ij}w_{ij}] \right]=\tilde{\mu}_{b_{j}}+\sum_{i=1}^{N}[o_{i}\tilde{\alpha}_{ij}\tilde{\mu }_{ij}]\] \[\text{Var}(h_{j}) =\text{Var}\left[b_{j}+\sum_{i=1}^{N}[o_{i}\gamma_{ij}w_{ij}] \right]=\tilde{\sigma}_{b_{j}}^{2}+\sum_{i=1}^{N}[o_{i}^{2}\tilde{\alpha}_{ij} (\tilde{\sigma}_{ij}^{2}+(1-\tilde{\alpha}_{ij})\tilde{\mu}_{ij}^{2})].\]
Here, \(o\) denotes the output from the previous layer, consisting of \(N\) neurons. The general idea behind the LRT is that if we have a sum of independent Gaussian random variables, the sum will also be (exactly) Gaussian. In our case, we have a mixture of independent Gaussians, but the central limit theorem still holds for a sum of independent random variables, as long as Lindeberg's condition is satisfied. We also verify empirically that a sample of activations generated using the LRT follows approximately the same distribution as a sample of activations generated by sampling \(\mathbf{\Gamma}\) and \(\mathbf{W}\). We can thus sample the activations as (independent) Gaussian variables with the means and variances given by the formulas above. Also, if we use the LRT, we get a reduction in the variance of the gradient estimates, as shown in Kingma et al. (2015). Note also that the approximations induced by the sampling procedure for \(\mathbf{h}\) can be considered as an alternative variational approximation directly for \(p(\mathbf{h}|\mathcal{D})\).
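A minimal sketch of the resulting LRT forward pass for one layer (our naming; shapes are illustrative):

```python
import torch

# Sample the pre-activations h directly from a Gaussian whose mean and
# variance follow the formulas above. o: (batch, N) previous-layer outputs;
# weight parameters have shape (N, n_out); bias parameters have shape (n_out,).
def lrt_forward(o, alpha_t, mu_t, sigma_t, mu_bt, sigma_bt):
    mean = o @ (alpha_t * mu_t) + mu_bt
    var = (o**2) @ (alpha_t * (sigma_t**2 + (1 - alpha_t) * mu_t**2)) + sigma_bt**2
    return mean + var.sqrt() * torch.randn_like(mean)

o = torch.randn(32, 10)                          # mini-batch of 32 inputs
alpha_t = torch.rand(10, 5)                      # posterior inclusion probabilities
mu_t, sigma_t = torch.randn(10, 5), 0.1 + torch.rand(10, 5)
mu_bt, sigma_bt = torch.zeros(5), 0.1 * torch.ones(5)
h = lrt_forward(o, alpha_t, mu_t, sigma_t, mu_bt, sigma_bt)
```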
For our second extension, we apply normalizing flows in the activation space to increase the flexibility of the variational posterior. When using normalizing flows, the mean and the variance of the activation \(h_{j}\) are:
\[\mathbb{E}(h_{j}) =\sum_{i=1}^{N}o_{i}z_{i}\tilde{\alpha}_{ij}\tilde{\mu}_{ij}+ \tilde{\mu}_{b_{j}}\] \[\text{Var}(h_{j}) =\sum_{i=1}^{N}o_{i}^{2}\tilde{\alpha}_{ij}(\tilde{\sigma}_{ij}^ {2}+(1-\tilde{\alpha}_{ij})z_{i}^{2}\tilde{\mu}_{ij}^{2})+\tilde{\sigma}_{b_{j }}^{2},\]
It should be noted that \(\mathbf{z}\) affects both the mean and the variance of our Gaussian approximation, whereas in Louizos and Welling (2017) it only influences the mean. Louizos and Welling (2017) also sample one \(\mathbf{z}\) per observation within the mini-batch. We found that empirically it made no difference in performance to only sample one vector and multiply the same \(\mathbf{z}\) with each input vector. We do this, as it is more computationally efficient.
## 5 Experiments
### Background
In this section, we are mainly interested in comparing the baseline method of Hubin and Storvik (2023) (denoted LBBNN-GP-MF in their paper, and LBBNN here) with the two extensions detailed in this paper. The first one, where we use the local reparametrization trick (LRT), we denote LBBNN-LRT, whereas the second one, where we use multiplicative normalizing flows (and the LRT) on the variational posterior distribution, we denote LBBNN-FLOW. In Hubin and Storvik (2019, 2023), comprehensive classification experiments show that LBBNNs can sparsify Bayesian neural networks to a large degree while maintaining high predictive power. Additionally, Hubin and Storvik (2023) consider an experiment where dependencies in the variational posterior distribution are built in by using a multivariate normal distribution on the logit scale for the posterior inclusion parameters. The multivariate normal option does not improve performance compared to the mean-field approach in the examples considered in Hubin and Storvik (2023), hence we drop this option from our set of compared approaches. Here, we demonstrate that increasing the flexibility of the variational posterior with normalizing flows improves predictive power compared to
the approaches considered in Hubin and Storvik (2019, 2023) for a set of baseline classification problems. Additionally, we perform two simulation studies. In the first one, we consider variable selection in a logistic regression setting, with highly correlated explanatory variables. In the second, we generate data from clusters of two-dimensional Gaussian distributions and compare how the different methods handle predictive uncertainty. All the experiments were coded in Python, using the PyTorch deep learning library (Paszke et al., 2019).
### Classification experiments
We perform two classification experiments, one with a fully connected architecture (as in Hubin and Storvik, 2019, 2023), and the other with a convolutional architecture (see appendix A for details on how this is implemented). In both cases, we classify on MNIST (Deng, 2012), FMNIST (Fashion MNIST) (Xiao et al., 2017) and KMNIST (Kuzushiji MNIST) (Clanuwat et al., 2018). MNIST is a database of handwritten digits ranging from 0 to 9. FMNIST consists of ten different fashion items from the Zalando (Europe's largest online fashion retailer) database. Lastly, KMNIST also consists of ten classes, with each one representing one row of Hiragana, a Japanese syllabary. All of these datasets contain 28x28 grayscale images, divided into a training and validation set with 60 000 and 10 000 images respectively. MNIST and FMNIST are well-known and often utilized datasets, so it is easy to compare performance when testing novel algorithms. KMNIST is a somewhat recent addition and is considered a more challenging task than the classical MNIST digits dataset because each Hiragana can have many different symbols.
For the experiments with the fully connected architecture, we use the same set-up as in Hubin and Storvik (2019, 2023). We have two hidden layers with 400 and 600 neurons respectively, ReLU (Agarap, 2018) activation functions, and the Adam (Kingma and Ba, 2014) optimizer. We use a batch size of 100 and train for 250 epochs. All the experiments are run 10 times, and we report the minimum, median, and maximum predictive accuracy over these 10 runs. The reported density (1-sparsity) is an average over these 10 runs. For the LBBNN-LRT and LBBNN-FLOW methods, we use the standard normal prior for all the weights and biases in the network, and a prior inclusion probability of 0.10. For both \(q(\mathbf{z})\) and \(r(\mathbf{z}|\mathbf{W},\mathbf{\Gamma})\), we use flows of length two, where the neural networks consist of two hidden layers with 250 neurons each. For our second classification experiment, we use the LeNet-5 (LeCun et al., 1998) convolutional architecture, but with 32 and 48 filters for the convolutional layers. We use the same priors and normalizing flows as in the previous experiment, and the same datasets.
To measure predictive performance, we consider two approaches. First, the fully Bayesian model averaging approach, where we average over 100 samples from the variational posterior distribution, taking into account uncertainty in both weights and structures. Secondly, we consider the median probability model (Barbieri and Berger, 2004), where, following Hubin and Storvik (2019, 2023), we only do model averaging over the weights that have a posterior inclusion probability greater than 0.5, whilst the others are excluded from the model. This allows for significant sparsification of the network. We emphasize that this is possible because we can go back to sampling the weights when doing inference. We also report the density, i.e. the proportion of weights included in the median probability model.
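A minimal sketch (ours, not the repository code) of this masking step, assuming `alpha` holds a layer's fitted posterior inclusion probabilities as a tensor:

```python
import torch

def median_probability_mask(alpha):
    """Median probability model: keep a weight only if its posterior
    inclusion probability exceeds 0.5 (Barbieri and Berger, 2004)."""
    mask = (alpha > 0.5).float()
    density = mask.mean().item()  # proportion of weights retained
    return mask, density
```

At prediction time, the sampled weights of the layer are multiplied elementwise by this mask, so only the retained weights contribute.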
The results for the fully connected architecture can be found in Table 1, and for the convolutional architecture in Table 2. Firstly, we see that using the LBBNN-LRT gives results comparable to the baseline LBBNN method, except on FMNIST, where it performs a bit worse with both the fully connected and the convolutional architecture. It is no surprise that these results are similar, as using the LRT is mainly a computational advantage. Secondly, we note that the LBBNN-FLOW method performs better than the other two methods, on both convolutional and fully connected architectures, while having the sparsest networks. Our method's accuracy on MNIST with the convolutional architecture is comparable to the accuracy reported in Louizos and Welling (2017) (99.30%). This suggests that it is possible to sparsify convolutional BNNs without losing much predictive power. The higher density in general on the convolutional architectures is mainly a result of slightly different parameter initializations; however, these networks could also be sparsified to a similar degree as the fully connected ones. The increased predictive power of using normalizing flows comes at a computational cost. With the fully connected architecture, training one epoch took around 4 seconds with LBBNN-LRT, 13 seconds with LBBNN, and 17 seconds with LBBNN-FLOW on an NVIDIA A10 GPU. On the convolutional architecture, it took 7 seconds per epoch with LBBNN-LRT, 18 seconds with LBBNN, and 28 seconds with normalizing flows.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**KMNIST** & \multicolumn{4}{c}{Median probability model} & \multicolumn{4}{c}{Full model averaging} \\ \hline Method & min & median & max & density & min & median & max & density \\ LBBNN & 89.22 & 89.59 & 89.98 & 0.113 & 89.43 & 89.76 & 90.21 & 1.000 \\ LBBNN-LRT & 90.04 & 90.26 & 90.43 & 0.136 & 90.23 & 90.39 & 90.60 & 1.000 \\ LBBNN-FLOW & 90.64 & **91.12** & 91.46 & 0.096 & 91.16 & **91.30** & 91.61 & 1.000 \\ \hline \hline
**MNIST** & \multicolumn{4}{c}{Median probability model} & \multicolumn{4}{c}{Full model averaging} \\ \hline Method & min & median & max & density & min & median & max & density \\ LBBNN & 98.01 & 98.10 & 98.20 & 0.098 & 98.03 & 98.14 & 98.23 & 1.000 \\ LBBNN-LRT & 97.84 & 97.95 & 98.09 & 0.103 & 98.01 & 98.08 & 98.11 & 1.000 \\ LBBNN-FLOW & 98.14 & **98.36** & 99.42 & 0.074 & 98.23 & **98.42** & 98.53 & 1.000 \\ \hline \hline
**FMNIST** & \multicolumn{4}{c}{Median probability model} & \multicolumn{4}{c}{Full model averaging} \\ \hline Method & min & median & max & density & min & median & max & density \\ LBBNN & 88.47 & 88.76 & 88.90 & 0.106 & 88.60 & 88.74 & 88.91 & 1.000 \\ LBBNN-LRT & 87.51 & 87.82 & 87.94 & 0.141 & 87.88 & 87.94 & 88.14 & 1.000 \\ LBBNN-FLOW & 89.49 & **89.70** & 89.88 & 0.097 & 89.52 & **89.80** & 89.92 & 1.000 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance metrics on the KMNIST, MNIST, FMNIST validation data, for the fully connected architecture. For the accuracies (%), we report the minimum, maximum, and median over the ten different runs. Density is computed as an average over the ten runs.
### Logistic regression simulation study
In this section, we do a variable selection experiment within a logistic regression setting. As logistic regression is just a special case of a neural network with one neuron (and hence one layer), modifying the algorithms is straightforward. We are limiting ourselves to the logistic regression context to be able to compare to the original baseline method from Carbonetto and Stephens (2012), who have shown that the mean-field variational approximation starts to fail the variable selection task when the covariates are correlated. We use the same data as in Hubin and Storvik (2018), consisting of a mix of 20 binary and continuous variables, with a binary outcome, and we have \(2\,000\) observations. The covariates, \(\mathbf{x}\), are generated with a strong and complicated correlation structure between many of the variables (see Figure 3). For more details on exactly how the covariates are generated, see appendix B of Hubin and Storvik (2018). The response variable, \(y\), is generated according to the following data-generating process:
\[\eta \sim\mathcal{N}(\mathbf{\beta}\mathbf{x},0.5)\] \[y \sim\text{Bernoulli}\left(\frac{\exp(\eta)}{1+\exp(\eta)}\right)\]
with the regression parameters defined to be:
\[\mathbf{\beta}=(-4,0,1,0,0,0,1,0,0,0,1.2,0,37.1,0,0,50,-0.00005,10,3,0).\]
The goal is to train the different methods to select the non-zero elements of \(\mathbf{\beta}\). We consider the parameter \(\beta_{j}\) to be included if the posterior inclusion probability \(\alpha_{j}>0.5\), i.e.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**KMNIST** & \multicolumn{4}{c}{Median probability model} & \multicolumn{4}{c}{Full model averaging} \\ \hline Method & min & median & max & density & min & median & max & density \\ LBBNN & 95.13 & 95.52 & 95.89 & 0.359 & 95.21 & 95.48 & 95.78 & 1.000 \\ LBBNN-LRT & 94.73 & 94.94 & 95.16 & 0.429 & 95.07 & 95.42 & 95.65 & 1.000 \\ LBBNN-FLOW & 95.73 & **95.99** & 96.43 & 0.351 & 96.00 & **96.18** & 96.44 & 1.000 \\ \hline \hline
**MNIST** & \multicolumn{4}{c}{Median probability model} & \multicolumn{4}{c}{Full model averaging} \\ \hline Method & min & median & max & density & min & median & max & density \\ LBBNN & 99.22 & 99.26 & 99.35 & 0.353 & 99.21 & 99.28 & 99.33 & 1.000 \\ LBBNN-LRT & 99.11 & 99.26 & 99.31 & 0.406 & 99.20 & 99.28 & 99.34 & 1.000 \\ LBBNN-FLOW & 99.15 & **99.27** & 99.41 & 0.338 & 99.16 & **99.29** & 99.42 & 1.000 \\ \hline \hline
**FMNIST** & \multicolumn{4}{c}{Median probability model} & \multicolumn{4}{c}{Full model averaging} \\ \hline Method & min & median & max & density & min & median & max & density \\ LBBNN & 91.14 & 91.31 & 91.48 & 0.352 & 91.10 & 91.26 & 91.44 & 1.000 \\ LBBNN-LRT & 90.04 & 90.40 & 90.85 & 0.433 & 90.52 & 90.73 & 91.06 & 1.000 \\ LBBNN-FLOW & 90.52 & **91.54** & 91.75 & 0.367 & 91.38 & **91.71** & 92.04 & 1.000 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance metrics on the KMNIST, MNIST, FMNIST validation data, with the convolutional architecture. See the caption in Table 1 for more details.
Figure 3: Plots showing the correlation between different variables in the logistic regression simulation study.
the median probability model of Barbieri and Berger (2004). We fit the different methods 100 times (to the same data), each time computing the true positive rate (TPR) and the false positive rate (FPR):
\[\text{TPR}=\frac{\text{TP}}{\text{TP}+\text{FN}},\qquad\text{FPR}=\frac{\text{FP}}{\text{FP}+\text{TN}}.\]
Here, the counts are taken over the \(n=20\) variables within each run: TP = true positive, meaning that a non-zero weight was correctly included; FN = false negative, meaning a non-zero weight was not included; FP = false positive, meaning a weight that was zero was included; TN = true negative, meaning that a zero weight was not included. Thus, TPR measures the proportion of variables with non-zero weights correctly included, whereas FPR measures the proportion of variables with zero weights that were wrongly included; Table 3 reports the means of these rates over the 100 runs.
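For concreteness, the per-run computation of these two rates could be sketched as follows, where `alpha` holds the 20 fitted posterior inclusion probabilities of one run and `beta_true` the true regression coefficients (the names are illustrative):

```python
import numpy as np

def tpr_fpr(alpha, beta_true, threshold=0.5):
    """Per-run TPR/FPR: a variable is selected when its posterior
    inclusion probability exceeds the threshold."""
    selected = alpha > threshold
    nonzero = beta_true != 0
    tpr = selected[nonzero].mean()    # share of non-zero weights included
    fpr = selected[~nonzero].mean()   # share of zero weights wrongly included
    return tpr, fpr
```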
In this experiment, we compare our approaches LBBNN-LRT and LBBNN-FLOW against the algorithm proposed by Carbonetto and Stephens (2012), denoted CS henceforth. That method is very similar to LBBNN-LRT, as it uses the same variational distribution, but optimization is done with importance sampling and coordinate ascent variational inference (without subsampling from the data). For the normalizing flows, we use flows of length two with the neural networks having two hidden layers of 100 neurons each. We use a batch size of 400 and train for 500 epochs. We use standard normal priors for the weights and a prior inclusion probability of 0.25 on the inclusion indicators for all three approaches. Hence, we are in the setting of Bayesian logistic regression with variable selection.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & CS & LBBNN-LRT & LBBNN-FLOW \\ \hline mean TPR & 0.681 & 0.838 & **0.972** \\ mean FPR & 0.125 & 0.084 & **0.074** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance metrics on the logistic regression variable selection simulation study.
Figure 4: Bar-plots showing how often the weights are included over 100 runs.
The results are in Table 3. We also show a bar-plot (Figure 4) of how often each of the 20 weights is included over the 100 runs. We see that LBBNN-FLOW performs best, with the highest TPR and the lowest FPR. It is especially good at picking out the correct variables where there is a high correlation between many of them (for example \(\beta_{1}-\beta_{6}\)). We might attribute this to the more flexible variational posterior distribution, as opposed to the mean-field Gaussian distribution used in the other two methods. Carbonetto and Stephens (2012) also discuss how the mean-field approach can only be expected to be a good approximation when the variables are independent or at most weakly correlated.
### Predictive uncertainty
A key motivation behind using BNNs is their ability to handle predictive uncertainty more accurately than non-Bayesian neural networks. In this experiment, we therefore illustrate how our approaches LBBNN-LRT and LBBNN-FLOW, as well as Monte Carlo dropout (Gal and Ghahramani, 2016) and a regular (dense) BNN, behave in terms of predictive uncertainty. The purpose of this study is thus illustrative rather than comparative, and the methods are not competing here. For this experiment, we simulate 5 clusters of data from two-dimensional Gaussian distributions, using the means and covariances reported in Appendix B. The data is then transformed to be in the range between 0 and 1 for ease of visualization. The task is to classify each point to the correct class, corresponding to a specific cluster.
We generate three datasets, with 10, 50, and 200 samples from each class, respectively. For all the methods, we fit a network with one hidden layer consisting of 1000 neurons, meaning we are in a setting where the number of trainable parameters is much larger than the number of observations, a typical scenario for applications of BNNs. For dropout, we use 0.5 for the dropout probability, and we use 0.5 for the prior inclusion probabilities for LBBNN-LRT and LBBNN-FLOW. We use flows of length two, with the neural networks consisting of two hidden layers of 50 neurons each. For all the methods, we use 10 samples for model averaging. To measure predictive uncertainty, we generate a test set on a grid over \([0,1]^{2}\) and compute the entropy of the predictive distribution at each point in the grid. Maximum entropy is attained when the predictive
Figure 5: The entropy of the data-generating model for the Gaussian clusters in the predictive uncertainty experiment.
distribution is uniform, i.e. 0.2 for each class. The results are shown in Figure 6, Figure 7, and Figure 8. In addition, we show the entropy of the data-generating model in Figure 5, to which the predictive entropies are expected to converge as the sample size increases.
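A sketch of this entropy computation, assuming `model` is one of the trained networks whose forward pass resamples the variational weights; the names and the grid resolution are our assumptions:

```python
import torch

@torch.no_grad()
def predictive_entropy(model, grid, n_samples=10):
    """Monte Carlo model averaging over the variational posterior,
    followed by the entropy of the averaged predictive distribution."""
    probs = torch.stack([torch.softmax(model(grid), dim=-1)
                         for _ in range(n_samples)]).mean(0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(-1)

# Example: entropy over a 100 x 100 grid on [0, 1]^2
xs = torch.linspace(0.0, 1.0, 100)
grid = torch.cartesian_prod(xs, xs)
# entropy = predictive_entropy(trained_bnn, grid)
```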
With little data, we see a stark difference between dropout and the Bayesian networks. Dropout predictions are highly confident everywhere except at the decision boundaries between the classes. In contrast, the Bayesian networks exhibit high uncertainty in most areas, especially where little data is observed. When we increase the amount of data, the Bayesian networks gradually become more certain about their predictions, and the entropies (as desired) start to converge towards the data-generative ones, while for dropout (with a fixed rate) the uncertainties do not decrease. It should be noted that no under-fitting is happening, as we have close to 100% accuracy during training for all the methods. As a final observation, we see that the dense BNN typically has slightly less uncertainty than the LBBNNs with LRT and FLOW, although we cannot say much about how good or bad this is, since obtaining the true uncertainties, for example via reversible jump MCMC, is difficult in the LBBNN setting.
Additionally, we perform an experiment where we generate 10 000 test samples (2 000 from each cluster), after training with 50 samples (10 from each cluster). After training, we compute the entropy of the predictive distribution on the test data and sort the data from lowest to highest entropy. We also sort the samples based on the maximum class probability and compute the cumulative accuracy (with 100 data samples at a time). By that we mean that we start with the accuracy for the 100 most confident predictions, followed by 100 less
Figure 6: Entropy with 10 samples from each cluster
Figure 8: Entropy with 200 samples from each cluster
Figure 7: Entropy with 50 samples from each cluster
Figure 9: Top left, cumulative accuracy (100 samples at a time), where each point is the accuracy for the corresponding data points. Top right, entropy sorted from low to high. Bottom, maximum class probability sorted from high to low.
confident predictions, and so on until we reach the 100 least confident predictions. The results are in Figure 9. With dropout, the maximum class probability is typically very high (i.e. we are extremely certain about which class the sample belongs to). After the first 5 000 (sorted) samples, the output probability for the most likely class is still around 95%. With LRT and FLOW, on the other hand, it has dropped to roughly 50%. This mirrors what we saw earlier: dropout has high certainty most of the time. Despite this, we see that in this experiment the Bayesian methods have higher predictive accuracy than dropout for the cases with the most uncertainty.
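A minimal sketch of this sorting procedure, where `max_probs` holds the maximum class probability per test point and `correct` is a boolean array marking correct predictions (both names are ours):

```python
import numpy as np

def cumulative_accuracy(max_probs, correct, chunk=100):
    """Sort predictions from most to least confident and report the
    accuracy of each successive chunk of test points."""
    order = np.argsort(-max_probs)  # most confident first
    sorted_correct = correct[order]
    return [sorted_correct[i:i + chunk].mean()
            for i in range(0, len(order), chunk)]
```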
As a final illustration, we consider an experiment where we take the maximum model-averaged pre-activation output of the last layer (i.e. just before applying the softmax function) as a measure instead of using entropy. We use the training data (\(m=1000\)) to generate an empirical one-sided 95% confidence interval (an upper bound) for the model-averaged pre-activation outputs for all the classes. During testing, we generate a sample over a grid, now between -1 and 2 in both dimensions, take the highest model-averaged pre-activation output, and check whether it falls within the empirical confidence interval or not. The results are shown in Figure 10. We see that even in regions with extremely low entropy, we can detect out-of-distribution data. This shows that expecting maximal entropy for out-of-distribution data, as suggested in Louizos and Welling (2017), might not be optimal. However, we still see the potential of BNNs to differentiate between in- and out-of-domain uncertainty using the pre-activation values of the output of BNNs. We do not go any further here and leave this topic for future research.
## 6 Discussion
We have demonstrated that increasing the flexibility of the variational posterior distribution with normalizing flows improves the predictive power compared to the baseline method (with a mean-field posterior) while obtaining sparser networks, despite having a looser variational bound than the mean-field approach. The flow method also performed best on a
Figure 10: Out of distribution entropy, where dark blue corresponds to the OOD data detected by the BNN, and white is the in-distribution data.
variable selection problem, where the mean-field approaches struggle with highly correlated variables. More generally, we argue that Bayesian neural networks (BNNs) are much better at obtaining realistic predictive uncertainty estimates than their frequentist counterparts, as they have higher uncertainty when data is sparse. We do not observe a big difference between the uncertainty estimates obtained with a dense BNN and with our approaches. Unlike dense BNNs, however, our methods have the additional advantage of being able to perform variable selection. The downside is that LBBNNs have an extra parameter per weight, making them less computationally efficient than dense BNNs. Using normalizing flows is a further computational burden, as we must also optimize over all the extra flow parameters.
In this paper, we use the same prior for all the weights and inclusion indicators, although this is not necessary. A possible avenue of further research could be to vary the prior inclusion probabilities to induce different sparsity structures. Currently, we take into account uncertainty in weights and structures, given some neural network architecture. In the future, it may be of interest to see whether it is also possible to incorporate uncertainty in the activation functions. By having connections between the layers, we could learn to skip all non-linear layers if a linear function is enough. A possible application is to do a genome-wide association study (GWAS) using our method. Combining LBBNNs and GWAS has been proposed by Demetci et al. (2021); however, that work only uses the mean-field posterior. With our normalizing flow approach, we can easily model dependencies within each SNP set, in addition to dependencies between the different SNP sets.
## Supplementary material
**GitHub:** The code used for the experiments can be found at [https://github.com/LarsELund/Sparsifying-BNNs-with-LRT-and-NF](https://github.com/LarsELund/Sparsifying-BNNs-with-LRT-and-NF)
|
2303.12116 | Physics Informed Neural Networks for Phase Locked Loop Transient
Stability Assessment | A significant increase in renewable energy production is necessary to achieve
the UN's net-zero emission targets for 2050. Using power-electronic
controllers, such as Phase Locked Loops (PLLs), to keep grid-tied renewable
resources in synchronism with the grid can cause fast transient behavior during
grid faults leading to instability. However, assessing all the probable
scenarios is impractical, so determining the stability boundary or region of
attraction (ROA) is necessary. However, using EMT simulations or Reduced-order
models (ROMs) to accurately determine the ROA is computationally expensive.
Alternatively, Machine Learning (ML) models have been proposed as an efficient
method to predict stability. However, traditional ML algorithms require large
amounts of labeled data for training, which is computationally expensive. This
paper proposes a Physics-Informed Neural Network (PINN) architecture that
accurately predicts the nonlinear transient dynamics of a PLL controller under
fault with less labeled training data. The proposed PINN algorithm can be
incorporated into conventional simulations, accelerating EMT simulations or
ROMs by over 100 times. The PINN algorithm's performance is compared against a
ROM and an EMT simulation in PSCAD for the CIGRE benchmark model C4.49,
demonstrating its ability to accurately approximate trajectories and ROAs of a
PLL controller under varying grid impedance. | Rahul Nellikkath, Andreas Venzke, Mohammad Kazem Bakhshizadeh, Ilgiz Murzakhanov, Spyros Chatzivasileiadis | 2023-03-21T18:09:20Z | http://arxiv.org/abs/2303.12116v1 | # Physics-Informed Neural Networks for Phase Locked Loop
###### Abstract
A significant increase in renewable energy production is necessary to achieve the UN's net-zero emission targets for 2050. Using power-electronic controllers, such as Phase Locked Loops (PLLs), to keep grid-tied renewable resources in synchronism with the grid can cause fast transient behavior during grid faults, leading to instability. However, assessing all the probable scenarios is impractical, so determining the stability boundary or region of attraction (ROA) is necessary. Yet, using EMT simulations or Reduced-order models (ROMs) to accurately determine the ROA is computationally expensive. Alternatively, Machine Learning (ML) models have been proposed as an efficient method to predict stability. However, traditional ML algorithms require large amounts of labeled data for training, which is computationally expensive. This paper proposes a Physics-Informed Neural Network (PINN) architecture that accurately predicts the nonlinear transient dynamics of a PLL controller under fault with less labeled training data. The proposed PINN algorithm can be incorporated into conventional simulations, accelerating EMT simulations or ROMs by over 100 times. The PINN algorithm's performance is compared against a ROM and an EMT simulation in PSCAD for the CIGRE benchmark model C4.49, demonstrating its ability to accurately approximate trajectories and ROAs of a PLL controller under varying grid impedance.
## I Introduction
To achieve the net-zero emission targets set by the UN for 2050, renewable energy production in the electricity grid has to be ramped up drastically in the coming decades. Unlike traditional synchronous power generators, renewable resources such as Wind Power Plants (WPPs) require power-electronic controllers to remain in synchronism with the grid. In that respect, Phase Locked Loops (PLLs) are one of the most widely used controllers in renewable generators for tracking the grid reference frame [1]. However, recent studies have shown that in a weak grid, during large disturbances such as grid faults, the PLL reference frame may fail to synchronize with the grid reference frame, leading to grid instability [2]. Additionally, even during small perturbations, the interaction between the controller and a weak grid can result in small-signal instability, which is equally detrimental to a secure power supply.
These types of complex and fast transient behaviors displayed by grid-tied power-electronic controllers are challenging to examine using a traditional RMS analysis. Instead, power system operators must use EMT simulations to evaluate each scenario. However, EMT simulations are computationally very expensive, which makes it impractical to assess each scenario on a case-by-case basis. Instead, power system operators usually rely on a pre-determined stability boundary of the controller, also called the Region of Attraction (ROA), which guarantees a domain of safe operation. More specifically, the ROA is the set of initial system states from which all trajectories converge to a stable equilibrium point.
However, to accurately determine the ROA using EMT simulation for a renewable generation plant, power system operators still have to evaluate the stability of numerous scenarios to cover all the parameter combinations. Evaluating all the necessary scenarios to determine the ROA using EMT simulations is computationally intractable. Therefore, it is essential to develop more efficient methods to assess the stability of these controllers.
This has led to the development of reduced-order models (ROMs) and approximations of the actual converter dynamics. The simplest are the linearised model-based approaches, such as eigenvalue analysis [3, 4] or impedance-based stability analysis [5, 6], which approximate the wind turbine's (WT) grid-side converter and the connected power system with a linear function around an operating point. These models perform well at analyzing stability under small disturbances but cannot sufficiently capture instability under larger grid disturbances. Hence, a nonlinear approach must be used to predict global asymptotic stability [7, 8, 9].
When it comes to nonlinear reduced-order modelling approaches, even though the ones proposed are considerably faster than EMT simulations, they still demand a long computational time to evaluate the stability at all the points of interest. An alternative option explored in [10] is to use Lyapunov's direct method to estimate the ROA. However, as shown in [11], Lyapunov's direct method is a conservative approximation of the ROA and could falsely classify multiple stable initial states as unstable.
Hence, developing a more efficient method than EMT simulations and reduced-order models is essential to accurately assess the stability of renewable generation plants. To this end, in [12], Machine Learning (ML) models have been proposed to predict the stability of WT. ML models can
deliver solutions 100-1,000 times faster than conventional models with sufficient accuracy [13]. Thus, they can be used to quickly screen a vast number of scenarios to approximate the Region of Attraction and identify the few critical scenarios that require further analysis with EMT simulations.
Still, traditional ML algorithms, such as Neural Networks (NNs), require large amounts of high-quality labeled data for training. Generating such a training dataset requires substantial computation time, which would cancel out the speedup the ML algorithms could offer. The development of physics-informed NNs (PINNs) addresses exactly this challenge, as it incorporates the underlying physics inside the neural network training and thereby drastically reduces the dependency of NN performance on external training data. As we will see later in this paper, this fundamental property of PINNs eliminates the need to generate large training datasets, avoiding a heavy computational burden, and drastically accelerates PINN training [14].
A PINN for predicting the transient stability of an equivalent 2-area transmission grid with synchronous machines was proposed in [13]. Our previous work has also looked into using PINNs to evaluate a wind farm's N-1 small-signal stability margin [15]. This is the first paper that uses Physics-Informed Neural Networks (PINNs) to approximate the dynamics of Electro-Magnetic Transient (EMT) simulations. We believe that deploying PINNs for simulations as computationally expensive as EMT puts forward the most valuable use of PINNs, where we can achieve a speedup of over 100 times compared to conventional methods (including the time taken for training the PINNs) while maintaining sufficient accuracy. As a use case, in this paper, we focus on the accurate approximation of the nonlinear power system dynamics of a PLL controller under fault. Our main contributions are as follows:
1. We demonstrate that the proposed PINN architecture can accurately estimate the Region of Attraction of a PLL controller with varying grid strength more than 100 times faster than a Reduced-Order Model.
2. We introduce a novel recurrent PINN architecture that can provide accurate predictions regardless of the prediction time window. In other words, PINNs can accurately learn the underlying physics of a system using only a narrow prediction window of e.g. 100-200 ms. We can then use this learnt neural network model in a recurrent fashion to predict over much longer horizons, e.g. 2 s. This is an important contribution since it allows for more flexible and robust predictions in real-world scenarios.
The remainder of this paper is structured as follows: Section II describes the ROM for the global stability analysis of a PLL controller. Section III introduces the PINN algorithm used to approximate the nonlinear system dynamics. Section IV presents the simulation setup and the results demonstrating the performance of the proposed PINN training architecture. Section V concludes.
## II Reduced Order Model for PLL control
To develop a PINN capable of estimating the nonlinear transient system dynamics of a PLL controller during a fault, we use the second-order Reduced Order Model (ROM) for PLL stability proposed in [9] and shown in Fig. 1. The ROM in [9] was developed for a renewable generator, such as a WT with a PLL controller, under fault, and is actively used in industry during wind farm design. Once the fault occurs, the dc chopper is activated, which allows us to assume that the dc voltage the grid-side converter (GSC) receives is constant. In addition, the inner current controller is assumed to be fast enough to regulate the dq-domain currents during the fault, so the dynamics of the inner current loop can be neglected. This assumption is valid since fast system dynamics can be neglected while analyzing the slow PLL dynamics of the GSC. The electrical model of the renewable generator, here a WT, after the dc chopper is activated is given in Fig. 1.
As shown in Fig. 1b, while developing the ROM, the shunt capacitor of the filter, denoted by \(X_{f}\) in Fig. 1a, was neglected, since it has negligible impact on grid synchronisation stability when the current is controlled on the grid-side inductor of the LCL filter [9]. The resulting reduced-order representation of the renewable generator and grid in the dq domain is illustrated in Fig. 2a.
The reduced-order model was proposed for a PLL with a synchronous reference frame (SRF) approach, which identifies the utility grid voltage phase angle by synchronizing the rotating reference frame of the PLL with the grid voltage. The SRF-PLL used to track the grid voltage phase angle is given in Fig. 2b, where \(\theta_{df}\) is the angle tracked by the PLL. The grid frame rotates with the grid frequency \(\omega_{g}\), and the PLL reference frame rotates with the PLL frequency \(\omega_{pll}\). The resulting misalignment between the grid voltage angle, denoted by \(\theta_{g}\), and \(\theta_{df}\) is depicted in Fig. 2c.
Assuming time-invariant system parameters after clearing the fault, the SRF-PLL transient nonlinear dynamics can be
Fig. 1: (a) The electrical model of the renewable generator with PLL controller with grid-side converter and controls. (b) The reduced-order electrical model of renewable generator.
formulated as follows (see [9] and [16] for the derivations):
\[\theta_{df}=\int\left(k_{p}\cdot v_{pcc,q}^{c}+k_{i}\int v_{pcc,q}^{c}\,dt\right)dt \tag{1}\] \[v_{pcc,q}^{c}=-V_{g}\cdot\sin(\theta_{df}-\theta_{g})+r_{Lg}i_{q}^{c}+L_{g}i_{d}^{c}\omega_{g}+L_{g}i_{d}^{c}\,(\dot{\theta}_{df}-\dot{\theta}_{g}) \tag{2}\]
where \(k_{p}\) and \(k_{i}\) are the control parameters of the SRF-PLL, \(v_{pcc,d}\) and \(v_{pcc,q}\) are the d and q coordinates of the voltage at the point of common coupling (\(V_{pcc}\)) in the dq reference frame, \(V_{g}\) is the grid voltage, \(r_{Lg}\) and \(L_{g}\) denote the grid-side impedance, and \(i_{d}^{c}\) and \(i_{q}^{c}\) are the dq currents in the PLL reference frame.
The second-order dynamics in (2) can be transformed into an equivalent swing equation of the PLL controller as formulated in [9]:
\[\begin{bmatrix}\dot{\delta}\\ M\dot{\omega}\end{bmatrix}=\begin{bmatrix}\omega\\ T_{m}-T_{e}-D\omega\end{bmatrix} \tag{3}\]
\[\begin{split} M&=1-k_{p}L_{g}i_{d}^{c}\\ T_{m}&=k_{i}(r_{Lg}i_{q}^{c}+L_{g}i_{d}^{c}\omega_{g})\\ T_{e}&=k_{i}V_{g}sin(\delta)\\ D&=k_{p}V_{g}cos(\delta)-k_{i}L_{g}i_{d}^{c}\end{split} \tag{4}\]
where \(\delta=\theta_{df}-\theta_{g}\) is the misalignment of the angle tracked by the PLL. By defining the state vector \(x=[\delta,\omega]^{T}\), we can represent the system dynamics in a more compact form as follows:
\[\frac{d}{dt}x=f(t,x,u) \tag{5}\]
where \(x\) is the state of the system and \(u\) collects the system and control parameters. For given initial conditions \(x_{0}\) and system parameters \(u\), we can use an ODE solver to evaluate the trajectory by integrating over small time steps. However, this can be computationally intensive, particularly when we need to integrate over a long time horizon and for multiple initial conditions and parameters. To address this challenge, we propose a physics-informed neural network (PINN) that can accurately approximate the trajectory \(x\) with significantly fewer computational resources.
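For illustration, solving (3)-(4) for a single post-fault trajectory with SciPy (which is also used for dataset generation in Section IV) could be sketched as follows; the parameter values are placeholders, not the values of Table I:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rom_rhs(t, x, u):
    """Right-hand side of the second-order PLL ROM in (3)-(4).
    x = [delta, omega]; u collects the system and control parameters."""
    delta, omega = x
    M = 1.0 - u["kp"] * u["Lg"] * u["id"]
    Tm = u["ki"] * (u["rLg"] * u["iq"] + u["Lg"] * u["id"] * u["wg"])
    Te = u["ki"] * u["Vg"] * np.sin(delta)
    D = u["kp"] * u["Vg"] * np.cos(delta) - u["ki"] * u["Lg"] * u["id"]
    return [omega, (Tm - Te - D * omega) / M]

# One trajectory from a post-fault initial state (placeholder values)
u = dict(kp=0.1, ki=5.0, Vg=1.0, rLg=0.05, Lg=0.5, id=1.0, iq=0.0, wg=1.0)
sol = solve_ivp(rom_rhs, (0.0, 1.0), [0.3, 0.0], args=(u,), max_step=1e-3)
```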
## III Physics-Informed Neural Network to Approximate the Reduced Order Model
Neural Networks (NNs) are universal approximators: an appropriately trained NN of sufficient size can approximate the output of any continuous function, including those of the ROM we consider in this paper, to arbitrary accuracy. To achieve this, the NN uses a group of interconnected hidden layers with multiple neurons to learn the relationship between the input and output layers. For our problem, the inputs to the NN are the state of the system after the fault is cleared (\(x_{0}\)), the prediction time, i.e. the time at which the NN should make the prediction (denoted by \(t\)), and a few of the relevant system parameters, such as the grid impedance (denoted by \(u\)). The NN approximation of the PLL state at time \(t\) can be written as follows:
\[x\approx\hat{x}=NN(x_{0},t,u) \tag{6}\]
A standard NN, with \(K\) hidden layers and \(N_{k}\) neurons in hidden layer \(k\), is shown in Fig. 3. Each neuron in a hidden layer is connected with the neurons in the neighboring layers through a set of edges. The information exiting one neuron goes through a linear transformation over the respective edge before reaching the neuron in the subsequent layer. In every neuron, a nonlinear so-called "activation function" is applied to the information to introduce nonlinear relationships into the approximator. The NN for approximating the trajectory \(x\) can be formulated as follows:
\[Z_{0} =[t,x_{0},u] \tag{7}\] \[\mathbf{\hat{Z}}_{k} =\mathbf{w_{k}}\mathbf{Z}_{k-1}+\mathbf{b_{k}}\] (8) \[\mathbf{Z}_{k} =\mathbf{\sigma}(\mathbf{\hat{Z}}_{k})\] (9) \[\hat{x} =\mathbf{w_{K}}\mathbf{Z}_{K-1}+\mathbf{b_{K}} \tag{10}\]
where \(\mathbf{Z}_{k}\) is the output of the neurons in layer \(k\), \(\mathbf{\hat{Z}}_{k}\) is the information received at layer \(k\), \(\mathbf{w_{k}}\) and \(\mathbf{b_{k}}\) are the
Fig. 3: Illustration of the neural network architecture to predict the evolution of the reduced-order model: there are K hidden layers in the neural network with \(N_{k}\) neurons each, where k = 1,...,K.
Fig. 2: (a) Reduced-order WT system representation in the dq domain (b) Typical synchronous reference frame PLL. (c) Vector diagram: Misalignment between the PLL reference frame and the grid reference frame.
weights and biases connecting layer \(k-1\) and \(k\), and \(\sigma\) is the nonlinear activation function. There is a range of possible activation functions, such as the sigmoid function, the hyperbolic tangent, the Rectifier Linear Unit (ReLU), and others. In this paper, we use the ReLU as the activation function, similar to the vast majority of recent papers, as this has been shown to accelerate the NN training [17]. The ReLU activation function can be formulated as follows:
\[\mathbf{Z}_{k}=\max(\mathbf{\hat{Z}}_{k},0) \tag{11}\]
The average error in predicting the state of the PLL (denoted by \(\mathcal{L}_{x}\)) for different starting points and time steps in the training data set is measured by:
\[\mathcal{L}_{x}=\frac{1}{N}\sum_{i=1}^{N}|\mathbf{x}_{i}-\hat{\mathbf{x}}_{i}| \tag{12}\]
where \(N\) is the total number of data points in the training set, \(\mathbf{x}_{i}\) is the state of the system obtained by solving the ODE, and \(\hat{\mathbf{x}}_{i}\) is the predicted state of the system.
The NN is trained using the backpropagation algorithm, which modifies the weights and biases of the NN in every training iteration to minimize the average prediction error \(\mathcal{L}_{x}\). However, for the NN to accurately approximate the dynamics of the nonlinear system, it requires a considerable amount of training data. Creating this amount of training data is computationally expensive and can render this approach infeasible. To overcome this challenge, we propose a physics-informed neural network (PINN) to approximate the reduced-order model in this paper.
### _Physics-Informed Neural Network_
Considering that the NN prediction should satisfy the ROM formulation given in (4), we can improve the NN's generalization capabilities by imposing that the temporal derivative of the NN's approximation \(\hat{x}\) w.r.t. the time \(t\) (i.e., \(\frac{d}{dt}\hat{x}\)), calculated using automatic differentiation (AD), matches the state update \(f(t,x,u)\) at all the training data points. This can be promoted by including the following loss function in the NN training:
\[\mathcal{L}_{dt}=\frac{1}{N}\sum_{i=1}^{N}|\mathbf{f}(\mathbf{t},\mathbf{x}_{ i},\mathbf{u})-\frac{\mathbf{d}}{\mathbf{dt}}\hat{\mathbf{x}}_{i}| \tag{13}\]
The resulting NN (denoted by dtNN) loss function can be formulated as follows:
\[\mathcal{L}_{dtNN}=\Lambda_{x}\mathcal{L}_{x}+\Lambda_{dt}\mathcal{L}_{dt} \tag{14}\]
where \(\Lambda_{x}\) and \(\Lambda_{dt}\) are the weights given to the respective loss functions.
Furthermore, we can assess the consistency of the NN's prediction \(\hat{x}\) by comparing its temporal derivative \(\frac{d}{dt}\hat{x}\) with the state update computed from the neural network approximation, \(f(t,\hat{x},u)\). This way, we can consider additional training points, also called collocation points, in the training space for which we do not have to use computational resources to evaluate the ODE solution \(x\). Instead, the reduced-order model helps the NN learn the dynamics of the controller at these new training data points. The loss function can be formulated as follows:
\[\mathcal{L}_{f}=\frac{1}{N}\sum_{i=1}^{N}|\mathbf{f}(\mathbf{t},\mathbf{\hat{ x}_{i}},\mathbf{u})-\frac{\mathbf{d}}{\mathbf{dt}}\mathbf{\hat{x}_{i}}| \tag{15}\]
The loss functions given in (13) and (15) can be weighted and added to the NN loss function in (14) to get the proposed PINN loss function as follows:
\[\mathcal{L}=\Lambda_{x}\mathcal{L}_{x}+\Lambda_{dt}\mathcal{L}_{df}+\Lambda_{f }\mathcal{L}_{f} \tag{16}\]
where \(\Lambda_{f}\) is the weights given to the loss functions \(\mathcal{L}_{f}\). The proposed PINN algorithm for training the NN is given in Fig. 4.
### _Recurrent PINN for PLL stability assessment_
NNs for predicting time-series data usually perform much better when trained for a short time window. However, the time required for the PLL to reach a stable equilibrium point depends on the initial values of \(\delta\) and \(\omega\) after the fault is cleared. Initial conditions closer to the equilibrium point reach the equilibrium much faster than those farther away. This implies that the NN approximator would have to work for a wide range of prediction times \(t\) to make accurate predictions about the stability of the PLL. However, the underlying dynamics of the PLL controller (the ROM in (4)) remain the same until the system reaches equilibrium. Therefore, designing a huge NN to approximate the trajectory for a wide range of prediction time windows would be inefficient.
To address this issue, we developed a recurrent PINN, denoted by Re-PINN, by limiting the prediction window of the NN approximator to a fixed value, denoted by \(\overline{T}\). Once deployed, we use the Re-PINN to approximate the system state for a time t \(\in\) [0, \(\overline{T}\)]. If the system does not reach equilibrium in that interval, we give the state of the system at \(\overline{T}\) (i.e., x(\(\overline{T}\))) as the new starting point to the Re-PINN and evaluate again. This allows the Re-PINN, as depicted in Fig. 5, to make accurate predictions regardless of the time it takes for the PLL controller to reach stability.
Fig. 4: Illustration of the Physics-informed neural network architecture to predict the evolution of the reduced-order model: There are K hidden layers in the neural network with \(N_{k}\) neurons each. Where k = 1,...,K.
Limiting the prediction window of the NN to a fixed value, \(\overline{T}\), helped us reduce the NN size while maintaining the same level of accuracy. Moreover, for a small value of \(\overline{T}\), this approach also allowed us to restrict the training dataset time window to \(\overline{T}\), thereby reducing the computational time required to solve the ROM for generating the training dataset.
However, an extremely small value of \(\overline{T}\) could make the nonlinear dynamics of the PLL controller appear linear to the NN approximation, resulting in a piece-wise linear approximation of the PLL controller dynamics. Furthermore, with an extremely small value of \(\overline{T}\), the user has to make multiple NN predictions to approximate the entire trajectory of the PLL until it reaches equilibrium. Hence, it is essential to choose a suitable value of \(\overline{T}\) based on the system dynamics. In the case studies discussed in Section IV, we used a \(\overline{T}\) of 100 ms.
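A minimal sketch of this recurrent rollout (our illustration of Fig. 5, with assumed tensor shapes):

```python
import torch

def repinn_rollout(net, x0, u, horizon=1.0, T_bar=0.1, dt=0.01):
    """Roll the fixed-window predictor forward: predict on (0, T_bar],
    then restart from x(T_bar) until the horizon is covered.

    x0: (1, n_states) post-fault state; u: (1, n_params) parameters."""
    ts = torch.arange(dt, T_bar + 1e-9, dt).unsqueeze(-1)
    traj, x = [], x0
    for _ in range(int(round(horizon / T_bar))):
        inp = torch.cat([ts, x.expand(len(ts), -1), u.expand(len(ts), -1)], dim=-1)
        window = net(inp)
        traj.append(window)
        x = window[-1:].detach()  # x(T_bar) becomes the new starting point
    return torch.cat(traj)
```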
## IV Results
This section presents a comparative analysis of the proposed Re-PINN approach with a recurrent NN (denoted Re-NN) and a Re-dtNN, which compares the temporal derivative of the NN with the reduced-order model (ROM) introduced in [9], for accurately predicting the dynamics of a PLL controller. The ROM is known to capture the slow dynamics of a PLL controller accurately [9]. Hence, we compare the performance of the Re-PINN, Re-NN, and Re-dtNN approximators against the ROM. Furthermore, the trajectory predicted by the Re-PINN is also compared to an EMT switching simulation model of a WT in PSCAD on the CIGRE benchmark model C4.49, using the system and control parameters provided in Table I.
### _Neural Network Training Setup_
The objective of this work is to analyze the ability of the Re-PINN to accurately predict the transient stability of the system under varying grid strength. Thus, to simplify the training process, all system and control parameters other than the grid impedance were assumed constant while generating the dataset. Furthermore, the X/R ratio of the grid was considered constant. While generating the training and test datasets, the values of both \(r_{Lg}\) and \(L_{g}\) were scaled by a factor \(\alpha\in[0.1,2]\). The initial state space for \(\delta\) and \(\omega\) was limited to \(-\pi\) to \(\pi\) radians and -60 to 60 radians/second, respectively. We chose this region to limit the number of unstable initial states, since an unstable initial state typically results in a high value of \(\omega\) and a rapidly varying \(\delta\). Accurately capturing the dynamics of such states would require a large NN. Moreover, the controller is usually cut off after a specific cut-off frequency \(\omega\) for unstable initial states, making accurate predictions of their system state less critical. Thus, it is more important that the NN accurately predicts the dynamics of stable systems and reliably identifies unstable initial states.
To train the three ML algorithms, we used a generated training dataset comprising 12,000 independent random trajectories spanning 100 ms, computed using the Runge-Kutta solver in SciPy. In addition, we provided the proposed Re-PINN model with 24,000 random initial states from the input domain as collocation points to enhance the accuracy of the predictions. Unlike for the training dataset, we did not compute the system trajectory for these collocation points. To evaluate the performance of the three ML algorithms, we tested them on a dataset of 24,000 independent random trajectories spanning 1 second.
A Re-NN, a Re-dtNN, and a Re-PINN with four hidden layers and 100 nodes in each layer are used to predict the ROM solutions. The ML algorithms were implemented using PyTorch. WandB [18] was used for monitoring and tuning the hyperparameters. The NNs were trained on a High-Performance Computing (HPC) server with an Intel Xeon E5-2650v4 processor and an NVIDIA Tesla V100 GPU with 16 GB RAM. The code and datasets to reproduce the results are available online [19].
### _Comparing Different Neural Network Algorithms_
This section evaluates the performance of the three NN models in predicting the dynamics of the PLL under fault. The aim is to assess the generalization capabilities of each ML algorithm by comparing the NN predictions with the ROM solutions on the test set. The mean absolute error (MAE) on the test set was recorded during the NN training iterations, as shown in Fig. 6.
The results indicate that the Re-dtNN model, which compared the temporal derivative of the NN with ROM in the training dataset, achieved significantly better prediction accuracy in the test set than a similarly sized Re-NN model. Moreover, the Re-PINN model outperformed the Re-NN and Re-dtNN models by almost an order of magnitude, thanks to the additional collocation points in the training
Fig. 5: Illustration of the recurrent physics-informed neural network (Re-PINN) architecture to predict the evolution of the reduced-order model: Re-PINN approximate the system state for a time t \(\in[0,\overline{T}]\). If the system did not reach equilibrium in that interval, then we can give the state of the system at \(\overline{T}\), denoted by x(\(\overline{T}\)), as starting point to the Re-PINN and evaluate again.
set. Furthermore, with these extra collocation points, the Re-PINN model converged much faster than the other two NN models.
Additionally, to investigate whether the recurrent NN models suffer from error propagation as the prediction time increases, we plotted the MAE in predicting the trajectory as a function of prediction time for all stable initial states in the test set. The MAE of Re-NN, Re-dtNN, and Re-PINN in predicting the PLL state \(\delta\) was compared against the ROM at 50 fixed time steps between zero and one second, as illustrated in Fig. 7.
The results indicate that none of the three NN models suffers from error propagation on the test set. Additionally, the Re-PINN predictions were almost an order of magnitude more accurate than the Re-NN predictions regardless of the prediction time, and with the Re-PINN, the MAE decreased as the prediction time increased.
Furthermore, the trajectory predicted by the Re-PINN was compared against an EMT switching simulation in PSCAD, and the Re-PINN managed to capture the dynamics reasonably well for both a stable and an unstable initial state, as shown in Fig. 8.
Additionally, our experiments revealed that generating the training and test datasets required half an hour, while training each of the three NN architectures using the GPU took only 10 to 15 minutes. These results indicate that dataset creation is more time-consuming than NN training, and that the PINN architecture achieved significantly better results than the other NN architectures while requiring a comparable amount of computational time.
### _Predicting Region of Attraction using Physics-Informed Neural Network_
To ensure that the Re-PINN does not misclassify any unstable cases as stable, we compared the region of attraction (ROA) predicted by the Re-PINN against the ROA obtained using the ROM solutions. The ROA is the set of initial states of the system from which all trajectories converge to a stable equilibrium point. The ROA can be calculated using Lyapunov's direct method or the equal area criterion. However, these methods are often approximations and do not accurately capture the region [11]. In this section, we demonstrate the ability of PINNs to accurately predict the ROA for a PLL controller in the CIGRE benchmark model C4.49 by evaluating the trajectory at multiple initial states.
The contour plot of the ROA, with the time taken for each initial condition to reach a stable equilibrium, is shown in Fig. 9. We calculated the time taken to reach the stable equilibrium as follows:
\[t=t_{eq} \tag{17}\]
s.t.
\[\delta(t_{eq})=\delta_{eq}\pm\epsilon_{\delta} \tag{18}\]
\[\omega(t_{eq})=\omega_{eq}\pm\epsilon_{\omega} \tag{19}\]
where \(\delta_{eq}\) and \(\omega_{eq}\) are the equilibrium states that can be reached after the fault is cleared; they are computed using the formulation given in [9]. \(\epsilon_{\delta}\) and \(\epsilon_{\omega}\) are small error terms used to absorb the tiny fluctuations in the Re-PINN prediction. The ROA was evaluated at 25,600 evenly spaced initial states (\(\delta\), \(\omega\)) for ten different \(\alpha\) values. The resulting ROA is given in Fig. 9.
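One way to implement the check in (17)-(19) on a predicted trajectory, requiring the state to remain inside the epsilon-band for all later times, is sketched below (the tolerance values are illustrative):

```python
import numpy as np

def time_to_equilibrium(ts, traj, delta_eq, omega_eq, eps=(1e-2, 1e-1)):
    """First time after which traj = [delta, omega] stays inside the
    epsilon-band around the equilibrium; inf if it never settles."""
    inside = (np.abs(traj[:, 0] - delta_eq) < eps[0]) & \
             (np.abs(traj[:, 1] - omega_eq) < eps[1])
    # require the trajectory to remain inside for all subsequent times
    stays = np.flip(np.logical_and.accumulate(np.flip(inside)))
    return ts[np.argmax(stays)] if stays.any() else np.inf
```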
Based on the results presented in Fig. 9, we observed that the Re-PINN accurately classified the PLL system's stable and unstable initial states compared to the ROM proposed in [9]. Notably, the Re-PINN achieved this in 10 minutes. Even when accounting for dataset generation and training, the Re-PINN took less than one hour, while the ROM required over two hours to evaluate the system trajectories on the HPC cluster. We attribute this performance improvement to our use of GPUs and CUDA for training the Re-PINN.
Traditionally, CPUs were used for computationally intensive tasks, while GPUs were reserved for graphics rendering. However, unlike CPUs, which have a limited number of powerful processing cores, GPUs have hundreds of less powerful cores with high memory bandwidth. This makes GPUs extremely useful for highly parallelizable tasks like ML training. By utilizing GPUs for Re-PINN training, we achieved a speed-up of 10 to 20 times compared to CPUs
Fig. 8: Trajectory predicted by the Re-PINN as compared to the ROM and PSCAD for an initial grid voltage phase jump of \(20^{\circ}\) (left) and \(150^{\circ}\) (right).
Fig. 6: The mean absolute error (MAE) in the test set during NN training iterations for different NN algorithms.
Fig. 7: Comparing the performance of the trained NN algorithms.
in the HPC, making online training and deployment of the Re-PINN competitive with ROM.
Additionally, the Re-PINN enabled the rapid assessment of the ROA for a significantly larger number of points than the ROM. Using the Re-PINN, we evaluated the PLL trajectory for 5 million evenly spaced initial states in under half an hour, a task that would have taken the ROM over two days. The resulting ROA is given in Fig. 10. For a more detailed depiction of the ROA, please refer to [19] for high-definition images.
## V Conclusion
In this paper, we presented a novel recurrent Physics-Informed Neural Network (Re-PINN) architecture to (i) accurately predict the nonlinear transient dynamics of a PLL controller under fault with limited labeled training data, and (ii) do so at much faster time scales than conventional simulation approaches. To evaluate the performance of the proposed Re-PINN algorithm, we compared it against a Reduced-Order Model (ROM) for a renewable generator with an SRF-PLL controller under varying grid impedance. The results demonstrate that the Re-PINN can accurately approximate the trajectories for varying grid impedance. Leveraging GPU acceleration for the Re-PINN training, we demonstrate that the online training and deployment of the Re-PINN algorithm is orders of magnitude faster than an existing ROM approximation. We show that the Re-PINN algorithm can generate a detailed Region of Attraction for the nonlinear dynamic system, with five million different initial conditions, in just half an hour; the ROM approximation would have taken more than two days to compute. In the future, this work will focus on the scalability of the approach, expanding to include more system and control parameters in the NN approximator.
|
2307.05189 | Using Linear Regression for Iteratively Training Neural Networks | We present a simple linear regression based approach for learning the weights
and biases of a neural network, as an alternative to standard gradient based
backpropagation. The present work is exploratory in nature, and we restrict the
description and experiments to (i) simple feedforward neural networks, (ii)
scalar (single output) regression problems, and (iii) invertible activation
functions. However, the approach is intended to be extensible to larger, more
complex architectures. The key idea is the observation that the input to every
neuron in a neural network is a linear combination of the activations of
neurons in the previous layer, as well as the parameters (weights and biases)
of the layer. If we are able to compute the ideal total input values to every
neuron by working backwards from the output, we can formulate the learning
problem as a linear least squares problem which iterates between updating the
parameters and the activation values. We present an explicit algorithm that
implements this idea, and we show that (at least for small problems) the
approach is more stable and faster than gradient-based methods. | Harshad Khadilkar | 2023-07-11T11:53:25Z | http://arxiv.org/abs/2307.05189v2 | # Using Linear Regression for Iteratively Training Neural Networks
###### Abstract
We present a simple linear regression based approach for learning the weights and biases of a neural network, as an alternative to standard gradient based backpropagation. The present work is exploratory in nature, and we restrict the description and experiments to (i) simple feedforward neural networks, (ii) scalar (single output) regression problems, and (iii) invertible activation functions. However, the approach is intended to be extensible to larger, more complex architectures. The key idea is the observation that the input to every neuron in a neural network is a linear combination of the activations of neurons in the previous layer, as well as the parameters (weights and biases) of the layer. If we are able to compute the ideal total input values to every neuron by working backwards from the output, we can formulate the learning problem as a linear least squares problem which iterates between updating the parameters and the activation values. We present an explicit algorithm that implements this idea, and we show that (at least for small problems) the approach is more stable and faster than gradient-based methods.
## 1 Introduction
The training of neural networks is known to be a hard problem (Glorot and Bengio, 2010), and several strategies have been proposed to tackle it. The standard backpropagation strategy is known to have empirically poor results (Larochelle et al., 2009), leading to the following variations.
### Related work
Modifications for stabilising backpropagation include (i) initialisation strategies such as the use of complementary priors (Hinton et al., 2006) or the stacking of Restricted Boltzmann Machines for weight initialisation (Hinton, 2007), (ii) unsupervised approaches that aim to preserve information transfer through successive layers (Larochelle et al., 2009; Lowe et al., 2019), and (iii) regularisation and dropout (Wan et al., 2013; Zaremba et al., 2014; Srivastava et al., 2014). Nevertheless, training using gradient descent from random initialisation is known to be brittle (Glorot and Bengio, 2010) and possibly not a good model of biological learning (Lillicrap et al., 2020; Hinton, 2022).
Recently, the forward-forward algorithm (Hinton, 2022) was proposed for training neural networks in a new way that increases the weights for positive samples and decreases them for negative samples. However, this algorithm is expected to outperform backpropagation mainly in power-constrained or uncertain computational settings. An even more recent approach called Cascaded Forward (Zhao et al., 2023) aims to improve on this by forcing each layer to successfully represent the output probability distribution. The disadvantage of such approaches is that they move quite far
from the basic optimisation objective (output error minimisation) by imposing theoretical interpretations of weights, biases, and biological processes.
An alternative to theoretically complex approaches is the intuitively simple concept of neuroevolution (Stanley et al., 2019; Mirjalili, 2019). This idea proposes to do away with gradient based training, and instead implements randomised search on a large population of parameter vectors to find the strongest (most optimal) ones. While easy to understand and implement, neuroevolution is computationally prohibitive for training large neural networks.
### Contributions
Instead, we propose to use the linearisation of training in neural networks directly for parameter optimisation, as described in Sec. 2. The method is inspired by two ideas from prior literature. The first idea is the use of orthogonal least squares for the training of radial basis functions (RBF) (Chen et al., 1991; Huang and Zhao, 2005). The output of an RBF is observed to be composed of a linear combination of basis vectors, followed by a non-linearity. This observation is then used to compute the centers of new hidden units. The second idea is the point-wise linearity of training dynamics in wide neural networks (Lee et al., 2019). This study notes that in the infinite-width limit, the training dynamics of a neural network can be described using Taylor series expansions of the initial parameter values. Based on this inspiration, we claim the following contributions for this paper.
1. An algorithm based on iterative linear least squares (ILLS) that exploits the inherent linear relationship between the output of one layer and the input of the next layer, providing an alternative training method for neural networks. In this early work, we restrict the description to feedforward networks with invertible activation functions.
2. A set of experiments on 1-layer and 2-layer multi-input neural networks, that demonstrate a significant advantage over standard backpropagation using adaptive moment estimation. The experiments include 3 synthetic data sets and one publicly available data set.
## 2 Solution Methodology
### Illustrative network
Consider the neural network shown in Figure 1. The two inputs \(x_{1}\) and \(x_{2}\) pass through two hidden layers with two neurons each, and produce a single output \(y\). The activation of each neuron is \(f\), and is assumed to be invertible (one-to-one mapping from input to output). The goal of training is to compute the values of weights \(w_{ijk}\) and biases \(b_{ij}\) that minimise the mean-squared error between the predicted and the ideal output for each training data sample. In this notation, \(i\) is the index of the layer, \(j\) is the index of neuron within the layer, and \(k\) is the index of the neuron in the next layer. The input-output relationships are given by,
\[h_{11}=f(w_{111}\,x_{1}+w_{121}\,x_{2}-b_{11}) \tag{1}\]
\[h_{12}=f(w_{112}\,x_{1}+w_{122}\,x_{2}-b_{12}) \tag{2}\]
\[h_{21}=f(w_{211}\,h_{11}+w_{221}\,h_{12}-b_{21}) \tag{3}\]
\[h_{22}=f(w_{212}\,h_{11}+w_{222}\,h_{12}-b_{22}) \tag{4}\]
\[y=f(w_{311}\,h_{21}+w_{321}\,h_{22}-b_{31}) \tag{5}\]
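For concreteness, the forward pass (1)-(5) can be written directly in code. The following is a minimal NumPy sketch, assuming tanh as the activation \(f\); the dictionary-based indexing is our own convention, not the paper's code.

```python
import numpy as np

def forward(x1, x2, w, b, f=np.tanh):
    """Forward pass of the network in Figure 1, following equations (1)-(5).

    w is a dict keyed by (i, j, k) and b a dict keyed by (i, j),
    mirroring the index convention w_ijk / b_ij used in the text."""
    h11 = f(w[1, 1, 1] * x1 + w[1, 2, 1] * x2 - b[1, 1])    # (1)
    h12 = f(w[1, 1, 2] * x1 + w[1, 2, 2] * x2 - b[1, 2])    # (2)
    h21 = f(w[2, 1, 1] * h11 + w[2, 2, 1] * h12 - b[2, 1])  # (3)
    h22 = f(w[2, 1, 2] * h11 + w[2, 2, 2] * h12 - b[2, 2])  # (4)
    y   = f(w[3, 1, 1] * h21 + w[3, 2, 1] * h22 - b[3, 1])  # (5)
    return h11, h12, h21, h22, y
```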
### Preliminaries
Consider the network architecture shown in Figure 1. In the following description, we denote the estimated values of parameters by hat superscripts. For example, the estimated value of \(w_{111}\) is written as \(\hat{w}_{111}\). We also denote the training data set of \(N\) samples by tuples \((x_{1},x_{2},y)_{n}\), where \(n\in\mathbb{Z}^{+}\) and \(1\leq n\leq N\). Note that we have \(N\) simultaneous equations to be satisfied. At the same time, we have \((4N+15)\) unknowns consisting of \(15\) network parameters and \(4N\) hidden variables. The system is thus underdetermined. We now consider the output equation (5). Since \(f\) is assumed to be invertible, we can directly compute the vector \(H=f^{-1}(y)\) for the data set, which is the total input required for the output neuron. The following relationship must hold if the output error is 0,
\[\hat{w}_{311}\,\hat{h}_{21}+\hat{w}_{321}\,\hat{h}_{22}-\hat{b}_{31}=H. \tag{6}\]
We note that (6) is linear in \((\hat{h}_{21},\hat{h}_{22})\) when \((\hat{w}_{311},\hat{w}_{321},\hat{b}_{31})\) are held constant, and vice versa. We may therefore consider solving this equation using iterative linear least squares. The first step with constant \((\hat{h}_{21},\hat{h}_{22})\) is straightforward. Given current values of the hidden activations, (6) is a system of \(N\) linear equations which can be solved to find the optimal values of \((\hat{w}_{311},\hat{w}_{321},\hat{b}_{31})\). The next step is trickier. For the updated values of \((\hat{w}_{311},\hat{w}_{321},\hat{b}_{31})\), we may consider solving for the new values of \((\hat{h}_{21},\hat{h}_{22})\). However, there are \(2N\) of these variables, and we cannot arbitrarily vary the hidden activation values since these must be the outputs of the previous set of neurons. We therefore propose to take a small step along the linearised gradient of the activation function \(f\). Consider the hidden activation \(\hat{h}_{21}\). The input to the corresponding neuron is given using (3) as,
\[I_{21}=\hat{w}_{211}\,\hat{h}_{11}+\hat{w}_{221}\,\hat{h}_{12}-\hat{b}_{21}=f ^{-1}(\hat{h}_{21}) \tag{7}\]
Since we propose to update the weight and bias parameters of the previous layer in order to modify \(\hat{h}_{21}\), we need the partial derivatives of \(I_{21}\) with respect to the parameters,
\[\frac{\partial I_{21}}{\partial\hat{w}_{211}}=\hat{h}_{11},\;\frac{\partial I _{21}}{\partial\hat{w}_{221}}=\hat{h}_{12},\;\frac{\partial I_{21}}{\partial \hat{b}_{21}}=-1 \tag{8}\]
Using (3), the linearised change in activation \(\hat{h}_{21}\) given small deviations to the previous parameters is,
\[\Delta\hat{h}_{21}=\frac{\partial f}{\partial I_{21}}\Big{|}_{\hat{h}_{11}, \hat{h}_{12}}\cdot\left(\alpha_{211}\,\hat{h}_{11}+\alpha_{221}\,\hat{h}_{12}- \beta_{21}\right), \tag{9}\]
where \(\alpha_{211},\alpha_{221},\beta_{21}\) are the proposed deviations in the values of \(\hat{w}_{211},\hat{w}_{221},\hat{b}_{21}\) respectively. Along similar lines, we can also write
\[\Delta\hat{h}_{22}=\frac{\partial f}{\partial I_{22}}\Big{|}_{\hat{h}_{11},\hat{h}_{12}}\cdot\left(\alpha_{212}\,\hat{h}_{11}+\alpha_{222}\,\hat{h}_{12}-\beta_{22}\right). \tag{10}\]
Instead of solving (6) directly to get \(\hat{h}_{21},\hat{h}_{22}\), we solve for the deviation in the previous layer parameters, which lets us (approximately) satisfy the input-output relationship of the neuron. We thus solve,
\[\hat{w}_{311}\,(\hat{h}_{21}+\Delta\hat{h}_{21})+\hat{w}_{321}\,(\hat{h}_{22} +\Delta\hat{h}_{22})-\hat{b}_{31}=H\]
\[\Rightarrow\hat{w}_{311}\,\Delta\hat{h}_{21}+\hat{w}_{321}\,\Delta\hat{h}_{22 }=H+\hat{b}_{31}-\hat{w}_{311}\,\hat{h}_{21}-\hat{w}_{321}\,\hat{h}_{22}, \tag{11}\]
where \(\Delta\hat{h}_{21},\,\Delta\hat{h}_{22}\) are given by (9) and (10). Note that (11) is a linear equation in 6 variables \(\alpha_{211},\alpha_{221},\beta_{21},\alpha_{212},\alpha_{222},\beta_{22}\). This can be solved to compute proposed deviations in the weights and biases of the previous layer (which are unconstrained and small in number), rather than (6), which would compute deviations in the constrained hidden activations which are \(2N\) in number. Since this is a linear approximation, it is only valid in a small region around \(\hat{h}_{21},\,\hat{h}_{22}\). Therefore we define a learning rate \(\rho\) for updating the values,
\[\hat{h}_{21}\leftarrow\hat{h}_{21}+\rho\,\frac{\Delta\hat{h}_{21}}{N(\Delta \hat{h}_{21},\Delta\hat{h}_{22})},\;\hat{h}_{22}\leftarrow\hat{h}_{22}+\rho\, \frac{\Delta\hat{h}_{22}}{N(\Delta\hat{h}_{21},\Delta\hat{h}_{22})}, \tag{12}\]
where \(N(\Delta\hat{h}_{21},\Delta\hat{h}_{22})\) is a common normalisation constant that maps the un-normalised values of \(\Delta\hat{h}_{21},\Delta\hat{h}_{22}\) to the range \([-1,1]^{N}\). We emphasise that the deviations in the hidden activations are computed based on the regressed values of \(\alpha\)'s and \(\beta\)'s, and thus correspond to approximately feasible updates for the parameters.

Figure 1: Sample network with two hidden layers.
Taken together, the computation of \((\hat{w}_{311},\hat{w}_{321},\hat{b}_{31})\) using (6) and then the update of \(\hat{h}_{21},\hat{h}_{22}\) using (12) constitutes one batch update of the last layer of the network from Figure 1. We utilise the same logic to proceed backwards up to the input layer, with two small modifications in the following cases (details in Algorithm 1).
1. When a hidden activation is sent to multiple neurons in the next layer, we make one update for every destination neuron. For example, \(\hat{h}_{11}\) and \(\hat{h}_{12}\) in Figure 1 drive both \(\hat{h}_{21}\) and \(\hat{h}_{22}\). We can thus compute one update analogous to (12) via \(\hat{h}_{21}\), and another via \(\hat{h}_{22}\). In Algorithm 1, we make one update to \(\hat{h}_{11}\) and \(\hat{h}_{12}\) (and hence to the four weights and two biases in the input layer) based on \(\hat{h}_{21}\), and another based on \(\hat{h}_{22}\) in every backward pass.
2. In the final step of the backward pass (at the input layer), we only update the weights and biases, since the inputs \(x_{1}\) and \(x_{2}\) are known and fixed.
## 3 Results
### A note on initialisation
Algorithm 1 shows that the initial values of \(\hat{w}_{ijk}\) and \(\hat{b}_{ij}\) have an important effect on training, since they affect the initial values of hidden activations which result in the first update in step (ii). As a result, the experiments reported below (all of which use tanh activation) use two types of initialisation. The first 'default' option is using the standard initialiser in PyTorch. The second 'custom' initialisation uses the following distributions for weights and biases,
\[\hat{w}_{ijk} \sim\mathcal{U}\left([-0.75,-0.25]\cup[0.25,0.75]\right)\] \[\hat{b}_{ij} \sim\mathcal{U}[-0.1,0.1],\]
where \(\mathcal{U}\) is the uniform distribution. The ranges defined above ensure that the tanh activation in every layer covers a substantial portion of the \([-1,1]\) output range, and also has sufficiently large magnitudes for the partial derivatives in (8).
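One way to realise this custom initialisation is sketched below; it simply samples from the two stated distributions and is an illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def custom_init_weights(size):
    """Weights uniform on [-0.75, -0.25] U [0.25, 0.75]."""
    magnitude = rng.uniform(0.25, 0.75, size)   # magnitude in [0.25, 0.75]
    sign = rng.choice([-1.0, 1.0], size)        # random sign selects the branch
    return sign * magnitude

def custom_init_biases(size):
    """Biases uniform on [-0.1, 0.1]."""
    return rng.uniform(-0.1, 0.1, size)
```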
```
Input: Initial guesses for weights \(\hat{w}_{ijk}\) and biases \(\hat{b}_{ij}\), training data set of \(N\) tuples \((x_{1},x_{2},y)_{n}\), where \(n\in\mathbb{Z}^{+}\), \(1\leq n\leq N\), learning rate \(\rho\).
for \(e\gets 1\) to max_epochs do
    i.    Compute forward pass using (1)-(5), with estimates \(\hat{w}_{ijk}\) and \(\hat{b}_{ij}\) in place of the true (unknown) values
    ii.   Update \((\hat{w}_{311},\hat{w}_{321},\hat{b}_{31})\) using (6), holding \(\hat{h}_{21},\hat{h}_{22}\) constant
    iii.  Compute \(\alpha_{211},\alpha_{221},\beta_{21},\alpha_{212},\alpha_{222},\beta_{22}\) using (11)
    iv.   Update \(\hat{h}_{21}\) and \(\hat{h}_{22}\) using (12)
    v.    Compute \((\hat{w}_{211},\hat{w}_{221},\hat{b}_{21})\) by fitting (7) to \(f^{-1}(\hat{h}_{21})\)
    vi.   Compute \((\hat{w}_{212},\hat{w}_{222},\hat{b}_{22})\) by fitting \(I_{22}\) to \(f^{-1}(\hat{h}_{22})\)
    vii.  Update \(\hat{h}_{11}\) and \(\hat{h}_{12}\) by computing \(\alpha_{111},\alpha_{121},\beta_{11},\alpha_{112},\alpha_{122},\beta_{12}\) that minimise error with respect to \(I_{21}\)
    viii. Update \(\hat{h}_{11}\) and \(\hat{h}_{12}\) by computing \(\alpha_{111},\alpha_{121},\beta_{11},\alpha_{112},\alpha_{122},\beta_{12}\) that minimise error with respect to \(I_{22}\)
    ix.   Compute \(\hat{w}_{111},\hat{w}_{121},\hat{b}_{11}\), minimising error to \(f^{-1}(\hat{h}_{11})\)
    x.    Compute \(\hat{w}_{112},\hat{w}_{122},\hat{b}_{12}\), minimising error to \(f^{-1}(\hat{h}_{12})\)
end for
return final values of \(\hat{w}_{ijk}\) and \(\hat{b}_{ij}\)
```
**Algorithm 1** Training of the 2-layer network from Fig. 1.
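As an illustration of step (ii), once \(H=f^{-1}(y)\) is computed, the final-layer fit reduces to an ordinary least-squares problem. The NumPy sketch below assumes tanh activation, so \(f^{-1}=\mathrm{arctanh}\); it is a sketch of the idea, not the paper's implementation.

```python
import numpy as np

def update_last_layer(h21, h22, y):
    """Step (ii) of Algorithm 1: fit (w311, w321, b31) by linear least squares.

    h21, h22 : (N,) current hidden activations
    y        : (N,) target outputs, assumed to lie in (-1, 1) for tanh
    """
    H = np.arctanh(y)                       # required total input, H = f^{-1}(y)
    A = np.column_stack([h21, h22, -np.ones_like(h21)])
    sol, *_ = np.linalg.lstsq(A, H, rcond=None)   # solves (6) in the least-squares sense
    w311, w321, b31 = sol
    return w311, w321, b31
```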
### Experiments
We run four experiments to demonstrate the effectiveness of the ILLS algorithm, as described below. All experiments use tanh activation for all neurons, since it satisfies the invertibility requirement on \(f\). We use the Adam optimiser in PyTorch as the baseline. Both algorithms (ILLS and gradient descent) are run with both the default and the custom initialisations described above. For any given random seed, both algorithms start from identical initial values of weights and biases for the same type of initialisation. All results are reported using mean and standard deviation on 10 random seeds.
**Experiment 1 (synthetic, 2 layer):** Considering the network shown in Figure 1, we generate a data set of \(N=100\) pairs \((x_{1},x_{2})\) from a uniform distribution on \([-1,1]\). Using a synthetically generated set of weight and bias values, we compute the ground truth output \(y_{n}\) for each sample. This set is fed to Algorithm 1 with different values of learning rate \(\rho\). The same network is also implemented using PyTorch and trained using the Adam optimiser on the same data set, with different learning rates and default momentum value. A randomly shuffled version of the entire data set is used as the batch for every epoch. Figure 2 shows the comparison results averaged over 10 random seeds for each algorithm, with both default and custom initialisation. The x-axis is logarithmic. Starting from the same initial training error, ILLS takes a large downward step in the first iteration, and ends with a lower final error than Adam. The custom initialisation starts with a lower error for both algorithms, though this is probably incidental. Finally, we note that Adam begins to diverge for the highest learning rates in the figure (\(5\times 10^{-4}\)), possibly due to overfitting on the small sized data set. This behaviour is not observed for ILLS.
**Experiments 2 and 3 (synthetic, 1 layer):** In this pair of experiments, we use the network with one hidden layer and 3 inputs as shown in Figure 3. The update relationships can be written down by following the procedure outlined in Sec. 2, but adapted to three inputs instead of two. Specifically, the linearised deviation to \(\hat{h}_{11},\hat{h}_{12}\) in Figure 3 contains six \(\alpha\)'s (corresponding to the six weights in the input layer) and two \(\beta\)'s, instead of four \(\alpha\)'s and two \(\beta\)'s. Figures 4 and 5 show a comparison of the training performance for two different sets of ideal weights and biases (specified in the respective captions). In both cases, we see a similar behaviour to that observed in Experiment 1. ILLS takes a large initial step followed by continuous improvement, with higher learning rates converging earlier. This time, we are not able to run Adam on the fastest rate of \(0.1\) because it begins to diverge at a rate of \(0.01\) itself.
From Figure 6, we can also confirm that ILLS not only minimises the output error, but is able to compute a good approximation of the true parameters. Because of the symmetry of tanh along both input and output axes, the optimal set of weights and biases come with an equally optimal solution with opposite sign. Specifically, we observe that,
\[\tanh\left(\sum w_{ijk}\,h_{ijk}-b_{ij}\right)=-\tanh\left(\sum(-w_{ijk})\,h _{ijk}-(-b_{ij})\right),\]
which can be mapped to the required value by also flipping the sign of the weight of the next layer. As a result, Figure 6 plots the error in element-wise absolute values of the parameter estimates.
**Experiment 4 (public data set, 1 layer network):** For our final experiment, we use the US airline passengers data set ("R-v3.6", 2023) available by default in R-v3.6. This data contains a time series of monthly passenger counts flown by airlines in the US between 1949 and 1960. For the experiment, we normalise the time series and create a training data set with 3 successive time steps as the input and the fourth step as the expected output, resulting in 142 data samples. The network shown in Figure 3 is used for modelling the problem. Figure 7 shows the comparison as before, with different learning rates. Note that in this case Adam begins to diverge at a rate of \(0.001\) itself for both types of initialisation. ILLS is able to converge for rates at least as high as \(0.1\), even though the ideal parameters in this case are unknown.
Figure 4: Comparison of training loss for Adam and ILLS, for the one-layer network with assumed parameters \(w_{111}=1,w_{121}=2,w_{131}=-1,w_{112}=0,w_{122}=3,w_{132}=-2,b_{11}=1,b_{12}=-2, w_{211}=3,w_{221}=-2,b_{21}=0\). Curves correspond to different learning rates and initialisation schemes. Shaded regions show standard deviation over 10 random seeds. Note that Adam starts to diverge at the highest learning rates shown, and thus cannot be further increased.
Figure 3: Network with one hidden layer, used in experiments 2, 3, and 4.
Figure 2: Comparison of training loss for Adam and ILLS, for the two-layer network with assumed parameters \(w_{111}=-0.5,w_{121}=0,w_{112}=-1,w_{122}=1,b_{11}=1,b_{12}=-1,w_{211}=2,w_{2 12}=-0.5,w_{221}=0.5,w_{222}=2,b_{21}=0,b_{22}=2,w_{311}=1,w_{321}=-1,b_{31}=-1\). Curves correspond to different learning rates and initialisation schemes. Shaded regions show standard deviation over 10 random seeds. Note that Adam starts to diverge at the highest learning rates shown, and thus cannot be further increased.
Figure 5: Comparison of training loss for Adam and ILLS, for the one-layer network with assumed parameters \(w_{111}=-2,w_{121}=0,w_{131}=3,w_{112}=4,w_{122}=-1,w_{132}=2,b_{11}=0,b_{12}=2, w_{211}=-3,w_{221}=-1,b_{21}=-1\). Curves correspond to different learning rates and initialisation schemes. Shaded regions show standard deviation over 10 random seeds. Note that Adam starts to diverge at the highest learning rates shown, and thus cannot be further increased.
Figure 6: Plot of \(L2\) norm error between true and estimated parameters, for experiment 2 averaged over 10 random seeds, with the ILLS algorithm and a learning rate of \(\rho=0.1\). We plot the error between absolute values of true and estimated parameters, for reasons explained in the text.
Figure 7: Comparison of training loss for Adam and ILLS, for the one-layer network on the airline passenger data set. Curves correspond to different learning rates and initialisation schemes. Shaded regions show standard deviation over 10 random seeds. Note that Adam diverges at the highest learning rates shown, and thus cannot be further increased.
## 4 Discussion
The description of ILLS in Sec. 2 and the experiments in Sec. 3 are exploratory in nature, and are not meant to be conclusive evidence for ILLS being better than backpropagation. Let us consider a few important open questions based on this thought.
1. **Is ILLS just gradient descent in another form?** A quick glance at the partial derivatives in (8) and the step update in (12) raises the possibility of ILLS being equivalent to gradient descent. However, this is not the case, for the following reasons.
    1. Backpropagation with gradient descent effectively involves computing the sensitivity of the output error with respect to every parameter, and then taking a step opposite to the direction of the gradient. This is an 'open loop' update; once the gradients are computed, every parameter is updated independently. By contrast, ILLS computes the linearised optimal value of the parameters and the hidden activations in an alternating fashion. The parameter values are directly updated without any need for a learning rate. Only the hidden activations are updated in small steps, in recognition of their relationship to the weights and biases in the previous layer. Furthermore, this is a 'closed loop' update; the updates for every layer are designed to minimise the error to the _updated_ activations and parameters of the next layer.
    2. We do not need to handle any chain of non-linearities (over multiple layers) when using ILLS. The only derivatives appear in constant multipliers in relations such as (9) and (10), and are limited to a single non-linearity. In essence, we solve a hierarchy of compact optimisation problems, starting from the final layer and proceeding backwards. The parameters in some arbitrary layer \(i\) are not concerned with the error gradient with respect to the output; they are only focused on minimising the error to the values in layer \((i+1)\).
    3. From the previous two points, it is clear that ILLS cannot be easily parallelised, since the updates have to be cascaded backwards. It may be possible to do a staggered parallelised update (with one iteration delay in successive layers), but this is yet to be explored.
2. **How will ILLS extend to other activation functions?** The description provided in this paper is valid for all invertible activation functions such as sigmoid, tanh, leaky ReLU, etc. We have not discussed the applicability of ILLS to non-invertible functions. In this case, we note that while there are common activation functions with a many-to-one relationship from input to output (for example, the negative portion of ReLU), almost all activations have a substantial strictly monotonic region (such as the positive part of a ReLU). We may be able to apply ILLS to subsets of the training data which correspond to only this portion of the activation. Recall that given the current parameter estimates, the forward pass of the network can be carried out for every data point. In the update step, and for every individual layer, we can identify the subset of data that correspond to monotonic portions of the activation for all neurons in that layer. Once the regression is solved for this subset of data, the updated values of weights and biases are valid for the entire data set. However, this proposal is untested at the moment.
3. **How will ILLS extend to other loss functions?** There are two types to be considered.
    1. General \(L^{p}\) losses: The description in this paper covers the standard \(L^{2}\) loss which is used in linear least squares regression. In order to minimise a generic \(L^{p}\) loss, there is no structural change required to the algorithm; one can simply use the relevant linear best-fit optimiser to solve for the parameters in the final layer. In all previous layers, we can continue to use the least-squares formulation to match the required hidden activation values.
    2. Cross-entropy loss: In this case, the 'ideal' outputs \(y\) are one-hot vectors, which are not useful for ILLS because the ideal input values correspond to \(\pm\infty\). It may be possible to approximate the one-hot vector by a softer version (for example, by replacing an output value of \(1\) by \(0.99\)) and then using ILLS. Alternatively, one could translate outputs \(0\) to \(-0.5\) and \(1\) to \(+0.5\) respectively. However, these are speculative ideas. The most promising idea\(^{2}\) is to solve the final layer parameter estimation problem as a logistic regression problem (assuming \(f\) is the sigmoid function for classification tasks). All previous layers can continue to operate as \(L^{2}\) regression, since they only need to match the subsequent hidden activations.

    Footnote 2: Suggested by Indrajit Bhattacharya, TCS Research
4. **Will ILLS scale to large networks?** Computationally, ILLS does not appear to have any higher overheads than standard gradient based backpropagation. The mathematical operations required
for gradient computation are replaced by the ones for computing the coefficients of linear regression. However, the numerical stability of the procedure is untested at this time. We also reiterate from remark (a.iii.) above, that parallelising the updates for ILLS might be a challenge.
5. **What about convergence of ILLS?** Step (ii) in Algorithm 1 is a contraction mapping, since it directly solves a linear regression problem. So long as the linear approximation in step (iv) is valid, the update (12) should also lead to a reduction in the residual. Therefore we believe ILLS to have stable convergence, although we do not have a rigorous proof at this time.
In conclusion, ILLS is a mathematical curiosity at the present time. However, it appears to have potential to make an impact in the training tools available for deep learning. There is plenty of work to be done, in terms of analysis as a hierarchical optimisation problem, engineering to make it stable across a range of scales and problems, extensions to different activations and loss functions, and many other directions.
|
2302.02986 | Fitness Dependent Optimizer with Neural Networks for COVID-19 patients | The Coronavirus, known as COVID-19, which appeared in 2019 in China, has
significantly affected global health and become a huge burden on health
institutions all over the world. These effects are continuing today. One
strategy for limiting the virus's transmission is to have an early diagnosis of
suspected cases and take appropriate measures before the disease spreads
further. This work aims to diagnose and show the probability of getting
infected by the disease according to textual clinical data. In this work, we
used five machine learning techniques (GWO_MLP, GWO_CMLP, MGWO_MLP, FDO_MLP,
FDO_CMLP) all of which aim to classify Covid-19 patients into two categories
(Positive and Negative). Experiments showed promising results for all used
models. The applied methods showed very similar performance, typically in terms
of accuracy. However, in each tested dataset, FDO_MLP and FDO_CMLP produced the
best results with 100% accuracy. The other models' results varied from one
experiment to the other. It is concluded that the models on which the FDO
algorithm was used as a learning algorithm had the possibility of obtaining
higher accuracy. However, it is found that FDO has the longest runtime compared
to the other algorithms. The link to the covid 19 models is found here:
https://github.com/Tarik4Rashid4/covid19models | Maryam T. Abdulkhaleq, Tarik A. Rashid, Bryar A. Hassan, Abeer Alsadoon, Nebojsa Bacanin, Amit Chhabra, S. Vimal | 2023-01-06T07:05:37Z | http://arxiv.org/abs/2302.02986v1 | Cite as: Maryam T. Abdulkhaleq, Tarik A. Rashid, Bryar A. Hassan, Abeer Alsadoon, Nebojas Bacanin, Amit Chhabra, S. Vimal, 2023. Fitness dependent optimizer with neural networks for COVID-19 patients, Computer Methods and Programs in Biomedicine Update, Volume 3, 100090, DOI: [https://doi.org/10.1016/i.cmpbup.2022.100090](https://doi.org/10.1016/i.cmpbup.2022.100090)
## Abstract
The Coronavirus, known as COVID-19, which appeared in 2019 in China, has significantly affected global health and become a huge burden on health institutions all over the world. These effects are continuing today. One strategy for limiting the virus's transmission is to have an early diagnosis of suspected cases and take appropriate measures before the disease spreads further. This work aims to diagnose and show the probability of getting infected by the disease according to textual clinical data. In this work, we used five machine learning techniques (GWO_MLP, GWO_CMLP, MGWO_MLP, FDO_MLP, FDO_CMLP) all of which aim to classify Covid-19 patients into two categories (Positive and Negative). Experiments showed promising results for all used models. The applied methods showed very similar performance, typically in terms of accuracy. However, in each tested dataset, FDO_MLP and FDO_CMLP produced the best results with 100% accuracy. The other models' results varied from one experiment to the other. It is concluded that the models on which the FDO algorithm was used as a learning algorithm had the possibility of obtaining higher accuracy. However, it is found that FDO has the longest runtime compared to the other algorithms. The link to the covid 19 models is found here: [https://github.com/Tarik4Rashid4/covid19models](https://github.com/Tarik4Rashid4/covid19models)
Machine Learning, Swarm Intelligence, Fitness Dependent Optimizer, FDO, COVID 19
## 1 Introduction
The emergence of the new Coronavirus at the end of 2019 in Wuhan, China, led to a global crisis and posed a great challenge for the world. Early reports predicted the outbreak of this virus based on its reproduction behaviour [1]. As expected, the virus spread throughout China and its neighbouring countries in less than a month [2]. According to the WHO situation reports, by the end of January 2020 there were more than 9800 confirmed cases and over 200 deaths [3]. The outbreak continued rapidly, reaching over a million confirmed cases and more than 56000 deaths by the beginning of April [4].
The challenges of dealing with this pandemic arise for several reasons, including the ease of transmission from one person to another, as transmission occurs simply by direct contact with an infected person or through droplets produced by sneezing and coughing [1]. What increases the seriousness of the situation is the possibility of transmission before the appearance of symptoms, since the incubation period of the coronavirus ranges between 2 and 14 days, with symptoms appearing approximately 12 days after infection, in addition to some cases that do not show any symptoms at all [5]. These reasons have led governments to impose preventive measures such as quarantine and social distancing. However, such measures cause psychological and economic damage to society [6].
The motivation for this work mainly comes from the impact of the virus on the world and the damage it has caused in health, social, and even economic terms. Recently, many contributions have appeared that employ techniques such as Artificial Intelligence (AI) to deal with this pandemic. One of the most common directions has been using AI to build classifiers that determine whether patients have COVID-19 or how severe their infection is. Classification is simply the grouping of objects into classes or categories; its main purpose is to obtain a predictive model that can identify the class or category under which new data falls. The Artificial Neural Network (ANN) is one of the most common classification methods and has featured prominently in research related to the new pandemic. Neural networks fall under the category of supervised machine learning, which means they require a training process in which the network identifies patterns from data samples with inputs and known outputs, so that it can generalize solutions and provide outputs close to the expected outputs for any given set of input values. This training is done by employing techniques to adjust the weights and thresholds of the neurons to generalize the solutions produced by their outputs [7].
For a long time, the majority of studies that apply artificial neural networks have used gradient optimization techniques to train the network, typically the backpropagation neural network. The backpropagation method has been applied to different applications, such as image processing, function approximation, and pattern recognition. However, this method has some serious drawbacks, most notably its slow convergence and its tendency to get trapped in local solutions [8]. To overcome these problems, several techniques have been developed. The most common solution has been to employ meta-heuristic algorithms [9]. Metaheuristic optimization has attracted many researchers because of features that give it superiority over classical algorithms. One of these features is avoiding the local optimum trap. Metaheuristics are also known for solving multi-objective and nonlinear problems. Many algorithms have been developed in the past two decades, which gives almost unlimited options for devising new techniques that serve the
purpose of classification. There is already a huge number of available techniques for developing a classification model. However, the large diversity of these techniques makes it difficult to select the most efficient one. In addition, there are multiple evaluation criteria, which makes it more challenging to decide whether one technique is better than another [10].
The main contribution of this work is providing three datasets. Each dataset will be tested using five intelligent models to perform automatic classification of COVID-19-infected patients. Two types of neural networks will be tested with three nature-inspired algorithms: the Fitness Dependent Optimizer (FDO), Grey Wolf Optimization (GWO), and Modified Grey Wolf Optimization (MGWO). The applied models aim to predict positive and negative COVID-19 cases. The objective of this work is to provide a system that can classify the diagnosis through AI techniques and algorithms.
The rest of this paper is organized as follows: In section 2, we discuss the previous literature and the proposed methods that deal with COVID-19. Then in section 3, we discuss the preliminaries and the details of each method used in this work, and in section 4, the methodology of creating this work will be explained and how the proposed methods are applied. Section 5 will present the obtained results. Finally, we give our conclusion in section 6.
## 2 Related Work
With the COVID-19 pandemic continuing to spread, many experts from around the world are attempting to learn more about this novel disease and understand its behavior, ways to suppress its spread, and the appropriate treatment. In this section, we highlight the research that harnessed artificial intelligence to find solutions related to the virus, focusing more on the classification to predict the COVID-19 positive and negative cases, as this is the aim of our work. Based on the data type of the reviewed research works, this section will be split into three sections. Section 2.1 describes the literature that proposed the application of analyzing medical image data. Section 2.2 describes the literature that proposed the application of analyzing COVID-19 text data. Section 2.3 describes other applications that were used to deal with COVID-19.
### Artificial Intelligence Applications in Medical Images
Most of the literature that discusses AI applications with COVID-19 deals with medical images, exclusively CT scans and X-ray images. The rapid outbreak of Covid has led to the urgent need
for diagnosis and appropriate treatment of patients. In general, medical images take time for specialists to interpret and diagnose properly, especially chest CT scans, which contain multiple slices. Thus, AI-based diagnosis with medical image classification is highly popular [11]. One such work is by [12], who proposed a deep learning technique to analyze chest radiographs of COVID-19 patients treated in China and the United States. The deep learning method was implemented as a U-Net trained with 22000 radiographs that produces pneumonia probability color maps. The suggested technique aims to be useful in early diagnosis, as well as in longitudinal and long-term monitoring of suspected pneumonia patients, including those with COVID-19 pneumonia.
[13] proposed CoroNet CNN, which is based on the Xception architecture pre-trained on the ImageNet dataset, to detect COVID-19. They performed two types of multi-class classification: the first with four classes (Normal, Covid, Pneumonia bacterial, and Pneumonia viral), and the second with three classes (Normal, Covid, and Pneumonia). The overall model accuracy was 89.6%.
[14] proposed a convolutional neural network that is based on the concatenation of Xception and ResNet50V2. Their model performs multi-classification of X-ray images categorized into three classes: normal, Covid and pneumonia. The overall model accuracy was 91.4%.
Some research works proposed methods for real-time implementation, such as the techniques proposed by [15] and [16]. Both applied a Convolutional Neural Network (CNN) and an Extreme Learning Machine (ELM) to detect COVID-19 from X-ray images in real time. The difference between their methods is the metaheuristic algorithm used to stabilize the ELM. [15] used the Sine-Cosine Algorithm to build their CNN model, which achieved an accuracy of 98.83%. On the other hand, [16] used the Chimp Optimization Algorithm (ChOA) in their model. The accuracy achieved on the same database was 98.25%.
Many other methods were presented by different researchers, all performing classification tasks to detect COVID-19. Some of these proposals are summarized in Table 1. Although most of these models achieve promising results, they still need clinical study and testing.
\begin{table}
\begin{tabular}{c p{9cm} p{4.5cm}} \hline
**Ref.** & **Method and Application Summary** & **Result** \\ \hline
**[17]** & Develop a hybrid deep neural network (HDNN), using computed tomography (CT) and X-ray imaging & \\ \hline
**[18]** & Presenting the CoVIRNet method (COVID Inception-ResNet model), which utilizes chest X-rays to identify COVID-19 patients by performing multiclass classification (COVID-19, Normal, Pneumonia bacterial, and Pneumonia viral) & The best accuracy obtained is 97.29\% \\ \hline
**[19]** & Using convolutional capsule networks (CapsNet) to perform binary classification (normal and Covid) and multi-class classification (normal, Covid, and pneumonia) to detect COVID-19 from chest X-ray images & An accuracy of 97.24\% for binary classification \\ \hline
**[20]** & Presenting the OptCoNet method, which combines a CNN with the Grey Wolf Optimizer for hyperparameter optimization in training the CNN layers, to perform multi-class classification (normal, Covid, and pneumonia) on chest X-ray images & The best accuracy obtained is 97.78\% \\ \hline
**[21]** & Using a convolutional network with a Bayesian algorithm for training and optimization to perform binary classification (normal and Covid) and multi-class classification (normal, Covid, and pneumonia) on chest X-ray images & An accuracy of 100\% for binary classification \\ \hline
**[22]** & DarkNet model, a convolutional network, to perform binary classification (normal and Covid) and multi-class classification (normal, Covid, and pneumonia) on chest X-ray images & An accuracy of 98.08\% for binary classification \\ \hline
**[23]** & Applying Dense Convolutional Networks and transfer learning to perform multi-class classification (normal, Covid, and pneumonia) on chest X-ray images & The best accuracy obtained is 100\% \\ \hline
**[24]** & Transfer learning with CNNs for multi-class classification (normal, Covid, and pneumonia) on chest X-ray images & The best accuracy obtained is 96.78\% \\ \hline
**[25]** & Applying the Decompose, Transfer, and Compose (DeTraC) convolutional neural network to perform multi-class classification (norm 1, norm 2, COVID-19 1, COVID-19 2, SARS 1, and SARS 2) on chest X-ray images & The best accuracy obtained is 93.1\% \\ \hline
**[26]** & Applying a 50-layer residual neural network (ResNet50) to perform multi-class classification (Normal, Covid, Pneumonia bacterial, and Pneumonia viral) on chest X-ray images & The accuracy obtained is 96.23\% \\ \hline
**[27]** & Comparing three convolutional neural networks (ResNet50, InceptionV3, and Inception-ResNetV2) on binary classification (normal and Covid) of chest X-ray images & The accuracy of the ResNet50 model is 98\% and of InceptionV3 is 90\% \\ \hline
**[28]** & Developing a 3D deep learning framework called COVNet, which consists of a ResNet50, to perform multi-class classification & \\ \hline
\end{tabular}
\end{table}
Table 1: Summary of AI-based methods for COVID-19 detection from medical images.
### Artificial Intelligence Applications in Patient Records
Diagnostic research usually uses medical imaging, and very few studies diagnose using patient records compared to image data. Some approaches do not require imaging equipment for COVID-19 tracking and diagnosis. One of these approaches was by [29]. They compared different machine learning techniques (Decision Tree, Extremely Randomized Trees, K-Nearest Neighbors, Logistic Regression, Naive Bayes, Random Forest) in classifying patients into two categories (Normal, Covid) based on information from the patients' blood tests. They also developed a modified random forest method called the three-way Random Forest classifier, which reported higher performance with an accuracy of 86%.
Another work was presented by [30]. They classified clinical reports containing information about the symptoms of COVID-19 and other viruses. They classified their data into four classes (Covid, SARS, ARDS, and both SARS and COVID) using classical machine learning techniques (logistic regression, multinomial naive Bayes, support vector machine, and decision tree). The highest accuracy was obtained by the logistic regression and multinomial naive Bayes models, with 96.2% testing accuracy, followed by the decision tree with 92.5% and the support vector machine with 90.6%.
[31] also applied traditional machine learning methods (logistic regression, decision tree, support vector machine, naive Bayes, and artificial neural network) to classify patients into positive and negative COVID-19 infections based on information about the patients' chronic diseases. The results showed that the decision tree model had the highest accuracy at 94.99%, followed by logistic regression with 94.41%, naive Bayes with 94.36%, support vector machine with 92.4%, and finally the artificial neural network with 89.2%.
In the last three discussed research papers, there is a discrepancy in the results even though the same models are used in each work. This discrepancy is expected because each work used a different dataset and a different number of classes.
We observed that papers that discuss diagnosing with datasets that contain patient records use only traditional machine learning techniques, and there was no attempt to develop more effective and sophisticated models like the ones that are presented in image classification. In our work, we aim to build a reliable and efficient model to predict the negative and positive COVID-19 based on the patient records dataset.
### Other Artificial Intelligence Applications
Many interesting solutions have been proposed that do not rely solely on diagnosing the disease based on RT-PCR data and chest X-ray tests. A solution was presented by [32] to detect the disease by analyzing audio data using a Gradient Boosting Machine-based classifier. The methods used in this research were the LGM classifier, Random Forest (RF), SVM, and K-Nearest Neighbor (KNN). The results showed an overall average accuracy above 97%.
Detecting protective measures such as masks, goggles, and protective clothing is also a very important step in the fight against COVID-19. [33] proposed a very interesting solution that uses unmanned vehicles (UV) to build a map of the real environment, detect protective measures, measure body temperature, and perform other advanced tasks.
## 3 Preliminaries
In this research, the Grey Wolf Optimizer, Modified Grey Wolf Optimizer, and Fitness Dependent Optimizer have been employed to find the optimal weights and biases for a neural network to predict COVID-19 negative and positive patients. The details of each algorithm are explained in this section.
### Grey Wolf Optimization
Grey Wolf Optimization is a swarm intelligence algorithm and one of the most powerful algorithms; it was proposed by Mirjalili et al. in 2014 [34]. The GWO is a nature-inspired algorithm, inspired by the behaviour of grey wolves during hunting. In nature, grey wolves seek out the most efficient approach to hunting prey using a certain procedure. The GWO algorithm uses the same process that grey wolves use in nature to organize the diverse responsibilities in the pack, which follows the pack hierarchy. The social hierarchy is interpreted as the optimality of solutions: the best solution is the alpha (\(\alpha\)); consequently, the second- and third-best solutions are the beta (\(\beta\)) and delta (\(\delta\)), followed by the remaining solutions, categorized as omega (\(\omega\)).
During the hunt, the grey wolf will first encircle the prey. Equations (1) and (2) model the updating of the grey wolf position \(\vec{X}(t)\) around the prey \(\vec{X}_{p}(t)\) [34]:
\[\vec{D}=\left|\vec{C}.\vec{X}_{p}(t)-\vec{X}(t)\right| \tag{1}\]
\[\vec{X}(t+1)=\vec{X}_{p}(t)-\vec{A}\cdot\vec{D} \tag{2}\]
Where \(t\) is the current iteration, \(\vec{A}\) and \(\vec{C}\) are coefficient vectors, \(\vec{X}_{p}\) is the position vector of the prey, and \(\vec{X}\) is the position vector of the grey wolf.

The \(\vec{A}\) and \(\vec{C}\) vectors are given by the equations [34]:
\[\vec{A}=2\vec{a}\cdot\vec{r}_{1}-\vec{a} \tag{3}\]

\[\vec{C}=2\cdot\vec{r}_{2} \tag{4}\]

Where the components of \(\vec{a}\) are linearly decreased from 2 to 0 throughout the iterations, and \(\vec{r}_{1}\), \(\vec{r}_{2}\) are random vectors in [0, 1].
With equations (3) and (4), a grey wolf (search agent) can update its position to any location around the prey. By adjusting the values of the \(\vec{A}\) and \(\vec{C}\) vectors, the best position can be reached, as shown in Figure 1, where the wolf at position \((X,Y)\) can relocate itself around the prey \((X^{*},Y^{*})\) using the proposed equations.
In nature, grey wolves can recognize the location of their prey, but when it comes to mathematical application, the location of the prey (the optimum solution) is unknown. In this case, we assume that the alpha, beta, and delta have better knowledge about the potential location of prey. Therefore, the solutions that are obtained by these three search agents will be saved and the other agents (omega) will update their positions according to the position of the saved ones. This process is represented by the following equations where the distance between alpha, beta, and delta is calculated [34]:
\[\vec{D}_{\alpha}=\left|\vec{C}_{1}\cdot\vec{X}_{\alpha}-\vec{X}\right| \tag{5}\]

\[\vec{D}_{\beta}=\left|\vec{C}_{2}\cdot\vec{X}_{\beta}-\vec{X}\right| \tag{6}\]

\[\vec{D}_{\delta}=\left|\vec{C}_{3}\cdot\vec{X}_{\delta}-\vec{X}\right| \tag{7}\]

Figure 1: Position vectors and their possible next locations in 2D space.
Where \(\vec{X}_{\alpha},\vec{X}_{\beta},\) and \(\vec{X}_{\delta}\) are the positions of alpha, beta, and delta. \(\vec{X}\) is the current solution's position [34].
\[\vec{X}_{1}=\vec{X}_{\alpha}-\vec{A}_{1}\cdot\vec{D}_{\alpha} \tag{8}\]
\[\vec{X}_{2}=\vec{X}_{\beta}-\vec{A}_{2}\cdot\vec{D}_{\beta} \tag{9}\]
\[\vec{X}_{3}=\vec{X}_{\delta}-\vec{A}_{3}\cdot\vec{D}_{\delta} \tag{10}\]
\[\vec{X}(t+1)=\frac{\vec{X}_{1}+\vec{X}_{2}+\vec{X}_{3}}{3} \tag{11}\]
The main controlling parameter of GWO that promotes exploration is the \(\vec{C}\) vector. The value of this parameter is random in the interval [0, 2]. Its purpose is to give the prey a random weight depending on the position of a wolf. This parameter also makes the process of reaching the prey harder and farther (\(C>1\)) or easier and closer (\(C<1\)). This component is very helpful in avoiding local optima stagnation, especially in the final iterations.
Another controlling parameter that drives exploration is \(\vec{A}\). Its value is defined based on the parameter \(a\), which linearly decreases from 2 to 0. If the random values of \(\vec{A}\) are within the \([-1,1]\) interval, then the next position of a wolf can be anywhere between its current position and the prey's position. Thus, when \(\left|\vec{A}\right|<1\) the wolves attack the prey, and when \(\left|\vec{A}\right|>1\) the wolves move away from the prey in search of a better one (see Figure 2).

Figure 2: Attacking prey versus searching for prey.
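To make the update concrete, the following is a minimal NumPy sketch of one GWO position update, assembled from (3)-(11); the function and variable names are our own, and this is an illustration rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gwo_update(X, X_alpha, X_beta, X_delta, a):
    """One GWO position update following equations (5)-(11).

    X                        : (dim,) current wolf position
    X_alpha, X_beta, X_delta : (dim,) best three positions found so far
    a                        : scalar, linearly decreased from 2 to 0 over iterations
    """
    X_new = np.zeros_like(X)
    for leader in (X_alpha, X_beta, X_delta):
        A = 2 * a * rng.random(X.shape) - a    # (3)
        C = 2 * rng.random(X.shape)            # (4)
        D = np.abs(C * leader - X)             # (5)-(7)
        X_new += leader - A * D                # (8)-(10)
    return X_new / 3.0                         # (11)
```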
### Modified Grey Wolf Optimization
In 2019, [35] proposed a modified version of the Grey Wolf Optimizer. The proposed algorithm has two simple modifications. The first is that, instead of having four groups of wolves (Alpha (\(\alpha\)), Beta (\(\beta\)), Delta (\(\delta\)), and Omega (\(\omega\))), an extra group called Gamma (\(\gamma\)) is added. With this extra group, the omega wolves update their positions with respect to the positions of four wolves (Alpha, Beta, Delta, and Gamma) instead of three.
The second modification concerns the equations defining the step size of the omega wolves, shown for the standard GWO in equations (8), (9), (10), and (11). In this modified version, another equation is added to calculate the distance for gamma [35]:
\[\vec{D}_{\gamma}=\left|\vec{C}_{4}.\vec{X}_{\gamma}-\vec{X}\right| \tag{12}\]
Where \(\vec{X}_{\gamma}\) is the position of gamma and \(\vec{X}\) is the current solution.
Then the positions with respect to alpha, beta, delta, and gamma are calculated as follows [35]:
\[\vec{D}_{avg}=\frac{\vec{D}_{\alpha}+\vec{D}_{\beta}+\vec{D}_{\delta}+\vec{D}_{\gamma}}{4} \tag{13}\]
\[\vec{X}_{1}=\vec{X}_{\alpha}-\vec{A}_{1}.(\vec{D}_{avg}) \tag{14}\]
\[\vec{X}_{2}=\vec{X}_{\beta}-\vec{A}_{2}.(\vec{D}_{avg}) \tag{15}\]
\[\vec{X}_{3}=\vec{X}_{\delta}-\vec{A}_{3}.(\vec{D}_{avg}) \tag{16}\]
\[\vec{X}_{4}=\vec{X}_{\gamma}-\vec{A}_{4}.(\vec{D}_{avg}) \tag{17}\]
And finally, the equation that represents the current solution's final position is [35]:
\[\vec{X}\left(t+1\right)=\frac{\vec{X}_{1}+\vec{X}_{2}+\vec{X}_{3}+\vec{X}_{4}}{4} \tag{18}\]
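The modification relative to the GWO sketch above is small: compute the four distances, average them per (13), and average four leader-based candidates per (14)-(18). The following is again only an illustrative sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def mgwo_update(X, leaders, a):
    """One MGWO update with four leaders (alpha, beta, delta, gamma), (12)-(18).

    X       : (dim,) current wolf position
    leaders : list of four (dim,) arrays, the best positions found so far
    a       : scalar, linearly decreased from 2 to 0 over iterations
    """
    distances = []
    for leader in leaders:
        C = 2 * rng.random(X.shape)                  # (4)
        distances.append(np.abs(C * leader - X))     # (5)-(7) and (12)
    D_avg = sum(distances) / 4.0                     # (13)
    X_new = np.zeros_like(X)
    for leader in leaders:
        A = 2 * a * rng.random(X.shape) - a          # (3)
        X_new += leader - A * D_avg                  # (14)-(17)
    return X_new / 4.0                               # (18)
```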
### Fitness Dependent Optimizer
The Fitness Dependent Optimizer is one of the most recently proposed nature-inspired algorithms. It was developed by Jaza Abdullah and Tarik Rashid in 2019. The algorithm was inspired by the behaviour of bee swarms during reproduction. The FDO algorithm mimics the process by which scout bees search for a suitable hive among many potential hives, mirroring the search for suitable solutions among possible solutions [36]. Compared to GWO, the FDO has a simpler concept and is easier to understand. The FDO process can be divided into two parts: the searching process, where the search agents attempt to find the best solution, and the movement process, where the scout bee updates its position [37]. These two processes are explained in detail in the coming sections.
#### 3.3.1 Scout bee searching process
The main essence of this process is to find suitable new hives "solutions". As with the GWO algorithm, this algorithm uses search agents to search for new hives known as scout bees. Finding a new solution in this algorithm is represented by a scout bee position. At the beginning of the
FDO execution, the locations of the artificial scout bees are initialized randomly in the search space. Then, throughout the execution, the global best solution is tracked by the algorithm. The scout bees use a combination of a random walk and a fitness weight mechanism to search for new hives (solutions) in the search space. The scout bees keep searching for better solutions within the determined boundaries until the search ends. If a better solution is found, the previous one is simply discarded; if no better solution is found, the former solution is kept.
#### 3.3.2 Scout bee movement process
In FDO, the scout bees update their position to obtain a better solution. The scout bees' position is updated by adding pace to the current position as shown in equation (19) [36]:
\[X_{i,t+1}=X_{i,t}+pace \tag{19}\]
Where \(i\) is the current search agent, \(t\) is the current iteration, \(X\) is the artificial scout bee (search agent), and \(pace\) is the movement rate and direction of the artificial scout bee.
The pace depends largely on the fitness weight (\(fw\)); nevertheless, the direction of the pace is completely dependent on a random mechanism. The fitness weight (\(fw\)) is determined according to equation (20) [36]:
\[fw=\left|\frac{x^{*}_{i,t\,fitness}}{x_{i,t\,fitness}}\right|-wf \tag{20}\]
Where \(x^{*}_{i,t\,fitness}\) represents the fitness value of the best global solution, \(x_{i,t\,fitness}\) is the fitness value of the current solution, and \(wf\) is the weight factor.
The weight factor (\(wf\)) is used to control \(fw\) and its value is either 0 or 1. If \(wf\) is equal to 1, then it represents a high level of convergence and a low chance of coverage. and if \(wf\) equals 0, then it will be neglected because it will not affect the equation (20). Setting the value of \(wf\) to 0 doesn't necessarily make the search more stable. In some cases, the opposite occurs as the fitness function value depends on the problem.
FDO has to consider some settings for \(fw\) to avoid unacceptable cases, such as making sure that the \(fw\) value is in the [0, 1] range, as well as avoiding the division by zero that can occur if the value of \(x_{i,t\,fitness}\) is 0. Therefore, the following rules, represented in equation (21), are used [36]:

\[pace=\begin{cases}x_{i,t}\cdot r,&\text{if }fw=1\text{ or }fw=0\text{ or }x_{i,t\,fitness}=0\\ \left(x_{i,t}-x^{*}_{i,t}\right)\cdot fw\cdot(-1),&\text{if }0<fw<1\text{ and }r<0\\ \left(x_{i,t}-x^{*}_{i,t}\right)\cdot fw,&\text{if }0<fw<1\text{ and }r\geq 0\end{cases} \tag{21}\]
Where \(r\) is a random number in the range [-1, 1], \(x_{i,t}\) is the current solution, and \(x^{*}_{i,t}\) is the global best solution achieved so far.
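A minimal sketch of one scout-bee move, following (19)-(21); the names and the handling of fitness values outside the cases listed in (21) are our own assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def fdo_move(x, x_best, fit, fit_best, wf=0.0):
    """One artificial scout-bee move following equations (19)-(21).

    x, x_best     : (dim,) current and global-best positions
    fit, fit_best : scalar fitness values of x and x_best
    wf            : weight factor, 0 or 1
    """
    r = rng.uniform(-1.0, 1.0)                 # random direction in [-1, 1]
    if fit == 0:
        fw = 1.0                               # guard against division by zero in (20)
    else:
        fw = abs(fit_best / fit) - wf          # (20)
    if fw == 1.0 or fw == 0.0 or fit == 0:
        pace = x * r                           # random-walk branch of (21)
    elif r < 0:                                # 0 < fw < 1 branches of (21); values
        pace = (x - x_best) * fw * -1.0        # outside [0, 1] are treated the same
    else:                                      # way in this sketch
        pace = (x - x_best) * fw
    return x + pace                            # (19)
```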
## 4 Research Methodology
In this work, five models have been applied and compared, each with a different network architecture or training algorithm. Each model was tested on three datasets. The methodology consists of defining the dataset, preparing the data (including translation or re-encoding where needed), selecting features, applying the classification model, and making statistical comparisons (see Figure 3).
### Data collection
Building a database is the most important step in machine learning. Three different datasets have been collected in this work. The first dataset was collected by [38]. This data is an original Brazilian COVID-19 dataset that contains early-stage symptoms, comorbidities, and demographic information for patients tested in Brazil. The dataset was compiled with records from 26 Brazilian states and federations. Testing was done by viral and antibody tests.
Figure 3: Research methodology.
The second dataset, which is available in [39], also provides early-stage symptoms and comorbidities based on various WHO guidelines, as well as information about the preventive measures followed by the patient.
The third dataset contains demographic and clinical data, as well as results of the RT-PCR test for COVID-19, for patients with a viral respiratory diagnosis in Mexico, as reported by the General Directorate of Epidemiology. The dataset is available in [40].
### Feature Selection and Data Pre-processing
After data collection, the relevant features were selected. Some records were removed from the datasets, either because they were duplicated or because they contained missing or ignored information. In addition, the dataset containing records of patients from Mexico was translated from Portuguese to English. Some of the datasets originally represented the responses with "yes" for positive and "no" for negative. To prepare the data for use in the models, we represented a positive input feature with 0 and a negative input feature with 1. As for the target, 1 represented "positive" and 2 represented "negative".
### Classification Model
In this section, we describe the methodology of the proposed models. This includes the architecture used to build the models, which is covered in Section 4.3.1, and the training methods that are described in Section 4.3.2.
#### 4.3.1 The architecture of the neural network
The architecture of a neural network is defined by three elements: the number of layers of processing elements or nodes (including the input and output layers), the number of hidden layers, and the number of nodes in each layer. Determining the topology of a neural network therefore amounts to controlling the number of hidden layers and the number of neurons in each layer [41].
In all of the proposed models, we used a single hidden layer whose number of neurons is determined by the number of features in the dataset (see Table 2). The number of neurons in the hidden layer is set by the following rule:
\[Hno=2*Ino+1 \tag{22}\]
where \(Hno\) is the number of neurons in the hidden layer and \(Ino\) is the number of neurons in the input layer.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Dataset & Number of input neurons (features) & Number of hidden neurons \\ \hline Dataset 1 & 10 & 21 \\ \hline Dataset 2 & 18 & 37 \\ \hline Dataset 3 & 13 & 27 \\ \hline \end{tabular}
\end{table}
Table 2: Number of input and hidden neurons based on the dataset used in each model.
In this work, we applied two types of neural networks. The first is a basic feed-forward artificial neural network, which consists of three interconnected layers: one input layer, one hidden layer, and one output layer. The second is a cascade feed-forward artificial neural network. In both types, connections run from the input layer through each layer to the successive layers; the difference is that the cascade ANN has an extra connection from the input layer directly to the output layer (see Figure 4). These additional connections enable the network to learn associations of high complexity and improve the speed at which the network learns the desired relationship [42][43].
#### 4.3.2 Artificial Neural Network Learning Method
The purpose of training is to find the set of weight and bias values that provides the best classification accuracy. In this work, each model uses one of three metaheuristic algorithms (GWO, modified GWO, or FDO) as its training algorithm.
Since the best values for the weights and biases need to be found, they are defined as the optimization variables. After defining the variables, we need an objective function for the algorithm. In this case, we use one of the most common evaluation metrics for neural networks, the Mean Square Error (MSE). The MSE measures the squared difference between the desired output and the value obtained from the ANN model. For a single training sample, it is calculated as follows:
\[MSE=\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2} \tag{23}\]
where \(n\) is the number of outputs, \(i\) indexes the output units, \(y_{i}\) is the desired output, and \(\hat{y}_{i}\) is the value obtained from the model.
To evaluate the performance of the model, we calculate the MSE over all training samples and then take its average. Based on the average MSE value passed to the optimization algorithm, the model adapts itself, changing the weights and biases to minimize the average MSE over all training samples.
Figure 4: Difference between feed-forward neural network and cascade feed-forward neural network.
The average MSE is calculated as follows:
\[MSE_{avg}=\sum_{j=1}^{m}\frac{\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}}{m} \tag{24}\]
where \(m\) is the number of training samples and \(j\) indexes the training samples.
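As an illustration, the following Python sketch evaluates the average-MSE objective of equations (23)-(24) for a single-hidden-layer network whose weights and biases are packed into one vector, the way a metaheuristic search agent would hold them. The sigmoid activation, the packing order, and binary targets in [0, 1] are assumptions of this sketch, as the paper does not specify them.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def avg_mse(agent, X, y, n_in, n_hid):
    """Average MSE over all training samples, Eq. (24).
    agent packs W1 (n_in x n_hid), b1, w2 (n_hid), b2 into one vector."""
    i = n_in * n_hid
    W1 = agent[:i].reshape(n_in, n_hid)
    b1 = agent[i:i + n_hid]
    w2 = agent[i + n_hid:i + 2 * n_hid]
    b2 = agent[-1]
    hidden = sigmoid(X @ W1 + b1)             # hidden-layer activations
    y_hat = sigmoid(hidden @ w2 + b2)         # network output per sample
    return np.mean((y - y_hat) ** 2)          # squared error averaged over samples
```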
Figure 5 demonstrates how the optimization algorithm is applied to help the neural network update its weights to reach the highest accuracy.
### Result Evaluation
As mentioned in the previous section, the MSE value is one of the metrics used to evaluate the applied models; models with a very small MSE (close to zero) are considered good models.
Another way to evaluate the models is the confusion matrix, which summarizes the correct and incorrect predictions of the classification. From the confusion matrix we can derive the metrics of sensitivity, specificity, Positive Predictive Value (PPV), and Negative Predictive Value (NPV). These values are calculated to gain a better understanding of the model's reliability [35].
Figure 5: Training methods of the proposed models.
Sensitivity, also known as the true positive rate (TPR), measures the proportion of truly positive samples that give a positive result. Specificity, also referred to as the true negative rate (TNR), measures the proportion of truly negative samples that give a negative result. The positive predictive value (PPV) is the probability that a sample returning a positive result is indeed positive, and the negative predictive value (NPV) is the probability that a sample returning a negative result is indeed negative. Each of these four values lies between 0 and 1, and values closer to 1 indicate better results; in other words, 1 is the best value while 0 is the worst [35].
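For reference, the following minimal Python sketch derives these four metrics from confusion-matrix counts, using the GWO_MLP testing counts on dataset 1 from Table 6 as an example.

```python
def confusion_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, PPV, NPV and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)              # true positive rate
    specificity = tn / (tn + fp)              # true negative rate
    ppv = tp / (tp + fp)                      # positive predictive value
    npv = tn / (tn + fn)                      # negative predictive value
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, ppv, npv, accuracy

# GWO_MLP testing on dataset 1: all 304 positives correct, 299 of 321 negatives correct
print(confusion_metrics(tp=304, fn=0, tn=299, fp=22))
# -> (1.0, 0.9315..., 0.9325..., 1.0, 0.9648)
```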
## 5 Implementation and Result
In this section, we will discuss the achieved results from the implemented architecture and demonstrate the training and testing performance. Section 5.1 describes the experiment environment and the applied framework. Section 5.2 shows the overall performance of the proposed model. The rest of the sections (5.3, 5.4, and 5.5) elaborate on the obtained results of each model in each dataset.
### Experimental Setup
We used a Windows system with 16.0 GB of RAM and a 2.00 GHz processor to perform this work. The MATLAB platform was used to build and run the machine learning classification. The classification model was tested on three different datasets. Each dataset was split in an 80:20 ratio: 80% of the data was dedicated to training and the remaining 20% to testing.
### Overview
Table 5 shows the correct classification rate for each model on the three datasets. The first dataset contains a total of 3,128 samples: 2,503 dedicated to training and the other 625 to testing. The second dataset contains 2,102 samples: 1,683 for training and 419 for testing. Lastly, dataset 3 has the highest number of samples, with a total of 129,581: 103,665 dedicated to training and 25,916 to testing.
Table 5 presents the correct classification rates for training and testing for each model and each dataset. It also shows the problem dimension (the total number of connections), the number of search agents used, and the maximum number of iterations of the search algorithm. In the experiments, we made sure to test all the models under the same conditions, using the same number of iterations and the same number of search agents for all algorithms in all models: ten search agents were used in each algorithm, and the maximum number of iterations was 50 (see Tables 3-4). As shown in Table 5, the results obtained by all models are close. However, the models trained with the FDO algorithm have a higher chance of producing accurate results, as they achieved 100% accuracy in all experiments. Regarding run time, however, the FDO takes considerably longer than the GWO. The GWO_MLP model takes the shortest time among the models, because the GWO algorithm is in general faster than the FDO algorithm and the MLP architecture has fewer connections than the CMLP.
### Dataset 1 Results
Table 6 shows the performance of the proposed models in terms of Mean Square Error (MSE) and the classification rate for both testing and training. The highest classification rate for training, 100%, was obtained by FDO_MLP, followed by FDO_CMLP with 99.88%, MGWO_MLP with 95.92%, GWO_CMLP with 95.32%, and lastly GWO_MLP with the lowest rate of 92.84%. The highest classification rate for testing, 100%, was obtained by both FDO_MLP and FDO_CMLP, followed by MGWO_MLP with 97.76%, GWO_CMLP with 97.28%, and lastly GWO_MLP with the lowest rate of 96.48%. In addition to the classification accuracy, Table 7 shows the evaluation metrics of the confusion matrices of the proposed models for dataset 1. Figure 6 shows the ROC curve results of all the proposed models tested on dataset 1.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Training/Testing} & \multicolumn{3}{c|}{Positive case} & \multicolumn{3}{c|}{Negative case} & \multirow{2}{*}{MSE} & \multirow{2}{*}{Rate \%} \\ \cline{3-8} & & Samples & Correct & Rate \% & Samples & Correct & Rate \% & & \\ \hline GWO\_MLP & Training & 1260 & 1163 & 92.3016 & 1243 & 1161 & 93.4031 & 0.0027385 & 92.8486 \\ \cline{2-10} & Testing & 304 & 304 & 100 & 321 & 299 & 93.1464 & 0.002745 & 96.48 \\ \hline MGWO\_MLP & Training & 1260 & 1230 & 97.619 & 1243 & 1171 & 94.2076 & 0.0018229 & 95.9249 \\ \cline{2-10} & Testing & 304 & 304 & 100 & 321 & 307 & 95.6386 & 0.001929 & 97.76 \\ \hline GWO\_CMLP & Training & 1260 & 1210 & 96.0317 & 1243 & 1176 & 94.6098 & 0.0023577 & 95.3256 \\ \cline{2-10} & Testing & 304 & 304 & 100 & 321 & 304 & 94.704 & 0.0024762 & 97.28 \\ \hline FDO\_MLP & Training & 1260 & 1260 & 100 & 1243 & 1243 & 100 & 0.0022916 & 100 \\ \cline{2-10} & Testing & 304 & 304 & 100 & 321 & 321 & 100 & 0.0024932 & 100 \\ \hline FDO\_CMLP & Training & 1260 & 1257 & 99.7619 & 1243 & 1243 & 100 & 0.0020373 & 99.8801 \\ \cline{2-10} & Testing & 304 & 304 & 100 & 321 & 321 & 100 & 0.001997 & 100 \\ \hline \end{tabular}
\end{table}
Table 6: Performance of the proposed models on dataset 1.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Model & Sensitivity & Specificity & PPV & NPV & Accuracy \\ \hline GWO\_MLP & 1 & 0.93 & 0.93 & 1 & 96.48\% \\ \hline MGWO\_MLP & 1 & 0.95 & 0.95 & 1 & 97.76\% \\ \hline GWO\_CMLP & 1 & 0.94 & 0.94 & 1 & 97.28\% \\ \hline FDO\_MLP & 1 & 1 & 1 & 1 & 100\% \\ \hline FDO\_CMLP & 1 & 1 & 1 & 1 & 100\% \\ \hline \end{tabular}
\end{table}
Table 7: Evaluation of the confusion matrices of the proposed models for dataset 1.
### Dataset 2 Results
Table 8 shows the performance of the proposed models in terms of Mean Square Error (MSE) and the classification rate for both testing and training. The highest classification rate for training, 100%, was obtained by FDO_MLP and FDO_CMLP, followed by GWO_CMLP with 99.88%, GWO_MLP with 99.82%, and lastly MGWO_MLP with the lowest rate of 99.76%. The highest classification rate for testing, 100%, was obtained by both FDO_MLP and FDO_CMLP, followed by GWO_MLP and GWO_CMLP with 99.76%, and finally MGWO_MLP with the lowest rate of 99.04%. In addition to the classification accuracy, Table 9 shows the evaluation metrics of the confusion matrices of the proposed models for dataset 2. Figure 7 shows the ROC curve results of all the proposed models tested on dataset 2.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Model & Sensitivity & Specificity & PPV & NPV & Accuracy \\ \hline GWO\_CMLP & 0.99 & 1 & 1 & 0.99 & 99.76\% \\ \hline FDO\_MLP & 1 & 1 & 1 & 1 & 100\% \\ \hline FDO\_CMLP & 1 & 1 & 1 & 1 & 100\% \\ \hline \end{tabular}
\end{table}
Table 9: Evaluation of the confusion matrices of the proposed models for dataset 2.
### Dataset 3 Results
Table 10 shows the performance of the proposed models in terms of Mean Square Error (MSE) and the classification rate for both testing and training. The performance on this dataset was very promising, as it is the largest dataset used in this work. In general, the obtained classification rates are between 99% and 100%. In addition to the classification accuracy, Table 11 shows the evaluation metrics of the confusion matrices of the proposed models for dataset 3. Figure 8 shows the ROC curve results of all the proposed models tested on dataset 3.
## 6 Conclusion
Given the rapid spread of COVID-19, which infected millions of people within a few months, the need for a quick and accurate diagnosis of patients has become urgent. Early diagnosis helps to reduce the spread of the disease and the transmission of infection to other people. ML is one of the non-clinical methods considered an alternative means of diagnosing infected patients or predicting disease. The goal of developing such technologies is to reduce the burden that the epidemic has placed on health centers around the world. In this work, we proposed five different ML models for COVID-19 prediction. The purpose of these models is to classify infected and non-infected cases. We relied on two different ANN architectures for analysing COVID-19-related infection data. We also used different training algorithms and different datasets to examine the performance of each model under each type of setting and experiment.
According to the obtained results, FDO_MLP and FDO_CMLP outperformed the other models in all the performed experiments, with an accuracy that reached 100%. However, the FDO algorithm takes more run time than GWO on all three datasets. Despite that, in the medical field the priority is the accuracy of the system, which makes the FDO preferable. The performance of the other models varies slightly according to the type of data applied to them. We observed no clear difference between the models built on MLP and those built on CMLP, because the obtained results vary from one experiment to another.
The classification models presented in this research show very high precision. As a first step in building a predictive system that can be used in the medical field, the obtained results are promising. The main benefit of developing a classification model is to identify infected or high-risk patients easily, which is useful in controlling this infection. Despite that, the medical field is very sensitive and requires a highly efficient and reliable system. It is worth noting that obtaining 100% accuracy is uncommon in classification models. This could be due to several reasons, including the number of features used in the classification and how easy the dataset is to classify. Also, the datasets used do not contain noisy data, which can lead to generalization errors.
Therefore, more tests must be done to ensure the reliability of these models. This research can be expanded or developed by doing the following as future work:
Figure 8: ROC curve results for dataset 3.
* Obtaining an extended database that contains more features and more samples. Considering more features related to COVID-19 will make the prediction process more efficient and comprehensive.
* Building other neural network models that could use the same training algorithms to create a system that can deal with different types of data, such as image, audio, and time-series data. Processing this type of data will provide a more intelligent system that has more benefits, especially in the health sector.
* Exploring new methods will lead to producing more reliable and sophisticated models. For this reason, we need to develop the applied algorithms by modifying them or hybridizing them with other classical or metaheuristic algorithms to produce other models.
|
2301.05549 | On the explainability of quantum neural networks based on variational
quantum circuits | Ridge functions are used to describe and study the lower bound of the
approximation done by the neural networks which can be written as a linear
combination of activation functions. If the activation functions are also ridge
functions, these networks are called explainable neural networks.
In this paper, we first show that quantum neural networks which are based on
variational quantum circuits can be written as a linear combination of ridge
functions. Consequently, we show that the interpretability and explainability
of such quantum neural networks can be directly considered and studied as an
approximation with the linear combination of ridge functions. | Ammar Daskin | 2023-01-12T18:46:28Z | http://arxiv.org/abs/2301.05549v2 | # On the explainability of quantum neural networks based on variational quantum circuits
###### Abstract
Ridge functions are used to describe and study the lower bound of the approximation done by the neural networks which can be written as a linear combination of activation functions. If the activation functions are also ridge functions, these networks are called explainable neural networks.
In this paper, we first show that quantum neural networks which are based on variational quantum circuits can be written as a linear combination of ridge functions. Consequently, we show that the interpretability and explainability of such quantum neural networks can be directly considered and studied as an approximation with the linear combination of ridge functions.
Quantum neural networks, explainability, interpretability, ridge functions
## I Introduction
Neural networks have applications in almost every field of science, ranging from health to banking. The ability to interpret the result of a model and explain its learning behavior may be deemed important, especially in critical industries such as medicine and health care [1, 2, 3]. The limitations of the approximation rates of classical neural networks can be better understood by using linear combinations of ridge functions as approximations to neural networks.
The power and the limitations of quantum neural networks are yet to be fully understood. In this paper, we show that quantum neural networks can be written as a sum of ridge functions. Therefore, the math and methodologies that are used to understand classical neural networks can be used to study quantum ones.
### _Approximation with ridge functions_
For a random variable \(y\), if we have the observations \(y_{1},\ldots,y_{n}\) at points \(x_{1},\ldots,x_{n}\), a standard regression model can be described by \(y_{i}=\hat{y}_{i}+r_{i}\), where \(\hat{y}_{i}\) defines the dependence of \(y_{i}\) on \(x_{i}\). When the \(x_{i}\) are univariate real values, the assumption is that the dependence is smooth. This leads to the following estimate for the regression model [4]:
\[E(y\mid x)=f(x), \tag{1}\]
where \(f\) is a smoothing function. In linear smoothing, \(\mathbf{\hat{y}}=(\hat{y}_{1},\ldots,\hat{y}_{n})^{T}\) can be written in the form of matrix vector transformation: \(\mathbf{\hat{y}}=S\mathbf{y}\), where \(S\) is the smoother matrix that does not depend on \(\mathbf{y}\). When there are more than one predictors, estimating the regression surface is hard because of the curse of dimensionality (the data sparseness in high dimensions)[5]. The general approach is to use the one-dimensional smoother as the building block in an additive model [4]. Given predictors \(x_{ij}\)s for each \(y_{i}\) outcome, i.e. \(\{y_{i},x_{i1},\ldots,x_{ip}\}\), the additive model can be described as:
\[E(y_{i}|x_{i1},\ldots,x_{ip})=\alpha+\sum_{j=1}^{p}f_{j}(x_{ij})+\epsilon. \tag{2}\]
where \(\epsilon\) is the inherent error, \(\alpha\) is a constant parameter, and the \(f_{j}\)s represent unspecified smooth functions. Fitting can be done by using the backfitting algorithm [5, 6], which in matrix form is equivalent to the Gauss-Seidel method in numerical linear algebra [7].
Generalizing this model leads to the projection pursuit model proposed in [5] where the regression surface is predicted by a linear combination of ridge functions as in the following form:
\[f(\mathbf{x})=\sum_{k=1}^{K}f_{k}(\mathbf{w_{k}}\cdot\mathbf{x}). \tag{3}\]
Here, the \(\mathbf{w_{k}}\)s represent weight vectors (projection indices) and the \(f_{k}\)s are ridge functions [8, 9, 10, 11]: a ridge function is a multivariate function \(\mathcal{R}^{d}\rightarrow\mathcal{R}\) of the form \(\mathbf{x}\mapsto g(\mathbf{w}\cdot\mathbf{x})\) for a univariate function \(g\) and a fixed vector \(\mathbf{w}\). The vector \(\mathbf{w_{k}}\) is called the direction, and \(f_{k}\) is constant on the hyperplanes whose normal vectors are parallel to this direction. Ridge functions are used in approximation theory, partial differential equations, and neural networks. For instance, a feed-forward neural network can be defined as [8]:
\[\sum_{k}\gamma_{k}\sigma\left(\mathbf{w_{k}}\cdot\mathbf{x}+b_{k}\right), \tag{4}\]
where \(b_{k}\), \(\gamma_{k}\), and \(\mathbf{w_{k}}\) represent the parameters that describe the neural network, and \(\sigma\) is a univariate function (the activation function). The degree of approximation by \(\sigma\) functions can be bounded by the degree of approximation by ridge functions (we refer the reader to Ref. [8] for the properties and other uses of ridge functions). The lower bound of the approximation of neural networks can also be studied through the relations of the activation functions with ridge functions (e.g., [12, 13, 14]).
If \(\sigma\) is chosen as a ridge function, such networks have recently been called explainable neural networks [15]: e.g., an example architecture with three important structures, a projection layer, sub-networks, and a combination layer, is described to learn the following:
\[f(\mathbf{x})=\mu+\sum_{k=1}^{K}\gamma_{k}f_{k}(\mathbf{w_{k}}\cdot\mathbf{x}). \tag{5}\]
Here, \(\mu\) and the \(\gamma_{k}\)s are shift and scaling parameters, respectively. In comparison to standard neural networks, the learning in this model can be understood through its "explainable" constituents: linear projections and univariate functions (in other words, the mechanisms used to learn the model can be clearly explained by studying its constituents, which are ridge functions).
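As a minimal numerical sketch of Eq. (5), the following Python snippet evaluates such an explainable model for arbitrary choices of the projection directions and univariate ridge functions; all names, and the particular ridge functions used, are illustrative.

```python
import numpy as np

def explainable_net(x, W, gamma, mu, ridge_fns):
    """Eq. (5): f(x) = mu + sum_k gamma_k * f_k(w_k . x)."""
    z = W @ x                                  # projection layer: one w_k per row
    return mu + sum(g * f(zk) for g, f, zk in zip(gamma, ridge_fns, z))

x = np.random.randn(8)
W = np.random.randn(3, 8)                      # K = 3 projection directions
out = explainable_net(x, W, gamma=[0.5, -1.0, 2.0], mu=0.1,
                      ridge_fns=[np.tanh, np.sin, lambda t: t * t])
```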
## II Explainability of quantum neural networks
Quantum neural networks[16, 17, 18, 19] are generally based on variational quantum circuits[20] and can be described by
\[\left\langle\mathbf{x}\right|W(\theta)^{\dagger}\hat{O}W(\theta)\left|\mathbf{x}\right\rangle, \tag{6}\]
where \(\hat{O}\) represents the measurement operator, \(\left|\mathbf{x}\right\rangle\) is the input vector formatted as a quantum state, and \(W(\theta)\) is a unitary matrix generated by the quantum gates with the angle values defined by \(\theta\). Here, by abuse of notation, we can consider \(\hat{O}\) as a selector set on the parts of \(W(\theta)\left|\mathbf{x}\right\rangle\): e.g., to obtain the probability of measuring the first qubit in the \(\left|0\right\rangle\) state, in vector form, we select the first half of the output and combine the squared absolute values of its elements. Then, the measurement output of the quantum circuit applied to \(\left|\mathbf{x}\right\rangle\) can be rewritten as:
\[\sum_{i\in\hat{O}}|\left\langle\mathbf{w_{i}}\right|\mathbf{x}\right\rangle|^ {2}=\sum_{i\in\hat{O}}f_{i}(\left\langle\mathbf{w_{i}}\right|\mathbf{x}), \tag{7}\]
where \(\left\langle\mathbf{w_{i}}\right|\) represents a row of the unitary matrix \(W(\theta)\).
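The following Python sketch makes Eq. (7) concrete for a measurement of the first qubit, using a Haar-random unitary as a stand-in for the trained \(W(\theta)\) and following the text's convention of selecting the first half of the output vector; the random unitary is an assumption of this illustration.

```python
import numpy as np
from scipy.stats import unitary_group

n_qubits = 3
N = 2 ** n_qubits
W = unitary_group.rvs(N)                       # stand-in for W(theta)
x = np.random.randn(N) + 1j * np.random.randn(N)
x /= np.linalg.norm(x)                         # normalized input state |x>

# Eq. (7): the probability of the first qubit being |0> is a sum of
# ridge-like terms |<w_i|x>|^2 over the first half of the rows of W
output = sum(abs(W[i] @ x) ** 2 for i in range(N // 2))

psi = W @ x                                    # full output state
assert np.isclose(output, np.sum(np.abs(psi[: N // 2]) ** 2))
```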
In variational quantum circuits, a change in any single element of \(\theta\) may in general affect multiple rows of \(W(\theta)\). This can complicate studies that try to understand the quantum neural network model. Therefore, to make the \(f_{i}\)s independent of each other, we can use a linear combination of unitary matrices [21, 22]. Following Ref. [22], we can write each row of \(W(\theta)\) as the first row of its own unitary matrix and combine these matrices into a block-diagonal matrix:

\[\mathcal{V}=\begin{pmatrix}V_{1}&&\\ &\ddots&\\ &&V_{N}\end{pmatrix}_{N^{2}\times N^{2}},\qquad V_{i}=\begin{pmatrix}\left\langle\mathbf{w_{i}}\right|\\ \bullet\\ \vdots\\ \bullet\end{pmatrix}_{N\times N}, \tag{8}\]
where \(N\) is the dimension of \(W(\theta)\). Using the direct sum, we can write \(\mathcal{V}=\bigoplus_{i=1}^{N}V_{i}\), with \(V_{i}\) representing the unitary matrix for \(\mathbf{w_{i}}\) (the rows marked by bullets are only required to complete each \(V_{i}\) to a unitary). Note that any \(N\)-dimensional vector can be formed with \(O(N)\) quantum gates as the leading row of a unitary matrix by using its Schmidt decomposition. Therefore, constructing \(\mathcal{V}\) for a generic \(W(\theta)\) requires at most \(O(N^{2})\) gates (see Ref. [22] for a complexity analysis of the circuit).
The equivalent quantum state to the output of \(W(\theta)\left|\mathbf{x}\right\rangle\) can be generated as a part of the outcome of the following transformation:
\[\left|\psi\right\rangle=\mathcal{V}\begin{pmatrix}\left|\mathbf{x}\right\rangle \\ \vdots\\ \left|\mathbf{x}\right\rangle\end{pmatrix}_{N^{2}\times 1}. \tag{9}\]
In a simplified form, let \(\psi_{i}\) denote the \(i\)th element of \(\left|\psi\right\rangle\), with \(0\leq i<N^{2}\). We can define a new selector operator that selects every \(\psi_{i}\) with \(i\bmod N=0\). That means we can still use a definition similar to Eq. (7):
\[\sum_{\begin{subarray}{c}0\leq i<N^{2}\\ i\bmod N=0\end{subarray}}|\psi_{i}|^{2}=\sum_{i=1}^{N}|\left\langle\mathbf{w_{i}}|\mathbf{x}\right\rangle|^{2}. \tag{10}\]
Note that by writing the quantum operator in this way, we simply make the weight vectors independent of each other: the impact of an angle change in one quantum gate can be limited to a single vector. Therefore, this model is at least as powerful as using a single unitary matrix, where a change may affect multiple rows. In addition, studying approximations in this model may be easier, since we can readily see how many independent weight vectors are needed and how each weight vector affects the result and the training.
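A small numerical sketch of Eqs. (8)-(10): here each \(\mathbf{w_{i}}\) is taken as the first row of an independently drawn random unitary (rather than derived from a specific \(W(\theta)\)), which suffices to illustrate how the block-diagonal construction and the \(i\bmod N=0\) selector recover \(\sum_{i}|\langle\mathbf{w_{i}}|\mathbf{x}\rangle|^{2}\).

```python
import numpy as np
from scipy.linalg import block_diag
from scipy.stats import unitary_group

N = 4
V_blocks = [unitary_group.rvs(N) for _ in range(N)]   # one block V_i per weight vector
w = [V[0] for V in V_blocks]                          # <w_i| is the first row of V_i

x = np.random.randn(N) + 1j * np.random.randn(N)
x /= np.linalg.norm(x)

V = block_diag(*V_blocks)                  # Eq. (8): N^2 x N^2 block-diagonal unitary
psi = V @ np.tile(x, N)                    # Eq. (9): V applied to N stacked copies of |x>

selected = psi[::N]                        # selector: indices with i mod N == 0
assert np.allclose(selected, [wi @ x for wi in w])
output = np.sum(np.abs(selected) ** 2)     # Eq. (10)
```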
### _Explainability of networks that are represented by exponentials_
Ridge functions are also used to approximate the integral forms of functions [23]; one of the most commonly used is the following Fourier form:
\[\Psi_{\mathbf{w_{k}}}:=e^{\mathrm{i}\mathbf{x}\cdot\mathbf{w_{k}}}, \tag{11}\]
Here, the complex exponential can be rewritten using only real-valued functions (sine and cosine); therefore, \(\Psi_{\mathbf{w_{k}}}\) can be considered a ridge function for each \(\mathbf{w_{k}}\). Consequently, quantum neural networks such as [24] can also be explained as approximations through linear combinations of ridge functions.
This can also be used to understand quantum circuits defined through Ising-type Hamiltonians [25], as well as machine learning tasks solved through adiabatic quantum computation [26, 27].
## III Discussion and Conclusion
For a given set of weight vectors, there are many works in the literature on the conditions for approximating a given function with unknown ridge functions. For instance [28], let \(C(X)\) be the space of continuous functions on \(X\); then, for a given \(\{\mathbf{w_{1}},\mathbf{w_{2}}\}\) with the associated projections \(h_{i}(x)=\mathbf{w_{i}}\cdot x\), every function in \(C(X)\) can be written as \(g_{1}(h_{1}(x))+g_{2}(h_{2}(x))\) for an appropriate choice of continuous functions \(g_{1}\) and \(g_{2}\) if and only if the lengths of the \(h_{1}\)-\(h_{2}\) paths in \(X\) are bounded by some positive integer. From the Kolmogorov-Arnold representation theorem [29], we know that any multivariate continuous function can be represented as a sum of univariate functions. This theorem also explains how the hidden layers in classical neural networks help in approximation [29, 30, 31]. However, a continuous function may not be represented exactly by an approximation with ridge functions [10, 29]. Therefore, the approximation rates of quantum neural networks that reduce to Eqs. (7) and (10) can be well understood. On the other hand, this indicates that the limitations of approximations with ridge functions may also be limitations for these types of quantum neural networks.
In this brief paper, we show that quantum neural networks can be understood as a linear combination of ridge functions, a framework that is used to study the interpretability and explainability of classical neural networks. In particular, Eq. (7) and Eq. (10) can be used to describe quantum neural network models. In addition, since such a network can be written as a linear combination of ridge functions, the approximation errors of quantum neural networks, together with upper and lower bounds on these errors, can be studied through this formulation.
In quantum neural networks that reduce to Eqs. (7) and (10), the approximation is carried out using \(N\) ridge functions. Here, \(N\) is in general exponential in the number of qubits and can be considered a very large number. For general function approximation, these equations may help in deciding on the size of \(N\) and the required number of operations and qubits before any application. However, the problems solved by neural networks are in general not easy to define as functions; therefore, it is not easy to decide how many functions (and hence quantum operations and qubits) are required to make the model more trainable.
|
2310.03679 | Role of Spatial Coherence in Diffractive Optical Neural Networks | Diffractive optical neural networks (DONNs) have emerged as a promising
optical hardware platform for ultra-fast and energy-efficient signal processing
for machine learning tasks, particularly in computer vision. Previous
experimental demonstrations of DONNs have only been performed using coherent
light. However, many real-world DONN applications require consideration of the
spatial coherence properties of the optical signals. Here, we study the role of
spatial coherence in DONN operation and performance. We propose a numerical
approach to efficiently simulate DONNs under incoherent and partially coherent
input illumination and discuss the corresponding computational complexity. As a
demonstration, we train and evaluate simulated DONNs on the MNIST dataset of
handwritten digits to process light with varying spatial coherence. | Matthew J. Filipovich, Aleksei Malyshev, A. I. Lvovsky | 2023-10-05T16:56:25Z | http://arxiv.org/abs/2310.03679v2 | # Role of Spatial Coherence in Diffractive Optical Neural Networks
###### Abstract
Diffractive optical neural networks (DONNs) have emerged as a promising optical hardware platform for ultra-fast and energy-efficient signal processing for machine learning tasks, particularly in computer vision. However, previous experimental demonstrations of DONNs have only been performed using coherent light, which is not present in the natural world. Here, we study the role of spatial optical coherence in DONN operation. We propose a numerical approach to efficiently simulate DONNs under input illumination with arbitrary spatial coherence and discuss the corresponding computational complexity using coherent, partially coherent, and incoherent light. We also investigate the expressive power of DONNs and examine how coherence affects their performance. In particular, we show that under fully incoherent illumination, the DONN performance cannot surpass that of a linear model. As a demonstration, we train and evaluate simulated DONNs on the MNIST dataset of handwritten digits using light with varying spatial coherence.
## I Introduction
Machine learning has led to breakthroughs in many fields of technology including computer vision, natural language processing, and drug discovery [1, 2, 3, 4]. Currently, machine learning models are executed using specialized electronic hardware, such as graphics processing units (GPUs) and tensor processing units (TPUs), which harness immense processing power and data parallelism. However, the growing compute requirements from advanced deep learning models are far outpacing hardware improvements anticipated by Moore's law scaling [5]. Consequently, progress in machine learning using current digital hardware will soon become technically and economically unsustainable, while also producing a significant environmental impact [6, 7].
Given the constraints imposed by digital electronics, optics has gained recognition as a promising platform for machine learning applications with low latency, high bandwidth, and low energy consumption [8]. Several optical implementations of neural networks have recently been demonstrated using both free-space optics [9, 10, 11, 12] and integrated photonics [13, 14, 15, 16]. Computational speeds of trillions of operations per second have been achieved using optical processing hardware [17, 18], and optical neural networks using less than one photon per multiplication have been experimentally realized [19, 20]. Additionally, optical architectures for implementing _in situ_ training of neural networks have been demonstrated [21, 22, 23, 24, 25, 26].
Diffractive optical neural networks (DONNs) are specialized hardware architectures that harness diffraction effects to process optical signals in free space [27, 28]. DONNs are generally composed of several successive modulation surfaces, denoted as diffractive layers, that modify the phase and/or amplitude of the incident optical signals through light-matter interactions, as shown in Fig. 1a. The diffractive layers contain discrete pixels, each with an independent complex-valued transmittance coefficient. The output of the DONN corresponds to the total intensity of the optical field incident on designated detection regions in the output plane.
This architecture is particularly promising for "deep optics" applications [29], i.e., complex processing and recognition of images without converting the optical signal into electronic digital form. In addition to machine learning tasks, DONNs have further applications in super-resolution imaging and displays, microscopy, quantitative phase imaging, and image classification through random diffusers [30, 31, 32, 33]. Several physical realizations of diffractive layers have been experimentally demonstrated using 3D printed materials, metamaterials, and spatial light modulators [34, 35, 36].
DONNs are usually trained _in silico_, i.e., the physical DONN is modeled using a computer to simulate the evolution of the optical signals through the system. The modulation patterns of the diffractive layers are optimized to achieve the desired transformation between the input and target output of the DONN, which is analogous to optimizing the weights in standard neural network models [37]. During training, the transmittance coefficient of each pixel in the diffractive layers is iteratively updated using an optimization algorithm to minimize the error of the model's output with respect to the training set. The backpropagation algorithm is used to efficiently calculate the gradient of the loss with respect to the transmittance coefficients [38].
DONNs are particularly well-suited for use with real-world optical signals, as the optical fields can be directly fed into the system. However, such signals are typically incoherent. In contrast, most of the existing experimental work with DONNs is performed using fully-coherent illumination from laser sources.
In this paper, we introduce a computationally efficient framework for simulating and training DONNs using incident illumination with arbitrary spatial coherence and discuss the corresponding computational complexity. We also investigate the role of optical coherence on the expressive ability of DONNs to extract complex relationships in the input data. In particular, we show that under fully incoherent illumination, the DONN performance cannot surpass that of a single-layer linear neural network. In contrast, when some degree of coherence is present, measuring the intensity at the output constitutes a nonlinear operation, which plays the role of an activation function in standard neural network models. This permits reaching performance
levels (e.g., classification accuracy) beyond the capabilities of a linear model. To illustrate our findings, we evaluate the performance of simulated DONNs trained on the MNIST dataset of handwritten digits using light with varying optical coherence.
A deep-learning based method was recently proposed for implementing linear transformations with DONNs under spatially incoherent illumination [39]. The method approximates incoherence for a single input example by averaging the output intensity patterns from numerous coherent input fields with random phase distributions. In contrast, our approach operates with the mutual intensity function (i.e., mutual coherence function), which is a compact and efficient way to represent the statistical properties of a partially coherent field [40]. This enables us to compute the propagation of light with arbitrary coherence through diffractive layers much more efficiently and accurately.
## II Results
### Coherent illumination
In this section, we introduce a formalism for describing the evolution of coherent, monochromatic optical fields through DONNs using scalar diffraction theory [41]. We treat the optical field as a complex scalar quantity and employ Dirac notation to represent the transverse profile of the field at discrete spatial positions using ket-vectors. This discretization does not affect the generality of our treatment as long as the spatial sampling interval (i.e., pixel pitch) is much smaller than the characteristic transverse field feature size. The transformations applied to the field as it evolves through the DONN, which include free-space propagation and transmission through modulation surfaces, are expressed using linear operators.
At each layer in the DONN, the optical field is modulated by the diffractive surface and subsequently propagates through free space to the next layer. The incident field at the \(m\)-th discrete pixel of the \(l\)-th diffractive layer, before modulation, is represented by \(\psi_{l}(m)\), where the time dependence of the signal is absent. The transverse profile of the field can be expressed using Dirac notation as \(\ket{\psi_{l}}=\sum_{m}\psi_{l}(m)\ket{m}\), where the set \(\{\ket{m}\}\) of all pixels forms an orthonormal basis. The mapping between the optical fields in the \(l\)-th and \((l\)+1\()\)-th layers can be expressed as \(\ket{\psi_{l+1}}=\hat{P}_{l}\hat{T}_{l}\ket{\psi_{l}}\), where \(\hat{T}_{l}\) and \(\hat{P}_{l}\) are the transmission and free-space propagation operators, respectively.
The transmission operator \(\hat{T}_{l}\) describes the phase and/or amplitude modulation applied to the optical field by each pixel in the \(l\)-th diffractive layer. The corresponding matrix is diagonal:
\[\hat{T}_{l}=\sum_{m}t_{l}(m)\cdot\ket{m}\!\!\bra{m}, \tag{1}\]
where \(t_{l}(m)\) is the complex-valued transmittance coefficient
Figure 1: Diffractive optical neural network (DONN) architecture. **a** Illustration of a DONN trained to identify handwritten digits. The DONN is comprised of \(L\) diffractive layers that modulate the optical field as it propagates through the system. The output plane encompasses ten detection regions, which are each associated with a unique digit, and the predicted output corresponds to the region with the highest optical intensity. The transmission and propagation operators at the \(l\)-th layer are denoted by \(\hat{T}_{l}\) and \(\hat{P}_{l}\), respectively. **b** Summary of the transformations applied to the input field \(\ket{\psi_{\mathrm{in}}}\) in a DONN. The linear evolution operator \(\hat{U}\) maps the input field onto the output field. The intensity \(I_{\mathrm{out}}\) at the output plane is measured, and the output of the DONN, represented by \(o\), corresponds to the total intensity incident on each detection region.
at the \(m\)-th pixel in the \(l\)-th diffractive layer, which satisfies \(|t_{l}(m)|\leq 1\).
The operator \(\hat{P}_{l}\) describes the free-space propagation of the field between the \(l\)-th and (\(l\)+1)-th diffractive layers using the Rayleigh-Sommerfeld solution [41]:
\[\hat{P}_{l}=\sum_{m,n}h(m,n)\cdot|n\rangle\!\langle m|\,, \tag{2}\]
with
\[h(m,n)=\frac{1}{i\lambda}\exp\left(\frac{i2\pi r(m,n)}{\lambda}\right)\frac{d }{r(m,n)^{2}} \tag{3}\]
being the point-spread function, i.e., the amplitude distribution in the (\(l\)+1)-th layer if only the \(m\)-th pixel of the \(l\)-th layer is illuminated. In the above equation, \(\lambda\) is the wavelength of the coherent optical signal (the central wavelength for quasimonochromatic light), \(d\) is the axial distance between the two diffractive layers, and \(r(m,n)\) is the Euclidean distance between the \(m\)-th and \(n\)-th pixels in the \(l\)-th and (\(l\)+1)-th layers, respectively. The above expression is valid when the axial distance between layers is much greater than the (central) wavelength of light.
The input image processed by the DONN is encoded in the initial field \(|\psi_{\text{in}}\rangle\). The output optical field, represented by \(|\psi_{\text{out}}\rangle\), can then be expressed as
\[|\psi_{\text{out}}\rangle=\hat{U}\,|\psi_{\text{in}}\rangle\,, \tag{4}\]
where \(\hat{U}\) is the evolution operator of the DONN that maps the input optical field onto the output field (i.e., the spatial impulse response of the system):
\[\hat{U}=\prod_{l=1}^{L}\left(\hat{P}_{l}\hat{T}_{l}\right)\hat{P}_{0}, \tag{5}\]
where the DONN has \(L\) diffractive layers. At the output plane of the DONN, the intensity of the evolved field is measured using image sensors:
\[I_{\text{out}}(n)=|\,\langle n|\psi_{\text{out}}\rangle\,|^{2}=|\psi_{\text{ out}}(n)|^{2}. \tag{6}\]
For classification tasks, the output of the DONN corresponding to each class \(c\) is defined as the total intensity incident on a specified spatial detection region \(\mathcal{D}_{c}\) in the output plane:
\[o(c)=\sum_{n\in\mathcal{D}_{c}}I_{\text{out}}(n). \tag{7}\]
A summary of the mathematical operations applied to the input optical signal by the DONN during inference is shown in Fig. 1b.
Training DONNs using a computer (i.e., _in silico_) requires simulating the evolution of coherent optical fields through the system. The calculated optical field at each layer is then used during the backward pass to compute the gradient of the loss function with respect to the diffractive layer transmittance coefficients. A naive numerical implementation of the propagation operator \(\hat{P}_{l}\) using matrix-vector multiplication has a computational complexity of \(\mathcal{O}(N^{2})\), where \(N\) is the number of pixels per layer. However, this can be optimized by noting that each propagation operator is described by a Toeplitz matrix since the point-spread function in Eq. (3) is invariant with respect to translation in space. Hence, the application of this operator constitutes a convolution operation with the input field. This convolution can be computed by applying the Fourier transform to the input field, followed by multiplication by the Fourier image of the Toeplitz matrix, followed by the inverse Fourier transform. By utilizing the fast Fourier transform algorithm, this operation can be completed in \(\mathcal{O}(N\log N)\) time [42]. Additionally, the transmission operator \(\hat{T}_{l}\) is described by a diagonal matrix and can be evaluated in \(\mathcal{O}(N)\) time. Therefore, calculating the evolution of \(B\) different input fields through a DONN with \(L\) layers and \(N\) pixels per layer has a computational complexity of \(\mathcal{O}(BLN\log N)\), and the backward pass has the same complexity.
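As an illustration of the FFT-based propagation step, the following Python sketch implements the convolution with the point-spread function of Eq. (3) on a sampled 2D grid; the grid size and parameter values are illustrative, and in practice the field should be zero-padded to suppress circular-convolution wrap-around.

```python
import numpy as np

def propagate(field, wavelength, d, pitch):
    """Free-space propagation of a sampled complex field over distance d,
    via FFT convolution with the Rayleigh-Sommerfeld kernel of Eq. (3)."""
    n = field.shape[0]
    c = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(c, c)
    r = np.sqrt(X**2 + Y**2 + d**2)                      # pixel-to-pixel distance
    h = np.exp(2j * np.pi * r / wavelength) * d / (1j * wavelength * r**2)
    H = np.fft.fft2(np.fft.ifftshift(h))                 # kernel spectrum
    return np.fft.ifft2(np.fft.fft2(field) * H) * pitch**2

field = np.zeros((128, 128), dtype=complex)
field[64, 64] = 1.0                                      # point source
out = propagate(field, wavelength=700e-9, d=0.05, pitch=10e-6)
```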
### Arbitrary spatial coherence illumination
Using DONNs for real-world applications requires the ability to process incoherent and partially coherent light. We assume quasimonochromatic illumination conditions, which is a good approximation for many cases. These conditions require that the input light is narrowband and its coherence length is much greater than the maximum path length difference between diffractive layers [43]. At the same time, we assume the coherence time to be much shorter than the inverse detection bandwidth, so the detection averages in time over the non-stationary interference pattern.
The spatial coherence of the optical field in the \(l\)-th layer is characterized by the mutual intensity function, which determines the time-averaged correlation of the field at two separate pixels [40; 43]:
\[J_{l}(m,m^{\prime})=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}\psi _{l}(m;t)\,\psi_{l}^{*}(m^{\prime};t)\,\mathrm{d}t,\]
where \(T\) is the detection time. This matrix represents an operator
\[\hat{J}_{l}=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}|\psi_{l}(t) \rangle\!\langle\psi_{l}(t)|\ \mathrm{d}t, \tag{8}\]
where \(J_{l}(m,m^{\prime})=\langle m|\hat{J}_{l}|m^{\prime}\rangle\). The time-averaged intensity of the field is given by the diagonal elements of the mutual intensity operator, such that
\[I_{l}(m)=J_{l}(m,m). \tag{9}\]
Similar to the evolution of coherent fields through DONNs, the evolution of the mutual intensity operator can be expressed using the transmission and propagation operators. The input mutual intensity operator \(\hat{J}_{\text{in}}\) describes the spatial coherence of the initial field that encodes the input image to be processed by the DONN. The output mutual intensity operator is given by
\[\hat{J}_{\text{out}}=\hat{U}\,\hat{J}_{\text{in}}\,\hat{U}^{\dagger}, \tag{10}\]
where \(\hat{U}\) is the evolution operator of the DONN defined in Eq. (5). The output of the DONN corresponds to the total time-averaged intensity, defined in Eq. (9), incident on the spatial detection regions along the output plane:
\[o(c)=\sum_{n\in\mathcal{D}_{c}}J_{\text{out}}(n,n). \tag{11}\]
The evolution of the mutual intensity operator and the corresponding DONN output can be simulated on a computer using Eqs. (10) and (11). Analogous to the previously discussed method, the fast Fourier transform can be leveraged to evaluate the propagation operator \(\hat{P}_{l}\) applied to an arbitrary mutual intensity operator described by an \(N\times N\) matrix, which scales as \(\mathcal{O}(N^{2}\log N)\). The transmission operator \(\hat{T}_{l}\) can similarly be evaluated in \(\mathcal{O}(N^{2})\) time. Hence, simulating the evolution of \(B\) different input fields with arbitrary spatial coherence through a DONN with \(L\) layers and \(N\) pixels per layer has a computational complexity of \(\mathcal{O}(BLN^{2}\log N)\). The backward pass executed during training has the same computational complexity.
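As a concrete illustration of Eqs. (10) and (11), the following Python sketch evolves a mutual intensity matrix through a small one-dimensional toy DONN, with the propagation operators represented as dense matrices; the function signature and array layouts are illustrative, and no FFT optimization is applied.

```python
import numpy as np

def donn_output(J_in, t_list, P_list, regions):
    """Evolve a mutual intensity matrix through the DONN and read out
    the per-class detected power.
    t_list:  complex transmittance vector of each layer (diagonal of T_l)
    P_list:  propagation matrices P_0, ..., P_L
    regions: one index array per class"""
    U = P_list[0]
    for t, P in zip(t_list, P_list[1:]):
        U = P @ (t[:, None] * U)           # U = P_L T_L ... P_1 T_1 P_0, Eq. (5)
    J_out = U @ J_in @ U.conj().T          # Eq. (10)
    I_out = np.real(np.diag(J_out))        # time-averaged intensity, Eq. (9)
    return np.array([I_out[r].sum() for r in regions])   # Eq. (11)
```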
### Incoherent illumination
DONNs with fully incoherent input can be treated using the computational approach discussed in the previous section. However, computational costs can be amortised for multiple input examples using the impulse response that characterizes the system.
Fully incoherent input is described by the diagonal mutual intensity operator
\[J_{\text{in}}(m,m^{\prime})=I_{\text{in}}(m)\,\delta_{m,m^{\prime}}, \tag{12}\]
where the time-averaged intensity \(I_{\text{in}}(m)\) encodes information from the \(m\)-th pixel of the input image. We can express the corresponding time-averaged intensity along the output plane, using Eqs. (9) and (10), as
\[I_{\text{out}}(n)=\sum_{m}I_{\text{in}}(m)\cdot\Big{|}\left\langle n|\hat{U}| m\right\rangle\Big{|}^{2}. \tag{13}\]
Here, \(|\left\langle n|\hat{U}|m\right\rangle|^{2}\) corresponds to the intensity at the \(n\)-th pixel in the output plane from point-source illumination at the \(m\)-th input pixel (i.e., the intensity impulse response of the system).
The intensity impulse response of a DONN with \(L\) layers and \(N\) pixels per layer can be determined by calculating the coherent evolution of all \(N\) input pixels through the system, which has a computational complexity of \(\mathcal{O}(LN^{2}\log N)\). This is the same complexity as the previous method using the evolution of the mutual intensity operator. However, once the intensity impulse response is known, the output intensity distributions for \(B\) different incoherent input fields can be calculated using Eq. (13) with a reduced computational complexity of \(\mathcal{O}(BN^{2})\). Thus, the total computational cost of simulating DONN inference with fully incoherent input for multiple input examples can be reduced. This technique can be implemented during training to improve the simulation runtime by using mini-batches of input examples, as calculating the unit responses of the DONN is only required once for each mini-batch. A summary of the computational complexities of simulating DONNs under coherent, arbitrary coherence, and incoherent illumination is shown in Table 1.
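The amortized incoherent computation of Eq. (13) reduces to a single matrix product once the intensity impulse response is known; a sketch follows, with a random unitary standing in for the system operator \(\hat{U}\).

```python
import numpy as np

def incoherent_outputs(I_in_batch, U):
    """Eq. (13): output intensities for a batch of incoherent inputs."""
    R = np.abs(U) ** 2                     # intensity impulse response, computed once
    return I_in_batch @ R.T                # O(B N^2) for B input examples

# consistency check against the mutual-intensity route for one example
N = 16
U = np.linalg.qr(np.random.randn(N, N) + 1j * np.random.randn(N, N))[0]
I_in = np.random.rand(N)
J_out = U @ np.diag(I_in) @ U.conj().T     # Eq. (10) with the diagonal J_in of Eq. (12)
assert np.allclose(np.real(np.diag(J_out)), incoherent_outputs(I_in[None, :], U)[0])
```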
### Expressivity of DONNs
The expressive power of DONNs is dependent on the spatial coherence of the input light. Under coherent illumination, DONNs have been shown to outperform linear models [24; 27]. Since the coherent field evolves linearly through the system, this improvement in performance results from the nonlinear intensity measurement of the complex-valued field at the output plane (6), followed by the linear summation of the intensities over the detection regions (7). Thus, DONNs using coherent illumination can be understood as standard neural networks that consist of a complex-valued linear layer with a nonlinear activation function, followed by a real-valued linear layer, as shown in Fig. 1b.
In contrast, under incoherent illumination, the time-averaged output intensity is the sum of the intensity patterns from individual pixel sources in the input plane. Therefore, DONNs with incoherent input illumination cannot perform better than a linear model, as the time-averaged input and output intensity distributions are linearly related according to the intensity impulse response, as shown in Eq. (13).
The improved expressive power of a coherently illuminated DONN arises from the off-diagonal elements in the input mutual intensity operator, which are absent for incoherent light. These elements represent the spatial coherence between two different pixels in the input image. For an arbitrary input mutual intensity operator \(J_{\text{in}}\), the output intensity can be expressed, using Eqs. (9) and (10), as
\[I_{\text{out}}(n)=\sum_{m,m^{\prime}}J_{\text{in}}(m,m^{\prime})\cdot\langle n |\hat{U}|m\rangle\cdot\langle m^{\prime}|\hat{U}^{\dagger}|n\rangle\,. \tag{14}\]
This summation includes off-diagonal elements of the mutual intensity operator, which depend nonlinearly on the input field. Due to this nonlinearity, the performance of DONNs under partially coherent illumination can surpass that with incoherent light, as demonstrated in the following section.
### Performance on MNIST dataset
Using the formalism introduced in the previous section, we trained simulated DONN models to identify handwrit
\begin{table}
\begin{tabular}{l l} \hline \hline Spatial Coherence & Computational Complexity \\ \hline Coherent & \(\mathcal{O}(BLN\log N)\) \\ Arbitrary Coherence & \(\mathcal{O}(BLN^{2}\log N)\) \\ Incoherent & \(\mathcal{O}(BN^{2})+\mathcal{O}(LN^{2}\log N)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Computational complexity of modeling the evolution of \(B\) input examples through a DONN with \(L\) layers and \(N\) pixels per layer under different illumination conditions.
ten digits from zero to nine using incoherent, partially coherent, and coherent illumination. The models were trained over 50 epochs using 55,000 images (plus 5,000 images for validation) from the MNIST dataset, each consisting of \(28\times 28\) pixels [44]. The DONNs are composed of five diffractive layers, each with \(100\times 100\) pixels, which modulate the phase of the incident light and are spaced 5 cm apart. Each model was trained using a uniform, normalized optical field incident on the input image of the handwritten digit with a central wavelength of 700 nm. The cross-entropy loss function was used during training to calculate the output error of the model. Each pixel in the diffractive layers has a surface area of \(10\times 10\) um\({}^{2}\), while each pixel in the input pattern is \(30\times 30\) um\({}^{2}\). Each detection region in the output plane, which is associated with a unique digit, has an area of \(250\times 250\) um\({}^{2}\). DONNs trained under coherent and incoherent illumination are illustrated in Fig. 2.
The spatial coherence of the input illumination is given by \(J_{\mathrm{in}}(m,m^{\prime})=\sqrt{I_{\mathrm{in}}(m)I_{\mathrm{in}}(m^{\prime})}\,\mu^{r_{\mathrm{norm}}(m,m^{\prime})}\), where \(I_{\mathrm{in}}(m)\) is the time-averaged intensity at the \(m\)-th pixel in the input image, \(r_{\mathrm{norm}}(m,m^{\prime})\) is the Euclidean distance between the \(m\)-th and \(m^{\prime}\)-th input pixels, normalized by the pixel pitch, and \(\mu\) quantifies the degree of spatial coherence: \(\mu=1\) for fully coherent and \(\mu=0\) for fully incoherent light. We first trained two DONN models to process handwritten digits using fully coherent and incoherent illumination. During the training phase, we saved the model parameters that yielded the highest validation accuracy. We then evaluated the performance of these models using
Figure 2: Evolution of input examples in a DONN with five layers trained using coherent and incoherent illumination. **a** The top row shows the phase-only modulation profiles of the five diffractive layers, which were trained to process the MNIST dataset using coherent illumination. In the middle row, the digit zero is fed into the system in the input layer and the time-averaged intensity at each layer is shown. The detection regions at the output layer are indicated using blue and red boxes, where the red box corresponds to the target region. The output of the DONN, which is the total intensity in each detection region, is shown on the right. In the bottom row, the evolution of the digit three is shown. The same scaling for the intensity values is used across each row. **b** The equivalent visualization of a DONN trained using incoherent illumination.
a test set of 10,000 images that were not shown during training. The models trained using coherent and incoherent light achieved test accuracies of 97.54% and 91.17%, respectively. The validation accuracy attained during training, as well as the performance of the models on the test set, are shown in Fig. 3.
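For reference, the mutual intensity model above can be sketched in a few lines of NumPy; the function name and toy input are hypothetical, and distances are measured in units of the pixel pitch.

```python
import numpy as np

def mutual_intensity(I_img, mu):
    """J_in(m, m') = sqrt(I(m) I(m')) * mu**r_norm(m, m'), with r_norm the
    Euclidean inter-pixel distance in units of the pixel pitch (sketch)."""
    H, W = I_img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    r_norm = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    I = I_img.ravel()
    return np.sqrt(np.outer(I, I)) * mu**r_norm  # mu=1: coherent, mu=0: incoherent

J = mutual_intensity(np.ones((8, 8)), mu=0.6)    # small toy example
```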
We then trained DONNs using input illumination with partially coherent light, where each model was trained to process light with a different degree of spatial coherence. The performance of the models was evaluated, and the test accuracies are shown in Fig. 4a. We also evaluated the robustness of the models by testing their performance using input light with degrees of spatial coherence different from that used during training (Fig. 4b). Not surprisingly, the best performance is achieved when the model is evaluated under the same coherence conditions used during training. However, models trained using incoherent illumination are more robust against changes in the spatial coherence.
## III Discussion
We have demonstrated that the performance of DONNs is dependent on the spatial coherence of the incident illumination. Models using incoherent illumination cannot outperform linear models for information processing tasks. However, as demonstrated in Fig. 4a, the degree of spatial coherence required to achieve optimal performance need not be high: \(\mu\sim 0.6\) means that the mutual coherence between points separated by four pixels is reduced by a factor of 0.13. That is, performance almost at the fully coherent level can be reached even when the transverse coherence length is much less than the size of a whole MNIST digit. This implies that neighboring pixels contain more relevant information for pattern recognition compared to distant pixels. As a result, the DONN model can capture relevant nonlinear relationships in the input data without requiring full spatial coherence between all pixels. In addition to the
Figure 4: Performance of DONN models on the MNIST dataset with varied coherence. **a** Test accuracy achieved with input light of a specified degree of spatial coherence \(\mu\). **b** Accuracies attained by models trained with illumination of spatial coherence \(\mu_{\mathrm{train}}\) but tested using input illumination with spatial coherence \(\mu_{\mathrm{test}}\). The test accuracies along the diagonal correspond to using the same illumination conditions for training and testing, as shown in (a).
Figure 3: DONN training results on the MNIST dataset under coherent and incoherent illumination. **a** Validation accuracy attained by the models at each epoch during training. **b, c** Confusion matrix of the model trained using coherent (b) and incoherent (c) illumination evaluated on the test set.
coherence of the incident light, several other factors affect DONN performance, including the training hyperparameters, the axial distances between layers, and the size of the detection regions.
We emphasize that the above relation between the input coherence and DONN expressivity assumes that no further processing of the DONN data is implemented. If, for example, the DONN is followed by an electronic neural network with nonlinear activation layers, the DONN can surpass a linear model even if illuminated incoherently. For example, Rahman _et al._ trained a DONN to classify MNIST by associating two detection regions with each digit and then applying a rational function to compute the network prediction from the intensities of these regions. In this way, the accuracy reached was above that of a linear classifier [39].
Incoherently illuminated DONNs are more broadly applicable to real-world environments, as coherent illumination requires a laser source. However, some degree of coherence can also be achieved by illuminating the object with a distant incoherent source of narrow spatial extent, according to the van Cittert–Zernike theorem [40]. As discussed above, illumination with even a short transverse coherence length can significantly enhance DONN performance.
The robustness of the system can be improved by training the DONN using various illumination conditions, which could be useful for applications that require DONN operation in different environments. Moreover, the system can be further generalized to operate under a continuum of central frequencies, which has been recently experimentally demonstrated using coherent light [45].
###### Acknowledgements.
This work is supported by Innovate UK Smart Grant 10043476. MJF is funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 956071.
|
2306.01950 | Fast and Interpretable Nonlocal Neural Networks for Image Denoising via
Group-Sparse Convolutional Dictionary Learning | Nonlocal self-similarity within natural images has become an increasingly
popular prior in deep-learning models. Despite their successful image
restoration performance, such models remain largely uninterpretable due to
their black-box construction. Our previous studies have shown that
interpretable construction of a fully convolutional denoiser (CDLNet), with
performance on par with state-of-the-art black-box counterparts, is achievable
by unrolling a dictionary learning algorithm. In this manuscript, we seek an
interpretable construction of a convolutional network with a nonlocal
self-similarity prior that performs on par with black-box nonlocal models. We
show that such an architecture can be effectively achieved by upgrading the
$\ell_1$ sparsity prior of CDLNet to a weighted group-sparsity prior. From this
formulation, we propose a novel sliding-window nonlocal operation, enabled by
sparse array arithmetic. In addition to competitive performance with black-box
nonlocal DNNs, we demonstrate the proposed sliding-window sparse attention
enables inference speeds greater than an order of magnitude faster than its
competitors. | Nikola Janjušević, Amirhossein Khalilian-Gourtani, Adeen Flinker, Yao Wang | 2023-06-02T23:19:07Z | http://arxiv.org/abs/2306.01950v1 | # Fast and Interpretable Nonlocal Neural Networks
###### Abstract
Nonlocal self-similarity within natural images has become an increasingly popular prior in deep-learning models. Despite their successful image restoration performance, such models remain largely uninterpretable due to their black-box construction. Our previous studies have shown that interpretable construction of a fully convolutional denoiser (CDLNet), with performance on par with state-of-the-art black-box counterparts, is achievable by unrolling a dictionary learning algorithm. In this manuscript, we seek an interpretable construction of a convolutional network with a nonlocal self-similarity prior that performs on par with black-box nonlocal models. We show that such an architecture can be effectively achieved by upgrading the \(\ell_{1}\) sparsity prior of CDLNet to a weighted group-sparsity prior. From this formulation, we propose a novel sliding-window nonlocal operation, enabled by sparse array arithmetic. In addition to competitive performance with black-box nonlocal DNNs, we demonstrate the proposed sliding-window sparse attention enables inference speeds greater than an order of magnitude faster than its competitors.
Deep-learning, interpretable neural network, nonlocal self-similarity, group-sparsity, unrolled network, convolutional dictionary learning, image denoising
## I Background and Introduction
Nonlocal self-similarity (NLSS) of natural images has proven to be a powerful signal prior for classical and deep-learning based image restoration. However, state-of-the-art NLSS deep-learning methods are widely constructed as black-boxes, often rendering their analysis and improvement beholden to trial and error. Additionally, current implementations of the NLSS prior in deep-learning separately process overlapping image windows, falsely neglecting the dependency between these overlaps. Here, we address these two shortcomings of nonlocal deep neural networks (DNNs) from the perspective of interpretable architecture design and sparse array arithmetic.
A growing literature of DNNs, derived as direct parameterizations of classical image restoration algorithms, perform on par with state-of-the-art black-box fully convolutional neural networks, without employing common deep-learning tricks (such as batch-normalization, residual learning, and feature domain processing). This interpretable construction has been shown to be instrumental in obtaining parameter and dataset efficiency [1, 2, 3, 4, 5]. Our previous work, CDLNet [1], introduced a unique interpretable construction, based on convolutional dictionary learning, and achieved novel robustness to mismatches in observed noise-level during training and inference. By incorporating the NLSS prior into CDLNet, we demonstrate the first instance of an interpretable network bridging the performance gap to state-of-the-art nonlocal black-box methods for image denoising.
Nonlocal layers attempt to model long-range image dependencies by computing pixel-wise self-similarity metrics. To tame the quadratic computational complexity of this operation, image-restoration DNNs generally rely on computing an overlapping window NLSS (OW-NLSS) of the input, by which overlapping image windows are processed independently by the layer/network and subsequently averaged on overlapping regions to produce the final output [2, 6]. Naturally, OW-NLSS incurs a runtime penalty by redundant overlap processing and a restoration penalty due to the disregard for the correlation among these overlapping regions. In this work, we propose a novel sliding-window NLSS (SW-NLSS) operation that addresses these shortcomings.
Previous work used a patch-based group-sparsity prior and OW-NLSS in an interpretably constructed DNN [2]. However, this approach did not achieve competitive performance with black-box NLSS DNNs. In contrast, we propose to enforce pixel-wise group-sparsity of a latent representation with a dimensionality reduction on the number of channels. We also propose the novel SW-NLSS operation, and achieve denoising performance on par with the state-of-the-art methods at a fraction of the inference time.
We highlight the following contributions:
* a novel and efficient sliding-window nonlocal self-similarity operation which addresses the modeling and computational shortcomings of overlapping-window NLSS.
* a novel thresholding operation, inspired by a group-sparsity prior, which utilizes a reduced channel dimension of the latent space to achieve state-of-the-art inference speeds.
* an interpretable nonlocal CNN with competitive natural image denoising performance to state-of-the-art black-box models.
* a fast and open-source implementation1 in the Julia programming language [7]. Footnote 1: [https://github.com/nikopj/GroupCDL-TIP](https://github.com/nikopj/GroupCDL-TIP).
In Section II, we introduce the mathematics and notation behind classical convolutional dictionary learning and group-sparse representation. We also provide context for related black-box and interpretable deep-learning methods. In Section III, we introduce our sliding-window nonlocal CNN derived from group-sparse convolutional dictionary learning, dubbed GroupCDL. In Section IV, we show experimental results that compare GroupCDL to state-of-the-art deep learning methods.
## II Preliminaries and Related Work
### _Dictionary Learning and Group-Sparse Representation_
We consider the observation model of additive white Gaussian-noise (AWGN),
\[\mathbf{y}=\mathbf{x}+\mathbf{\nu},\quad\text{where}\quad\mathbf{\nu}\sim\mathcal{N}(\mathbf{0}, \sigma^{2}\mathbf{I}). \tag{1}\]
Here, the ground-truth image \(\mathbf{x}\in\mathbb{R}^{NC}\) is contaminated with AWGN of noise-level \(\sigma\), resulting in observed image \(\mathbf{y}\in\mathbb{R}^{NC}\). In the cases of grayscale and color images, we consider \(\mathbf{x},\mathbf{y}\) as vectors in \(\mathbb{R}^{N}\) or \(\mathbb{R}^{N3}\), respectively. For convenience and clarity of notation, we denote images in the vectorized form, and any linear operation on an image as a matrix vector multiplication (see Table I for details). In implementation, fast algorithms are used and these matrices are not actually formed, except when explicitly mentioned.
We frame our signal-recovery problem in terms of a (given) \(s_{c}\)-strided convolutional dictionary \(\mathbf{D}\in\mathbb{R}^{NC\times QM}\), with \(Q=N/s_{c}^{2}\), i.e. the columns of \(\mathbf{D}\) are formed by integer translates of a set of \(M\) (vectorized) 2D convolutional filters, each having \(C\) channels. We assume \(\exists\,\mathbf{z}\in\mathbb{R}^{QM}\;\mathrm{s.\;t.}\;\mathbf{x}\approx\mathbf{D}\mathbf{z}\). The rich works of sparse-representation and compressed-sensing provide guarantees based on assumptions of sparsity in \(\mathbf{z}\) and regularity on the columns of \(\mathbf{D}\)[8]. We refer to \(\mathbf{z}\) as our sparse-code, latent-representation, or subband-representation of \(\mathbf{x}\).
A popular classical paradigm for estimating \(\mathbf{x}\) from an observed noisy \(\mathbf{y}\) is the Basis Pursuit DeNoising (BPDN) model,
\[\underset{\mathbf{z}}{\text{minimize}}\;\frac{1}{2}\|\mathbf{y}-\mathbf{D}\mathbf{z}\|_{2}^{2 }+\lambda\psi(\mathbf{z}), \tag{2}\]
where \(\psi:\mathbb{R}^{QM}\rightarrow\mathbb{R}_{+}\) is a chosen regularization function. The Lagrange-multiplier term \(\lambda>0\) provides a trade-off between satisfying observation consistency and obeying the prior-knowledge encoded by \(\psi\). A popular approach to solving (2) is the proximal-gradient method (PGM) [9], involving the _proximal-operator_ of \(\psi\), defined as
\[\mathbf{prox}_{\tau\psi}(\mathbf{v})\coloneqq\operatorname*{arg\,min}_{\mathbf{x}}\; \tau\psi(\mathbf{x})+\frac{1}{2}\|\mathbf{x}-\mathbf{v}\|_{2}^{2},\quad\tau>0. \tag{3}\]
PGM can be understood as a fixed point iteration involving the iterative application of a gradient-descent step on the \(\ell_{2}\) term of (2) followed by application of the proximal operator of \(\psi\),
\[\mathbf{z}^{(k+1)}=\mathbf{prox}_{\tau\psi}(\mathbf{z}^{(k)}-\eta\mathbf{D}^{T}(\mathbf{D}\mathbf{z }^{(k)}-\mathbf{y})), \tag{4}\]
where \(\tau=\eta\lambda\), and \(\eta>0\) is a step-size parameter.
When \(\psi\) is the sparsity-promoting \(\ell_{1}\)-norm, the proximal operator is given in closed-form by element-wise soft-thresholding,
\[\mathrm{ST}_{\tau}(\mathbf{z})=\mathbf{z}\circ\left(1-\frac{\tau}{|\mathbf{z}|}\right)_{+}, \tag{5}\]
where \((\cdot)_{+}\) denotes projection onto the positive orthant \(\mathbb{R}_{+}\). The resulting PGM iterations are commonly referred to as the Iterative Soft-Thresholding Algorithm (ISTA) [9].
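For concreteness, a minimal NumPy sketch of ISTA, i.e. the PGM iterations (4) with the soft-thresholding prox (5), is given below; a dense dictionary matrix `D` is assumed purely for illustration (the dictionaries considered in this work are convolutional).

```python
import numpy as np

def soft_threshold(z, tau):
    # Eq. (5): prox of tau*||.||_1, equivalently z ∘ (1 - tau/|z|)_+
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(y, D, lam, n_iter=100):
    # PGM iterations (4) for BPDN (2); step-size eta <= 1/||D||_2^2
    eta = 1.0 / np.linalg.norm(D, 2) ** 2
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z - eta * D.T @ (D @ z - y), eta * lam)
    return z
```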
More sophisticated priors (\(\psi\)) can be used to obtain better estimates of our desired ground-truth image by exploiting correlations between "related" image-pixels. One such prior is _group-sparsity_,
\[\psi(\mathbf{z})=\sum_{\begin{subarray}{c}m=1\\ i=1\end{subarray}}^{M,Q}\sqrt{\sum_{j=1}^{Q}\Gamma_{ij}\mathbf{z}_{m}[j]^{2}}=\|\sqrt{(\mathbf{I}_{M}\otimes\mathbf{\Gamma})\mathbf{z}^{2}}\|_{1}, \tag{6}\]
where \(\mathbf{\Gamma}\in\mathbb{R}_{+}^{Q\times Q}\) is a row-normalized adjacency matrix (i.e. \(\|\mathbf{\Gamma}_{i:}\|_{1}=1\ \forall\,i\)), and \(\cdot^{2}\) and \(\sqrt{\cdot}\) are taken element-wise. Group-sparse regularization may be understood as encouraging similar latent-pixels to share the same channel-wise sparsity pattern, and has been shown to improve denoising performance under classical patch-based sparse coding methods [10], as well as in recent interpretably constructed DNNs [2].
In general, the group-sparse regularizer (6) does not have a closed form proximal-operator. Motivated by the operator proposed in [2], we propose an approximate solution, group-thresholding,
\[\mathrm{GT}_{\tau}(\mathbf{z};\,\mathbf{\Gamma})=\mathbf{z}\circ\left(1-\frac{\tau}{\sqrt{ (\mathbf{I}_{M}\otimes\mathbf{\Gamma})\mathbf{z}^{2}}}\right)_{+}. \tag{7}\]
Note that the operator proposed in Lecouat et al. [2] is equivalent to (7) when the adjacency matrix is row-normalized. This approximate solution has the desirable property of reducing to element-wise soft-thresholding (5) when \(\mathbf{\Gamma}\) is the identity matrix.
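A minimal NumPy sketch of the group-thresholding operator (7) follows, assuming the codes are arranged as an \((M,Q)\) array so that \((\mathbf{I}_{M}\otimes\mathbf{\Gamma})\) amounts to applying \(\mathbf{\Gamma}\) to each channel's squared codes.

```python
import numpy as np

def group_threshold(z, tau, Gamma):
    # Eq. (7): z has shape (M, Q); Gamma is a row-normalized (Q, Q)
    # adjacency. (I_M ⊗ Γ)z² applies Gamma to each channel's squared codes:
    # xi[m, i] = sum_j Gamma[i, j] * z[m, j]**2.
    xi = np.sqrt(z**2 @ Gamma.T) + 1e-12   # eps avoids division by zero
    return z * np.maximum(1.0 - tau / xi, 0.0)
```

Setting `Gamma = np.eye(Q)` recovers the element-wise soft-thresholding (5), matching the property noted above.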
The BPDN (2) problem can be made more expressive by opting to learn an optimal dictionary from a dataset of noisy images \(\mathcal{D}=\{\mathbf{y}\}\). We express the (convolutional) dictionary learning problem as,
\[\underset{\{\mathbf{z}\},\mathbf{D}\in\mathcal{C}}{\text{minimize}}\;\;\sum_{\mathbf{y}\in \mathcal{D}}\frac{1}{2}\|\mathbf{y}-\mathbf{D}\mathbf{z}\|_{2}^{2}+\lambda\psi(\mathbf{z}), \tag{8}\]
\begin{table}
\begin{tabular}{|l|l|} \hline \hline
\(\mathbf{x}\in\mathbb{R}^{NC}\) & a vector-valued image with \(N=N_{1}\times N_{2}\) pixels and vectorized channels, \(\mathbf{x}=[\mathbf{x}_{1}^{T},\cdots,\mathbf{x}_{C}^{T}]^{T}\). \\ \hline
\(\mathbf{x}_{c}\in\mathbb{R}^{N}\) & the \(c\)-th subband/feature-map/channel of \(\mathbf{x}\). \\ \hline
\(\mathbf{x}[n]\in\mathbb{R}^{C}\) & the \(n\)-th pixel of \(\mathbf{x}\), \(n\in[1,N]\). \\ \hline
\(\mathbf{x}_{c}[n]\in\mathbb{R}\) & the \(n\)-th pixel of the \(c\)-th channel of \(\mathbf{x}\). \\ \hline
\(\vec{n}\in[1,N_{1}]\times[1,N_{2}]\) & the spatial coordinates of the \(n\)-th pixel of \(\mathbf{x}\). \\ \hline
\(\mathbf{x}\circ\mathbf{y}\) & the element-wise product of two vectors. \\ \hline
\(\mathbf{D}\in\mathbb{R}^{NC\times QM}\) & a 2D \(M\) to \(C\) channel synthesis convolution operator with stride \(s_{c}\), where \(Q=N/s_{c}^{2}\). \\ \hline
\(\mathbf{D}^{T}\in\mathbb{R}^{QM\times NC}\) & a 2D \(C\) to \(M\) channel analysis convolution operator with stride \(s_{c}\), where \(Q=N/s_{c}^{2}\). \\ \hline
\(\mathbf{U}\in\mathbb{R}^{Q\times N}\) & a \(Q\times N\) matrix with elements \(\mathbf{U}_{ij}\in\mathbb{R}\). \\ \hline
\(\mathbf{U}_{i:}\in\mathbb{R}^{N}\), \(\mathbf{U}_{:j}\in\mathbb{R}^{Q}\) & the \(i\)-th row, \(j\)-th column of matrix \(\mathbf{U}\). \\ \hline
\(\mathbf{U}\otimes\mathbf{V}\in\mathbb{R}^{QM\times NC}\) & Kronecker product of \(\mathbf{U}\in\mathbb{R}^{Q\times N}\) and \(\mathbf{V}\in\mathbb{R}^{M\times C}\), i.e. the block matrix with blocks \(\mathbf{V}\) scaled by \(\mathbf{U}_{ij}\) \(\forall\,i,j\). \\ \hline
\(\mathbf{I}_{N}\in\mathbb{R}^{N\times N}\) & the \(N\) by \(N\) identity matrix. \\ \hline
\(\mathbf{y}=(\mathbf{I}_{C}\otimes\mathbf{U})\mathbf{x}\) & the matrix \(\mathbf{U}\) applied channel-wise, i.e. \(\mathbf{y}_{c}=\mathbf{U}\mathbf{x}_{c}\in\mathbb{R}^{Q}\) \(\forall\,1\leq c\leq C\). \\ \hline
\(\mathbf{y}=\overline{\mathbf{V}}\mathbf{x}=(\mathbf{V}\otimes\mathbf{I}_{N})\mathbf{x}\) & the matrix \(\mathbf{V}\) applied pixel-wise, i.e. \(\mathbf{y}[n]=\mathbf{V}\mathbf{x}[n]\in\mathbb{R}^{M}\) \(\forall\,1\leq n\leq N\). \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Notation
where the constraint set \(\mathcal{C}=\{\mathbf{D}\,:\,\|\mathbf{D}_{:j}\|_{2}^{2}\leq 1\ \forall\,j\}\) ensures that the regularization term is not rendered useless by an arbitrary scaling of the latent coefficients. Solving (8) generally involves alternating sparse-pursuit (e.g., (4)) and a dictionary update with fixed sparse-codes (e.g., projected gradient descent) [11].
### _Unrolled and Dictionary Learning Networks_
Approaches in [12, 13, 14] explore the construction of DNNs as unrolled proximal gradient descent machines with proximal-operators that are implemented by a black-box CNN, learned end-to-end. Although these methods contribute to more principled DNN architecture design in image-processing, their use of black-box neural networks, such as UNets [15] and ResNets [16], ultimately side-step the goal of full interpretability. In contrast, our previous work CDLNet [1] introduces a CNN as a direct parameterization of convolutional PGM (4) with an \(\ell_{1}\) prior, with layers defined as,
\[\mathbf{z}^{(0)}=\mathbf{0},\quad\text{for}\quad k=0,1,\ldots,K-1, \tag{9}\] \[\mathbf{z}^{(k+1)}=\mathrm{ST}_{\mathbf{\tau}^{(k)}}\left(\mathbf{z}^{(k)}- \mathbf{A}^{(k)}{}^{T}(\mathbf{B}^{(k)}\mathbf{z}^{(k)}-\mathbf{y})\right),\] \[\mathbf{\tau}^{(k)}=\mathbf{\tau}_{0}^{(k)}+\hat{\sigma}\mathbf{\tau}_{1}^{( k)},\quad\hat{\mathbf{x}}=\mathbf{D}\mathbf{z}^{(K)}.\]
Parameters \(\Theta=\{\mathbf{D},\,\{\mathbf{A}^{(k)},\,\mathbf{B}^{(k)},\,\mathbf{\tau}_{0}^{(k)},\,\mathbf{\tau}_{1}^{(k)}\}_{k=0}^{K-1}\}\) are optimized by back-propagation of a supervised or unsupervised loss function. In this manuscript, we extend the formulation and direct parameterization of CDLNet by introducing a novel implementation of the group-sparsity prior, embodied in the proposed GroupCDL architecture (see Section III). We also show that the noise-adaptive thresholding of CDLNet, derived from BPDN (2), extends to GroupCDL.
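As a rough sketch (distinct from the authors' Julia implementation), a single layer of (9) may be written with standard strided convolutions in PyTorch; the filter shapes and padding choices below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cdlnet_layer(z, y, A_k, B_k, tau_k):
    # One layer of (9). z: (B, M, Q1, Q2) codes, y: (B, C, N1, N2) image.
    # A_k, B_k: (M, C, 7, 7) filter banks; stride 2 with padding chosen so
    # that spatial sizes round-trip exactly (illustrative shapes).
    Bz = F.conv_transpose2d(z, B_k, stride=2, padding=3, output_padding=1)
    r = F.conv2d(Bz - y, A_k, stride=2, padding=3)   # A^(k)T (B^(k) z - y)
    u = z - r
    return torch.sign(u) * torch.relu(u.abs() - tau_k)  # ST_tau, Eq. (5)
```

Here `tau_k` would be a `(1, M, 1, 1)` tensor computed as \(\mathbf{\tau}_{0}^{(k)}+\hat{\sigma}\mathbf{\tau}_{1}^{(k)}\).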
Zheng et al. [17] propose a DNN architecture based on a classical dictionary learning formulation of image denoising. However, this network heavily employs black-box models such as UNets [15] and multi-layer perceptrons (MLPs). Our proposed method differentiates itself by using a direct parameterization of variables present in the classical proximal gradient method (4) with a group-sparsity regularizer (6). Furthermore, Zheng et al.'s experimental setup does not match the bulk of the existing image denoising literature (by training on a larger set of images), and their network exists in the ultra-high parameter count regime (\(\approx 17\) M), making a fair comparison beyond the scope of this paper.
Directly parameterized dictionary learning networks [1, 2, 3, 4, 5, 18] have gained some popularity in recent years due to their simple design and strong structural similarities to popular ReLU-activation DNNs. This connection was first established in the seminal work of Gregor et al. [19] in the form of the Learned Iterative Shrinkage Thresholding Algorithm (LISTA). Here, we build upon this literature by proposing a novel nonlocal operator (11) for the convolutional dictionary learning formulation of a DNN, derived from a group-sparsity prior (6). We also demonstrate that such a network can compete well with (and sometimes outperform) state-of-the-art methods, without sacrificing interpretability.
Lecouat et al. [2] propose a nonlocal CNN derived from a patch-based dictionary learning algorithm with a group-sparsity prior, dubbed GroupSC. It is well established that the independent processing of image patches and subsequent overlap and averaging (patch-processing) is inherently suboptimal to the convolutional model, due to the lack of consensus between pixels in overlapping patch regions [4]. Our method is in part inspired by GroupSC, but proposes a novel version of the group-sparsity mechanism adapted to the convolutional model and fast application of the network at inference.
### _Nonlocal Networks_
The nonlocal self-similarity prior in image-restoration DNNs is commonly formulated with OW-NLSS to manage its quadratic complexity [2, 6]. The overlap is especially important to ensure that artifacts do not occur on local-window boundaries. Despite such networks often being formulated as CNNs, their window-based inference ultimately diminishes the powerful shift-invariance prior and increases computational cost due to additional processing of overlapping regions (see Section III-B).
To reduce computational burden and correctly account for dependencies between neighboring local windows, we propose a novel sliding-window NLSS, enabled by sparse matrix arithmetic. Recent works have proposed other so-called "sparse attention" mechanisms, however, they have either not been in the context of image restoration [20], not employed a sliding-window [21], or have employed a complicated hashing algorithm to exploit extremely long-range dependencies [22].
Fig. 1: The GroupCDL Architecture. The network begins with no prior of group-sparsity (\(\mathbf{\Gamma}^{(0)}=\mathbf{I}\)). In the second layer, and each subsequent \(\Delta K\) layers, the adjacency matrix \(\mathbf{\Gamma}^{(k)}\) is updated by a row-normalized NLSS computation on the latent representation \(\mathbf{z}^{(k)}\). NLSS is computed with dense arithmetic (on image patches) during training, and with sparse arithmetic (on the entire image) during inference.
## III Proposed Method
We consider the problem of image restoration under the AWGN model (1), though, in principle, the methods presented may be adapted to other degradation models with relative ease. Besides being a fundamental building block of many inverse-problem approaches, AWGN is a popular and successful model for camera noise after white-balance and gamma-correction [23].
### _The GroupCDL Architecture_
We propose a neural network architecture as a direct parameterization of PGM (4) on the convolutional BPDN problem with a group-sparsity prior, dubbed GroupCDL. The GroupCDL architecture is equivalent to replacing the soft-thresholding of the CDLNet [1] architecture (9) with group-thresholding w.r.t. a row-normalized adjacency matrix \(\boldsymbol{\Gamma}\), as described in Algorithm 1 and shown in Figure 1. Here, noise-adaptive thresholds are computed using parameters \(\boldsymbol{\tau}_{0},\,\boldsymbol{\tau}_{1}\in\mathbb{R}_{+}^{M}\), \(\boldsymbol{A}^{(k)T},\boldsymbol{B}^{(k)}\) are 2D (\(C\) to \(M\) channel, stride-\(s_{c}\)) analysis and (\(M\) to \(C\) channel, stride-\(s_{c}\)) synthesis convolutions, respectively, and \(\boldsymbol{D}\) is our 2D (\(M\) to \(C\) channel, stride-\(s_{c}\)) synthesis convolutional dictionary. For an input noisy image \(\boldsymbol{y}\in\mathbb{R}^{NC}\), our latent representation is thus of the form \(\boldsymbol{z}\in\mathbb{R}^{QM}\), where \(Q=N/s_{c}^{2}\).
The adjacency matrix of the group-sparsity prior (\(\boldsymbol{\Gamma}\in\mathbb{R}_{+}^{Q\times Q}\)) encodes similarity between latent subband pixels \(\boldsymbol{z}[i]\), \(\boldsymbol{z}[j]\)\(\forall\,i,\,j\). To manage computational complexity while staying true to the convolutional inductive bias of the network, we form this adjacency using a local sliding-window of size \(W\times W\). Motivated by the nonlocal similarity computations of black-box networks [6], we compute the similarity after transforming \(\boldsymbol{z}[i]\) and \(\boldsymbol{z}[j]\) along the channel dimension. Specifically, we first compute similarities at the \(k\)-th layer of the network as,
\[\boldsymbol{S}_{ij}^{(k)}=\begin{cases}-\|\boldsymbol{W}_{\theta}\boldsymbol{z }^{(k)}[i]-\boldsymbol{W}_{\phi}\boldsymbol{z}^{(k)}[j]\|_{2}^{2},&\|\vec{i}- \vec{j}\|_{\infty}\leq W\\ -\infty,&\text{otherwise}\end{cases} \tag{10}\]
where \(\boldsymbol{W}_{\theta},\boldsymbol{W}_{\phi}\in\mathbb{R}^{M_{h}\times M}\) are learned pixel-wise transforms shared across all layers. That is, similarities \(\boldsymbol{S}_{ij}^{(k)}\) are only computed for spatial locations \(i\) and \(j\) within a \(W\times W\) window centered on \(i\). The similarity matrix is then normalized via a row-wise softmax operation. To reduce computational complexity, we only compute similarity every \(\Delta K\) layers. We employ a convex combination of this normalized similarity and the adjacency of the previous layer (\(\boldsymbol{\Gamma}^{(k-1)}\)) with a learned parameter \(\gamma\in[0,1]\) (see Alg. 1), to ensure smooth updates. We consider circular boundary conditions when forming nonlocal windows in (10), resulting in a block-circulant with circulant block sparsity pattern for \(\boldsymbol{S}\) and \(\boldsymbol{\Gamma}\), as depicted in Figure 2.
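An unoptimized sketch of this windowed similarity (10) and its row-wise softmax is given below, storing the adjacency as a SciPy sparse matrix; the explicit loops are for clarity only, and the circular indexing assumes the window side length is smaller than either image dimension.

```python
import numpy as np
import scipy.sparse as sp

def windowed_adjacency(z, W_theta, W_phi, win):
    # Sketch of Eq. (10) + row-softmax. z: (M, H, W_img) latent codes;
    # win: odd window side length (assumed smaller than either image side).
    # Circular boundaries yield the BCCB sparsity pattern of Fig. 2.
    M, H, Wd = z.shape
    zq = np.tensordot(W_theta, z, axes=(1, 0)).reshape(-1, H * Wd)
    zk = np.tensordot(W_phi, z, axes=(1, 0)).reshape(-1, H * Wd)
    Q, r = H * Wd, win // 2
    rows, cols, vals = [], [], []
    for i in range(Q):
        iy, ix = divmod(i, Wd)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                j = ((iy + dy) % H) * Wd + (ix + dx) % Wd  # circular index
                rows.append(i)
                cols.append(j)
                vals.append(-np.sum((zq[:, i] - zk[:, j]) ** 2))
    Gamma = sp.csr_matrix((vals, (rows, cols)), shape=(Q, Q))
    for i in range(Q):  # row-wise softmax over stored (in-window) entries
        s = Gamma.data[Gamma.indptr[i]:Gamma.indptr[i + 1]]
        e = np.exp(s - s.max())
        Gamma.data[Gamma.indptr[i]:Gamma.indptr[i + 1]] = e / e.sum()
    return Gamma
```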
```
Input: noisy image \(\boldsymbol{y}\), estimated noise-level \(\hat{\sigma}\)
Parameters: \(\Theta=\{\gamma,\,\boldsymbol{W}_{\{\theta,\phi,\alpha,\beta\}},\,\boldsymbol{D},\,\{\boldsymbol{A}^{(k)},\,\boldsymbol{B}^{(k)},\,\boldsymbol{\tau}_{\{0,1\}}^{(k)}\}_{k=0}^{K-1}\}\)
Preprocess: \(\tilde{\boldsymbol{y}}=\boldsymbol{y}-\mathrm{mean}(\boldsymbol{y})\)
Initialize: \(\boldsymbol{z}^{(0)}=\boldsymbol{0}\), \(\boldsymbol{\Gamma}^{(0)}=\boldsymbol{I}\), \(\boldsymbol{\tau}^{(k)}=\boldsymbol{\tau}_{0}^{(k)}+\hat{\sigma}\boldsymbol{\tau}_{1}^{(k)}\ \forall\,k\)
for \(k=0,1,\dots,K-1\) do
    // Update row-normalized adjacency
    if \(k=1\) then
        \(\boldsymbol{\Gamma}^{(1)}=\mathrm{softmax}(\boldsymbol{S}^{(1)})\)   // Eq. (10)
    else if \(\mathrm{mod}(k+1,\,\Delta K)=0\) then
        \(\boldsymbol{\Gamma}^{(k)}=\gamma\,\mathrm{softmax}(\boldsymbol{S}^{(k)})+(1-\gamma)\,\boldsymbol{\Gamma}^{(k-1)}\)   // Eq. (10)
    else
        \(\boldsymbol{\Gamma}^{(k)}=\boldsymbol{\Gamma}^{(k-1)}\)
    \(\boldsymbol{r}=\boldsymbol{A}^{(k)T}(\boldsymbol{B}^{(k)}\boldsymbol{z}^{(k)}-\tilde{\boldsymbol{y}})\)
    \(\boldsymbol{z}^{(k+1)}=\mathrm{GT}_{\boldsymbol{\tau}^{(k)}}(\boldsymbol{z}^{(k)}-\boldsymbol{r};\,\boldsymbol{\Gamma}^{(k)})\)   // Eq. (11)
Output: \(\hat{\boldsymbol{x}}=\boldsymbol{D}\boldsymbol{z}^{(K)}+\mathrm{mean}(\boldsymbol{y})\)
```
**Algorithm 1** Group-sparse Convolutional Dictionary Learning Network (GroupCDL) Forward Pass
Mimicking the use of subband transforms in the similarity computation (10), we introduce two additional subband transforms, \(\boldsymbol{W}_{\alpha}\in\mathbb{R}^{M\times M_{h}},\boldsymbol{W}_{\beta} \in\mathbb{R}_{+}^{M\times M_{h}}\), into the group-thresholding operation,
\[\begin{split}\mathrm{GT}_{\boldsymbol{\tau}}(\boldsymbol{z};\,\boldsymbol{\Gamma})&=\boldsymbol{z}\circ\left(1-\frac{\boldsymbol{\tau}}{\boldsymbol{\xi}}\right)_{+},\\ \boldsymbol{\xi}&=\overline{\boldsymbol{W}_{\beta}}\sqrt{(\boldsymbol{I}_{M_{h}}\otimes\boldsymbol{\Gamma})(\overline{\boldsymbol{W}_{\alpha}^{T}}\boldsymbol{z})^{2}},\end{split}\tag{11}\]
where \(\boldsymbol{\xi}\in\mathbb{R}_{+}^{QM}\) contributes to the image-adaptive spatially varying threshold. Here, \(\overline{\boldsymbol{W}}\) refers to a pixel-wise application of a matrix \(\boldsymbol{W}\) (see Table I). In contrast to (7), \(\boldsymbol{W}_{\alpha}\) allows the adjacency-weighted energy of the latent representation to be computed in a compressed subband domain (by setting \(M_{h}\ll M\)). Then, \(\boldsymbol{W}_{\beta}\) maps this energy back to the uncompressed subband domain (\(M\) channels), pixel-wise. In Section IV-E, we empirically show that the use of a compressed subband domain has a positive impact on denoising performance and an even greater impact on reducing inference time.
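A minimal NumPy sketch of (11), again arranging the codes as an \((M,Q)\) array; the epsilon guarding the division is an implementation assumption.

```python
import numpy as np

def group_threshold_learned(z, tau, Gamma, W_alpha, W_beta):
    # Eq. (11): z: (M, Q) codes; Gamma: row-normalized (Q, Q) adjacency
    # (dense or scipy-sparse); W_alpha, W_beta: (M, M_h), W_beta >= 0.
    v = (W_alpha.T @ z) ** 2                # compress channels: (M_h, Q)
    xi = W_beta @ np.sqrt((Gamma @ v.T).T)  # weight in M_h domain, map to M
    return z * np.maximum(1.0 - tau / (xi + 1e-12), 0.0)
```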
### _Sliding-window vs. Patch based Self-Attention_
The SW-NLSS employed by GroupCDL (10) is preferable to the independent OW-NLSS employed by GroupSC [2] and black-box DNNs [6, 24], because it naturally encourages agreement on overlapping regions and centers the nonlocal windows on each pixel. As shown in Figure 2, OW-NLSS additionally incurs computational overhead by processing overlapping pixels multiple times. This burden inherent to OW-NLSS can be expressed in terms of the image dimensions \(N=N_{1}\times N_{2}\), window-size \(W\times W\), and window-stride \(s_{w}\times s_{w}\). We express the burden factor as a ratio of the number of pixels processed by a single OW-NLSS layer over an SW-NLSS layer,
\[\frac{N_{1}/s_{w}\times N_{2}/s_{w}\times W^{2}}{N_{1}\times N_{2}}=\frac{W^{2} }{s_{w}^{2}}. \tag{12}\]
Common nonlocal window sizes, \(45\times 45\), and window-strides, \(7\times 7\), such as used by NLRN [6], make this burden factor 41 times the computational complexity of an equivalent SW-NLSS. GroupCDL's use of strided convolution may add an additional \(s_{c}^{2}\times\) computational benefit compared to common NLSS implementations by computing similarities over a reduced spatial dimension \(Q=N/s_{c}^{2}\). We further explore the relation between computation time and denoising performance of these two NLSS inference methods in Section IV-D.
### _Group-Thresholding vs. Black-box Attention_
Nonlocal self-similarity is used across domains in DNNs, from transformer architectures [21] to nonlocal image restoration networks [6, 24]. The underlying formula behind these methods is most commonly dot-product attention (DPA), given below,
\[\mathbf{z}^{(k+1)}=(\mathbf{I}_{M_{\mathrm{out}}}\otimes\mathbf{\Gamma})\overline{\mathbf{W}_{v}}\,\mathbf{z}^{(k)}, \tag{13}\] \[\mathbf{\Gamma}=\mathrm{softmax}(\mathbf{S}^{(k)}), \tag{14}\] \[\mathbf{S}^{(k)}_{ij}=\frac{\mathbf{z}^{(k)}[j]^{T}\mathbf{W}_{q}^{T}\mathbf{W}_{k}\mathbf{z}^{(k)}[i]}{\sqrt{M_{h}}}, \tag{15}\]
where \(\mathbf{z}^{(k)}\in\mathbb{R}^{NM_{\text{in}}}\), \(\mathbf{W}_{q},\mathbf{W}_{k},\in\mathbb{R}^{M_{h}\times M_{\text{in}}}\), \(\mathbf{W}_{v}\in\mathbb{R}^{M_{\text{out}}\times M_{\text{in}}}\), and \(\mathbf{z}^{(k+1)}\in\mathbb{R}^{NM_{\text{out}}}\). Learned matrices \(\mathbf{W}_{q},\mathbf{W}_{k},\mathbf{W}_{v}\) are understood to transform the input signal \(\mathbf{z}^{(k)}\) to so-called query, key, and value signals.
Both DPA and the proposed GT (11) make use of a normalized adjacency matrix (\(\mathbf{\Gamma}\)), computed in an asymmetric feature domain. Both use this adjacency to weight the current spatial features, identically over channels. However, in DPA, the weighting directly results in the layer's output (via matrix multiplication, see (13)), whereas in GT, this weighting informs a spatially adaptive soft-thresholding.
The proposed GT's decoupling of adjacency application and output dimension is key in allowing group-thresholding to be computationally efficient, as the adjacency matrix-vector multiplication can be performed in a compressed subband domain. In contrast, DPA operating in a compressed feature domain (\(M_{\mathrm{out}}\ll M_{\mathrm{in}}\)) would harm the capacity of the network's latent representation. In Section IV-E we show empirical evidence for favoring the negative-norm similarity in GT over the dot-product similarity of DPA.
## IV Experimental Results
### _Experimental Setup_
**Architecture**: We denote the network detailed in Algorithm 1 as GroupCDL. GroupCDL and CDLNet are trained with noise-adaptive thresholds (\(\mathbf{\tau}^{(k)}=\mathbf{\tau}^{(k)}_{0}+\hat{\sigma}\mathbf{\tau}^{(k)}_{1}\)) unless specified using the -B suffix, indicating the models are noise-blind (\(\mathbf{\tau}^{(k)}=\mathbf{\tau}^{(k)}_{0}\)). The hyperparameters for these architectures are given in Table II, unless otherwise specified.
**Dataset and Training**: Let \(f_{\Theta}\) denote the GroupCDL DNN as a function of parameters \(\Theta\). Let \(\mathcal{D}=\{(\mathbf{y},\sigma,\mathbf{x})\}\) denote a dataset of noisy and ground-truth natural image pairs, with noise-level \(\sigma\). Grayscale (and color) GroupCDL models are trained on the (C)BSD432 [25] dataset with the supervised mean squared error (MSE) loss,
\[\underset{\begin{subarray}{c}\mathbf{W}_{\theta},\,\mathbf{W}_{\phi},\,\mathbf{W}_{\alpha},\,\mathbf{W}_{\beta}\geq 0,\ \gamma\in[0,1],\\ \mathbf{D}\in\mathcal{C},\ \{\mathbf{\tau}^{(k)}\geq 0\}_{k=0}^{K-1},\\ \{\mathbf{A}^{(k)}\in\mathcal{C},\ \mathbf{B}^{(k)}\in\mathcal{C}\}_{k=0}^{K-1}\end{subarray}}{\text{minimize}}\ \sum_{(\mathbf{y},\sigma,\mathbf{x})\in\mathcal{D}}\|f_{\Theta}(\mathbf{y},\sigma)-\mathbf{x}\|_{2}^{2}, \tag{16}\]
where \(\mathcal{C}=\{\mathbf{D}:\|\mathbf{D}_{:j}\|_{2}^{2}\leq 1\ \forall\,j\}\). We use the Adam optimizer with default parameters [26], and project the network parameters onto their constraint sets after each gradient step. The dataset is generated online with random crops, rotations, flips, and AWGN of noise-level \(\sigma\) sampled uniformly within \(\sigma^{\mathrm{train}}\) for each mini-batch element. All models are trained with the same hyperparameters given in [1]; however, GroupCDL models are trained for 270k iterations with a batch-size of 32.
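A hypothetical PyTorch sketch of the constraint projection described above, rescaling each dictionary filter to at most unit \(\ell_{2}\) norm after a gradient step; the `model` and attribute names are illustrative only.

```python
import torch

def project_dictionary(D):
    # Project a filter bank D of shape (M, C, k, k) onto the constraint set
    # C by rescaling any filter whose l2 norm exceeds one.
    norms = D.flatten(1).norm(dim=1).clamp(min=1.0)
    return D / norms.view(-1, 1, 1, 1)

# Hypothetical training step (names `model`, `opt` are illustrative):
#   loss = ((model(y, sigma) - x) ** 2).mean()
#   loss.backward(); opt.step(); opt.zero_grad()
#   with torch.no_grad():
#       model.D.copy_(project_dictionary(model.D))
```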
Test and validation performance is evaluated on several datasets. The dataset name, along with (arithmetic) average dimensions, are provided to better understand reported inference timings: Set12 (362 \(\times\) 362), CBSD68 [25] (481 \(\times\) 321), Urban100 [27] (1030 \(\times\) 751), and NikoSet10\({}^{2}\) (1038 \(\times\) 779).
\begin{table}
\begin{tabular}{c c c c c c} \hline Name & Task & \(K\) & \(M\) & \(M_{h}\) & \(W\) \\ \hline CDLNet(-S,-B) & Gray & 30 & 169 & - & - \\ GroupCDL(-S,-B) & Gray & 30 & 169 & 64 & 35 \\ CDLNet(-S,-B) & Color & 24 & 96 & - & - \\ GroupCDL(-S,-B) & Color & 24 & 96 & 48 & 35 \\ \hline \end{tabular}
\end{table} TABLE II: Architectures of the GroupCDL models, CDLNet models, and variants presented in the experimental section. We use \(C=3\) and \(C=1\) for color and grayscale denoising networks, respectively. A filter size of \(7\times 7\) is used for all models. A conv-stride of \(s_{c}=2\) and an adjacency update period of \(\Delta K=5\) are used unless otherwise specified.
Fig. 2: (a) In OW-NLSS, the input image is divided into overlapping windows (of size \(W\times W\) and with window-stride \(s_{w}\times s_{w}\)), processed independently via a DNN with dense self-attention. The denoised windows are then placed in their original positions and averaged on their overlaps. (b) In the proposed SW-NLSS, the entire image is processed in a single forward pass, made possible by sparse matrix arithmetic. The adjacency matrix has a block-circulant with circulant blocks (BCCB) sparsity pattern, where the number of non-zeros in each row/column is at most \(W^{2}\). The adjacency matrix is computed in the subband domain, with spatial dimension \(Q=N/s_{c}^{2}\), where \(s_{c}\) is the convolution stride. Hence, the effective image-domain window-size is \(s_{c}W\times s_{c}W\).
**Training Initialization**: CDLNet models are initialized as ISTA with \(\mathbf{\tau}_{0}=10^{-2},\ \mathbf{\tau}_{1}=0\), and a base dictionary \(\mathbf{D}\) that has been spectrally normalized. Details are given in [1]. GroupCDL models are initialized with a trained CDLNet model. Pixel-wise transforms \(\mathbf{W}_{\{\theta,\phi,\alpha\}}\) are initialized with the same weights drawn from a Glorot Uniform distribution centered at zero [28], and \(\mathbf{W}_{\beta}\) is drawn from a similar Glorot Uniform distribution, tailored to the positive orthant. We initialize parameter \(\gamma=0.8\).
**Group-Thresholding Training vs. Inference**: A GroupCDL model, with nonlocal window-size \(W\) and conv-stride \(s_{c}\), is trained using image crops of dimension \(s_{c}W\times s_{c}W\) such that a single nonlocal window is used during training. Dense arithmetic is used in construction and application of the normalized adjacency matrix \(\mathbf{\Gamma}^{(k)}\) throughout training.
On inference of a noisy image \(\mathbf{y}\in\mathbb{R}^{NC}\), with latent representation \(\mathbf{z}\in\mathbb{R}^{QM}\) (\(Q=N/s_{c}^{2}\)), the adjacency matrix \(\mathbf{\Gamma}\in\mathbb{R}_{+}^{Q\times Q}\) is constructed with a block-circulant with circulant blocks (BCCB) sparsity pattern, and sparse matrix arithmetic is used. The adjacency matrix requires an allocation of \(Q\times W^{2}\times\texttt{sizeof(float)}\) bytes. For a high resolution image of size \(1024\times 1024\) (such as images in the Urban100 dataset [27]) and a conv-stride of \(2\), this results in roughly \(2.6\) GB of memory required to store \(\mathbf{\Gamma}^{(k)}\) and \(\mathbf{\Gamma}^{(k-1)}\), which is by and large the memory consumption of inference.
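This figure is straightforward to verify with a quick back-of-the-envelope computation:

```python
# Back-of-the-envelope check of the ~2.6 GB figure quoted above:
N, s_c, W = 1024 * 1024, 2, 35
Q = N // s_c**2                   # latent spatial dimension
bytes_per_mat = Q * W**2 * 4      # Q x W^2 stored values, float32
print(2 * bytes_per_mat / 1e9)    # Γ^(k) and Γ^(k-1): ≈ 2.57 GB
```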
**Hardware**: All models were trained on a single core Intel Xeon CPU with 2.90 GHz clock and a single NVIDIA A100 GPU. CDLNet pre-training takes roughly 6 hours, and GroupCDL training takes approximately an additional 36 hours. Note that training and inference can take place on GPUs with as low as 16 GB of memory. For code-base compatibility reasons, all inference timings in Tables III, IV were determined by running models (provided by the authors of the respective papers), and our trained GroupCDL/CDLNet models, on a single core Intel Xeon CPU with 2.90 GHz clock and a NVIDIA Quadro RTX-8000 GPU.
### _Single Noise-Level Performance_
In this section, we demonstrate the fast inference speed and competitive denoising performance of the proposed GroupCDL. All models are trained on a single noise-level and tested at the same noise-level (\(\sigma^{\rm train}=\sigma^{\rm test}\)). We compare GroupCDL to its fully convolutional counterpart (CDLNet), the non-learned nonlocal method BM3D [29, 30], popular local and nonlocal black-box CNNs [6, 31, 32], and a patch-processing dictionary learning based nonlocal DNN (GroupSC) [2].
Table III shows the grayscale denoising performance and inference speed of the aforementioned models across several common datasets. We include a learned parameter count as a crude measure of expressivity of the model. The group-sparsity prior of GroupCDL significantly increases denoising performance compared to the unstructured sparsity prior of CDLNet, though at a detriment to inference speed. We observe that GroupCDL has denoising performance superior to other dictionary learning based networks using a group-sparsity prior (GroupSC) and competitive performance with state-of-the-art black-box methods (GCDN, NLRN). Most notably, GroupCDL has the fastest inference runtime among nonlocal methods, with at least an order of magnitude difference. These timing differences between GroupCDL and NLRN (or GroupSC) correspond well with the analysis in Section III-B regarding the use of sliding-window sparse nonlocal processing vs. overlapping-window nonlocal processing.
Table IV shows the color image denoising performance and inference speed of the aforementioned classical benchmark [29, 30], local DNN [31], nonlocal patch-processing DNN [2], and black-box nonlocal DNN [24] against the proposed GroupCDL. We observe that GroupCDL outperforms the interpretable patch-processing dictionary learning DNN (GroupSC) and CNNs (DnCNN, CDLNet). GroupCDL performs competitively to the black-box nonlocal DNN (RNAN [24]) at a fraction of the learned parameter count and inference time.
Figures 3 and 4 highlight the qualitative differences between the previously mentioned methods. We observe that the group-sparsity prior of GroupCDL is instrumental in suppressing unwanted denoising artifacts (present in CDLNet's results), especially in constant image regions where edges are otherwise hallucinated (see Figure 4 (g) vs (j)). Note that GroupSC produces low spatial-frequency artifacts throughout the image, likely introduced by independent processing of image patches (see Figure 4 (c,h,m)). Further, GroupCDL's results appear qualitatively indistinguishable to those of state-of-the-art black-box models, at a fraction of the inference time.
### _Noise-Level Generalization_
In this section, we look at the denoising performance of trained denoising DNNs on inference of input images with noise-levels (\(\sigma^{\rm test}\)) outside their training noise-level range (\(\sigma^{\rm train}\)). Figure 5 shows the performance of grayscale image denoisers, trained over the range \(\sigma^{\rm train}=[20,30]\). The figure
Fig. 3: Visual comparison for grayscale denoising models (\(\sigma^{\rm test}=\sigma^{\rm train}=25\)) on the starfish test image. PSNR (dB)/\(100\times\)SSIM reported for each image.
shows, as also noted in [1, 3, 33], that black-box DNNs (such as DnCNN [31]) and dictionary learning DNNs without noise-adaptive thresholds exhibit a catastrophic failure on inference above their training noise-level range. A less striking but analogous failure is seen on inference below the training noise-level, where a mere plateau in performance is obtained as the denoising problem becomes easier.
In addition to the observations noted in [1], Figure 5 shows that the proposed novel group-thresholding scheme (11) is able to obtain near-perfect noise-level generalization (w.r.t. GroupCDL-S performance). This serves as empirical evidence for the interpretation of the unrolled network as performing some approximate/accelerated group-sparse BPDN, as the noise-adaptive thresholds (\(\mathbf{\tau}=\mathbf{\tau}_{0}+\hat{\sigma}\mathbf{\tau}_{1}\)) appear to correspond very well to their classical counterparts from which they are derived.
We further investigate the behavior of the proposed group-thresholding across noise-levels in Figure 6. The figure shows
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Params} & \multirow{2}{*}{time (s)} & \multicolumn{5}{c}{Noise-level (\(\sigma\))} \\ & & & 10 & 15 & 25 & 30 & 50 \\ \hline BM3D [29] & - & 0.019 & 34.56 & 33.49 & 30.68 & 28.05 & 27.36 \\ DnCNN [31] & 668k & 0.054 & 36.31 & 33.99 & 31.31 & - & 28.01 \\ CDLNet & 694k & 0.009 & 36.31 & 34.04 & 31.39 & 30.52 & 28.18 \\ GroupSC [2] & 119k & 40.81 & 36.40 & 34.11 & 31.44 & 30.58 & 28.05 \\ RNAN [24] & 8.96M & 1.92 & **36.60** & - & - & **30.73** & 28.35 \\ GroupCDL-S & 698k & 0.39 & 36.43 & **34.19** & **31.58** & 30.70 & **28.37** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Color Denoising performance, PSNR (dB), and GPU inference runtimes on CBSD68 [25]. All learned methods are trained on CBSD432 [25] (\(\sigma=\sigma^{\rm train}=\sigma^{\rm test}\)).
Fig. 4: Visual comparison of deep denoisers. Top and middle rows: color denoisers for \(\sigma^{\rm train}=\sigma^{\rm test}=50\). Bottom row: grayscale denoisers \(\sigma^{\rm train}=\sigma^{\rm test}=25\). PSNR/\(100\times\)SSIM shown in respective captions.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Noise \(\sigma\)} & BM3D [29] & DnCNN [31] & CDLNet [1] & GroupSC [2] & GCDN [32] & NLRN [6] & GroupCDL-S \\ & & - & 556k & 507k & 68k & 6M & 340k & 550k \\ \hline \multirow{3}{*}{Set12} & 15 & 32.37/89.52 & 32.86/90.31 & 32.87/90.43 & 32.85/90.63 & 33.14/90.72 & **33.16**/90.70 & 33.05/**90.73** \\ & 25 & 29.97/85.04 & 30.44/86.22 & 30.52/86.55 & 30.48/86.74 & 30.78/86.87 & **30.80**/86.89 & 30.75/**86.93** \\ & 50 & 26.72/76.76 & 27.18/78.29 & 27.42/79.41 & 27.14/77.97 & 27.60/79.57 & **27.64**/**90.80** & 27.63/**80.04** \\ time (s) & & 0.010 & 0.119 & 0.019 & 22.07 & 404.8 & 25.62 & 0.68 \\ \hline \multirow{3}{*}{BSD68 [25]} & 15 & 31.07/87.17 & 31.73/89.07 & 31.74/89.18 & 31.70/**89.63** & 31.83/89.33 & **31.88**/89.32 & 31.82/89.41 \\ & 25 & 28.57/80.13 & 29.23/82.78 & 29.26/83.06 & 29.20/83.36 & 29.35/83.32 & **29.41**/83.31 & 29.38/**83.51** \\ & 50 & 25.62/68.64 & 26.23/71.89 & 26.35/72.69 & 26.17/71.83 & 26.36/**37.89** & **26.47**/72.98 & **26.47**/73.32 \\ time (s) & & 0.011 & 0.039 & 0.02 & 23.63 & 539.7 & 26.66 & 0.65 \\ \hline \multirow{3}{*}{Urban100 [27]} & 15 & 32.35/92.20 & 32.68/92.55 & 32.59/92.85 & 32.72/93.08 & **33.47**/**93.58** & 33.42/93.48 & 33.07/93.40 \\ & 25 & 29.70/87.77 & 29.92/87.97 & 30.03/89.00 & 30.05/89.12 & **30.95**/**90.20** & 30.88/90.03 & 30.61/90.03 \\ & 50 & 25.95/77.91 & 26.28/78.74 & 26.66/81.11 & 26.43/80.02 & **27.41**/81.60 & 27.40/82.44 & 27.29/**83.05** \\ time (s) & & 0.030 & 0.096 & 0.090 & 93.33 & 1580 & 135.8 & 3.56 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Grayscale denoising performance (PSNR (dB)/ \(100\times\)SSIM) and GPU inference runtimes. All learned methods are trained on BSD432 [25] with an MSE loss function (\(\sigma=\sigma^{\rm train}=\sigma^{\rm test}\)). Learned parameter counts are displayed below the method names.
a single input nonlocal window \(\mathbf{y}\) across noise-levels \(\sigma\) and the computed adjacency values of the three types of GroupCDL models (GroupCDL-S, GroupCDL-B, GroupCDL).
In agreement with the catastrophic failure of the noise-level blind GroupCDL-B model of Figure 5, the adjacency visualizations of GroupCDL-B in Figure 6 show a catastrophic failure in the similarity computations of the network above \(\sigma^{\rm train}\), as no structure is found. We observe a similar pattern in the GroupCDL models (with noise-adaptive thresholds) to the visualizations of GroupCDL-S, however, the single noise-level models seem to have more variation in their computed similarities across noise-levels. This could suggest room for improvement in GroupCDL by parametrizing the similarity computations of (7) and (10) to also be noise-adaptive. However, this subtle dissimilarity may also be due to the fact that each row-element for the GroupCDL-S column in Figure 6 is from a different trained model with different weights (i.e. with similarities computed in different domains), whereas the visualizations from the GroupCDL column are all from the same model.
### _Sliding vs. Overlapping Window Nonlocal Processing_
From Figure 2, it is clear that the OW-NLSS strategy (used by NLRN [6] and GroupSC [2]) is able to achieve a denoising to speed trade-off by reducing the amount of overlap between windows, i.e. increasing window-stride (\(s_{w}\)). SW-NLSS can also achieve a similar performance-speed trade-off by instead reducing the nonlocal window-size (\(W\)) during inference.
Fig. 5: Noise-level generalization of different grayscale denoising networks tested on BSD68 [25]. GroupCDL-S is trained at \(\sigma^{\rm test}\) for each point on the graph. All other networks are trained on \(\sigma^{\rm train}=[20,30]\).
Fig. 6: Visualization of normalized adjacency \(\mathbf{\Gamma}_{\hat{i}:}^{(K)}\) for input nonlocal window \(\mathbf{y}\) at noise-levels \(\sigma=10,25,50\). Columns GroupCDL-S, GroupCDL-B, and GroupCDL visualize the adjacency of models trained under conditions of \(\sigma^{\rm train}=\sigma\), \(\sigma^{\rm train}\in[20,30]\) without noise-adaptive thresholds, and \(\sigma^{\rm train}\in[20,30]\) with noise-adaptive thresholds, respectively. Catastrophic failure of GroupCDL-B model is observed for \(\sigma>\sigma^{\rm train}\). The same noise-realization is used across models and noise-levels (with appropriate scaling).
Fig. 7: Inference-time vs. denoising-performance trade-off for GroupCDL. Circle, pentagon, and triangle markers are generated by the same trained GroupCDL model (\(W^{\rm train}=35\)) under different inference strategies (SW-NLSS w/ window-size \(W\), OW-NLSS w/ window-stride \(s_{w}\)). The square markers are each generated by a GroupCDL with a different training window-size \(W^{\rm train}\). Performance (a) PSNR, (b) \(100\times\)SSIM, is evaluated on BSD68 [25] with \(\sigma^{\rm train}=\sigma^{\rm test}=25\). OW-NLSS (serial,parallel) curves are generated via processing independent overlapping windows either sequentially or all at once, respectively. The same noise-realization for the dataset was used across all evaluations plotted. Note that \(W\) corresponds between SW curve markers vertically (the same inference time), and \(s_{w}\) corresponds between OW curve markers horizontally (the same PSNR/SSIM).
Figure 7 plots the denoising performance vs. inference time trade-off attainable by GroupCDL under the SW-NLSS strategy and the OW-NLSS strategy. The SW: \(W^{\rm train}=35\), and OW curves show a single GroupCDL model (trained with a nonlocal window-size \(W^{\rm train}=35\)) under SW-NLSS inference with varying nonlocal window-size \(W^{\rm test}\), and OW-NLSS inference with varying window-stride \(s_{w}\), respectively. Each point in the SW: \(W^{\rm train}=W^{\rm test}\) curve shows the performance of a GroupCDL model trained with a different non-local window-size and performing SW-NLSS inference with their respective training window-size.
First, we observe that SW-NLSS consistently outperforms OW-NLSS across the PSNR-speed and SSIM-speed trade-offs, in both parallel and serial window processing forms of OW-NLSS. This agrees well with the burden-factor analysis in Section III-B. The curves further highlight that competitive denoising performance in OW-NLSS inference is predicated on using a small window-stride, in order to compensate for the neglect of dependencies between overlapping window regions by window-averaging. Second, we observe that increasing the training window-size \(W^{\rm train}\) has diminishing returns on denoising performance. This is consistent with the intuition that nonlocal similarities of natural images are generally located close to the pixel of interest. Lastly, we observe that the OW-NLSS curves of Figure 7 are not monotonically increasing with smaller window-stride, and in fact have significant drops in the SSIM curves (Fig. 7 (b), blue triangle and pentagon curves). The source of this behavior is windowing artifacts, highlighted visually in Figure 8. These visualizations show that the denoising-speed trade-off exhibited by OW-NLSS processing comes with the penalty of unnatural artifacts in the form of grid-lines corresponding to the spatial pattern of window overlaps. In contrast, the proposed SW-NLSS does not exhibit windowing artifacts. Instead, as the window-size decreases, SW-NLSS processing transitions to fully convolutional CDLNet processing, and artifacts associated with FCNNs (such as hallucinated edges) are observed.
### _Ablation Studies_
In this section, we examine the denoising and inference time performance of the GroupCDL model under different hyperparameters associated with the proposed group-thresholding operation (11), (10). Table V shows the effect of the update-period parameter (\(\Delta K\)), which determines how often the adjacency \(\mathbf{\Gamma}^{(k)}\) is updated in the network (every \(\Delta K\) layers, see Alg. 1). We observe that decreasing the update period (i.e., updating the adjacency more frequently) increases denoising performance, with diminishing returns, at a cost to the inference speed of the network.
Table VI shows the effect of employing learned pixel-wise transforms in the similarity computation (10) (\(\mathbf{W}_{\theta}\), \(\mathbf{W}_{\phi}\)) and group-thresholding (11) (\(\mathbf{W}_{\alpha}\), \(\mathbf{W}_{\beta}\)). The table also shows the effect of employing channel reduction in these transforms
\begin{table}
\begin{tabular}{c|c c} \hline \(\Delta K\) & PSNR/\(100\times\)SSIM & time (s) \\ \hline
2 & 30.22/86.88 & 4.01 \\
3 & 30.22/86.87 & 4.72 \\
5 & 30.21/86.87 & 3.47 \\
10 & 30.21/86.87 & 3.24 \\
15 & 30.19/86.79 & 3.18 \\ \hline \end{tabular}
\end{table} TABLE V: Effect of the adjacency update period (\(\Delta K\)). Grayscale denoising performance averaged over the NikoSet10 dataset (\(\sigma^{\rm train}=\sigma^{\rm test}=25\)).
Fig. 8: Comparison of inference-speed/denoising trade-off between OW-NLSS (b-d) and SW-NLSS (f-h) processing by a color GroupCDL model (\(\sigma^{\rm train}=\sigma^{\rm test}=50\), \(W^{\rm train}=35\)). PSNR (dB) / \(100\times\)SSIM / GPU inference-time (s) shown in respective captions. Zoomed-in regions highlight that blocking artifacts exist across the shown OW-NLSS window-strides (\(s_{w}\)), whereas SW-NLSS processing exhibits no blocking artifacts across inference window-sizes (\(W\)). Yellow arrows (b-d) point to specific blocking artifact boundaries of OW-NLSS. Orange arrows (f) point to edge/texture hallucination artifacts of SW-NLSS with a small window-size. The inclusion of (d) (\(s_{w}=32\)) demonstrates that blocking artifacts are not merely a result of the effective spatial window-size (\(s_{c}W=70\)) being divisible by the overlapping window-stride. The same noise-realization is used across all methods (b-d, f-h).
(\(M_{h}\ll M=169\)) and a comparison of using the negative distance similarity (as presented in (10)) or, as more commonly used in black-box nonlocal and transformer DNNs, the dot-product similarity (15). From these experiments, we observe that the use of pixel-wise transforms increases denoising performance. Most importantly, we observe only a marginal decrease in performance when setting the latent similarity channel dimension less than the latent subband dimension, \(M_{h}<M\). This marginal decrease is met by a massive reduction in GPU inference time, which is predicted well by the channel reduction ratio \(M/M_{h}=169/64\approx 2.6\), closely matching the observed speed-up of \(8.89/3.47\approx 2.6\). This demonstrates one of the advantages of the proposed group-thresholding operation over black-box dot-product attention: the latent similarity channel dimension \(M_{h}\) is decoupled from the layer's output channel dimension and can be tuned to achieve a better trade-off between speed and performance.
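To make this decoupling concrete, below is a minimal PyTorch sketch of the similarity computation with learned \(1\times 1\) pixel-wise transforms and a reduced channel dimension \(M_{h}<M\). It assumes a negative squared-distance form for (10) and a plain dot-product for (15), and computes a dense \(N\times N\) similarity in place of GroupCDL's windowed sparse adjacency; all tensor names are illustrative.

```python
import torch
import torch.nn as nn

M, M_h = 169, 64               # latent subband dim / reduced similarity dim (Table VI)
z = torch.randn(1, M, 32, 32)  # latent image (B, M, H, W)

# 1x1 convolutions play the role of the pixel-wise transforms W_theta, W_phi
W_theta = nn.Conv2d(M, M_h, kernel_size=1, bias=False)
W_phi = nn.Conv2d(M, M_h, kernel_size=1, bias=False)

theta = W_theta(z).flatten(2).transpose(1, 2)  # (B, N, M_h), N = H*W
phi = W_phi(z).flatten(2).transpose(1, 2)      # (B, N, M_h)

# Negative squared-distance similarity (assumed form of eq. (10)), expanded as
# 2*t.p - ||t||^2 - ||p||^2 to avoid materializing an N x N x M_h tensor.
sim = (2 * theta @ phi.transpose(1, 2)
       - (theta ** 2).sum(-1, keepdim=True)
       - (phi ** 2).sum(-1).unsqueeze(1))      # (B, N, N)

# Dot-product similarity (eq. (15)-style), for comparison
sim_dot = theta @ phi.transpose(1, 2)
```

Because \(M_{h}\) appears only inside the similarity computation, it can be reduced to cut the cost of forming and applying \(\mathbf{\Gamma}\) without shrinking the latent representation itself.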
## V Discussion and Conclusion
GroupCDL adapts the classical, patch-processing based, group-sparsity prior to convolutional sparse coding (CSC) and applies it to the direct-parametrization unrolling framework of CDLNet [1]. In contrast to the group-sparsity prior of Lecouat et al.'s patch-based dictionary learning network (GroupSC) [2], which employs overlapping window processing, we formulate our group-sparsity prior on convolutional sparse codes, and in doing so naturally arrive at a sliding-window nonlocal self-similarity consistent with the CSC model. As discussed in Section III-B and empirically validated in Section IV-D, the proposed SW-NLSS enjoys improved denoising performance over OW-NLSS by properly accounting for correlations between neighboring image regions and centering similarity computations on each latent pixel of interest. The sparse array arithmetic employed at inference time enables orders-of-magnitude speed-up for the proposed design compared to state-of-the-art competitor networks (Section IV-B).
Notably, GroupCDL's performance comes without the use of common deep-learning operations, such as batch-normalization [34] or residual learning [31], and instead relies on the tools of classical signal processing and optimization such as basis-pursuit denoising and proximal operators.
The fast inference of GroupCDL is aided by a novel decoupling of the latent subband dimension (\(M\)) from the hidden adjacency/similarity dimension (\(M_{h}\)) (see Equation (11)). This allows the computational bottleneck of sparse-matrix dense-vector multiplication to be tuned without harming the capacity of the latent representation (Table VI) - something which is not achievable in the dot-product attention employed by black-box nonlocal and transformer networks (Section III-B).
Similar to CDLNet [1], GroupCDL is formulated as a direct parameterization of the proximal gradient method (4). We show that this derivation allows for near-perfect generalization outside of its training noise-level range (Fig. 5), simply by parametrizing its thresholds as an affine function of the input noise-level, as suggested by the classical BPDN formulation (2) and the universal thresholding theorem [1, 8]. In contrast, black-box networks [31] are shown to fail catastrophically above the training noise-level range and simply plateau in performance when the noise-level decreases (i.e., when the problem becomes easier). In GroupCDL, generalization is additionally observed in its adjacency matrix (Fig. 6).
In future work, we aim to adapt GroupCDL to other imaging modalities. We believe GroupCDL's speed, performance, interpretability, and robustness are well suited to tackle large signal reconstruction problems with nonlocal image-domain artifacts, such as compressed-sensing magnetic resonance imaging. The unsupervised learning and demosaicing work of CDLNet [1] may be adapted for GroupCDL to this end.
## Acknowledgments
The authors would like to thank NYU HPC for its computing resources and technical support. The authors are grateful to Che Maria Baez for her linguistic revisions on a preliminary draft of this manuscript.
|
2308.05724 | Optimizing Performance of Feedforward and Convolutional Neural Networks
through Dynamic Activation Functions | Deep learning training algorithms have been a huge success in recent years
in many fields including speech, text, image, and video. Deeper and deeper layers
are proposed with huge success, with ResNet structures having around 152 layers.
Shallow convolutional neural networks (CNNs) are still an active research area, where
some phenomena are still unexplained. Activation functions used in the network
are of utmost importance, as they provide non-linearity to the networks. Relus
are the most commonly used activation function. We show a complex piecewise
linear (PWL) activation in the hidden layer. We show that these PWL activations
work much better than relu activations in our networks for convolutional neural
networks and multilayer perceptrons. Result comparisons in PyTorch for shallow
and deep CNNs are given to further strengthen our case. | Chinmay Rane, Kanishka Tyagi, Michael Manry | 2023-08-10T17:39:51Z | http://arxiv.org/abs/2308.05724v2 | Optimizing Performance of Feedforward and Convolutional Neural Networks through Dynamic Activation Functions
###### Abstract
Deep learning training algorithms have been a huge success in recent years in many fields including speech, text, image, and video. Deeper and deeper layers are proposed with huge success, with ResNet structures having around 152 layers. Shallow convolutional neural networks (CNNs) are still an active research area, where some phenomena are still unexplained. Activation functions used in the network are of utmost importance, as they provide non-linearity to the networks. Relus are the most commonly used activation function. We show a complex piecewise linear (PWL) activation in the hidden layer. We show that these PWL activations work much better than relu activations in our networks for convolutional neural networks and multilayer perceptrons. Result comparisons in PyTorch for shallow and deep CNNs are given to further strengthen our case.
activation function, non-linearity, piecewise linear
## 1 Introduction
In recent years, deep learning architectures have found tremendous success in speech recognition, image recognition, and language recognition and translation. For speech and language recognition and translation, deep recurrent neural networks (RNNs) [1][2][3] have shown improvements over older technologies. These networks can process not only images but also sequences such as videos, text, and speech. The RNN structure consists of cells and gates: the cells store important information over time and the gates decide the passage of information in and out of the cells. Vanilla RNNs are faced with vanishing gradient problems and cannot process words over a long period of time. To address this situation, networks such as long short-term memory (LSTM) [4], gated recurrent unit (GRU) [5], and transformers [6] have been introduced. These networks are designed to handle long sequences over time and are also used for image processing. Convolutional neural networks (CNNs) ([7], [8], [9]) are more widely used in image-based applications. Combination networks such as CNN-LSTM and CNN-transformer are also used for visual recognition [10][11] and time series analysis [12].
CNNs are used in image-based applications as feature extractors, so one does not need to explicitly extract features for classifying images. Applications of CNNs include diabetic retinopathy screening [13], lesion detection [14][15], skin lesion classification [16], human action recognition [17][18], face recognition [19][20], document analysis [21][22], and many others. CNNs can be trained using gradient approaches such as back propagation ([23], [24], [25], [26]) and conjugate gradient ([27],[28],[29], [30]).
Despite their popularity, CNNs still have some limitations, such as their poorly understood shift-invariance, overfitting of the data, and the use of oversimplified nonlinear activation functions such as relu [31] and leaky relu [32][33].
Nonlinear activation functions such as relu [31] and leaky relu [32] have been widely used in a number of computer vision [34] and deep neural network [35] applications. These activation functions are not as complex as sigmoids [36] or hyperbolic tangent functions (tanh) [37], but are favored because they partially solve the vanishing gradient problem [38]. However, relu activations do not guarantee optimal results, since a different activation may be optimal for each filter. For example, a CNN for an image classification application with 20 filters might need 20 different activations, and the number of filters required for a particular application is not known.
Although these activations lead to universal approximation [39] in multilayer perceptrons, many attempts have been made to create adaptive or fixed piecewise linear activation functions [[40], [41], [42],[43],[44]]. Adaptive activation functions for deep CNNs are introduced in [45], where the author trains the slopes and hinges of the curve using gradient descent techniques. The author has shown promising results in terms of testing accuracies on the CIFAR-10 and CIFAR-100 image recognition datasets [46] and on high-energy physics data involving Higgs boson decay modes [47].
In this paper, we first briefly review the multilayer perceptron's (MLP's) architecture, notation, training, and properties. In Section 2, we also review the CNN architecture and notation as well as its back propagation algorithm. In Sections 3 and 4, we review training algorithms and investigate a trainable piecewise linear (PWL) activation, since it can approximate the optimal mix of activations through universal approximation. In Section 5, we compare our results with relu activations and conclude the paper.
## 2 Prior Work
In this section, we review our notation for a single hidden layer cascade connected MLP, briefly summarize several commonly used feedforward classifier training methods and describe some of the MLP's properties.
### MLP Structure and notation
A cascade-connected MLP with one hidden layer is shown in figure 1. Input weight \(w(k,n)\) connects the \(n^{th}\) input to the \(k^{th}\) hidden unit. Output weight \(w_{oh}(i,k)\) connects the \(k^{th}\) hidden unit's activation \(o_{p}(k)\) to the \(i^{th}\) output. \(y_{p}(i)\) is the \(p^{th}\) pattern's \(i^{th}\) output activation, which is a linear activation in figure 1. In the training pattern \(\{\mathbf{x}_{p}\), \(\mathbf{t}_{p}\}\) for an MLP,
Figure 1: Single Hidden Layer MLP
the \(p^{th}\) input vector \(\mathbf{x}_{p}\) is initially of dimension N and the \(p^{th}\) desired output (target) vector \(\mathbf{t}_{p}\) has dimension M. The pattern number p varies from 1 to \(N_{v}\). The threshold is handled by augmenting \(\mathbf{x}_{p}\) with an extra element \(x_{p}(N+1)\) which is equal to one, where \(\mathbf{x}_{p}=[x_{p}(1),x_{p}(2),....,x_{p}(N+1)]^{T}\).
For the \(p^{th}\) pattern, the \(k^{th}\) hidden unit's net function \(n_{p}(k)\) is then
\[n_{p}(k)=\sum_{n=1}^{N+1}w(k,n)\cdot x_{p}(n) \tag{1}\]
which can be summarized as
\[\mathbf{n}_{p}=\mathbf{W}\cdot\mathbf{x}_{p} \tag{2}\]
where \(\mathbf{n}_{p}\) denotes the \(N_{h}\) dimensional column vector of net function values and the input weight matrix \(\mathbf{W}\) is \(N_{h}\) by (N+1). For the \(p^{th}\) pattern, the \(k^{th}\) hidden unit's output, \(o_{p}(k)\), is given as
\[o_{p}(k)=f(n_{p}(k)) \tag{3}\]
where \(f(.)\) denotes a nonlinear hidden layer activation function, such as relu[31] which is represented as
\[f(n_{p}(k))=\begin{cases}n_{p}(k),&if\quad n_{p}(k)\geq 0\\ 0,&if\quad n_{p}(k)<0\end{cases} \tag{4}\]
The threshold in the hidden layer is handled by augmenting \(\mathbf{o}_{p}\) with an extra element \(o_{p}(N_{h}+1)\) which is equal to one where \(\mathbf{o}_{p}=[o_{p}(1),o_{p}(2),....,o_{p}(N_{h}+1)]^{T}\). The network's output vector for the \(p^{th}\) pattern is \(\mathbf{n}_{po}\). The \(i^{th}\) element \(n_{po}(i)\) of the M-dimensional output vector \(\mathbf{n}_{po}\) is
\[n_{po}(i)=\sum_{k=1}^{N_{h}+1}w_{o}(i,k)\cdot o_{p}(k) \tag{5}\]
which can be summarized as
\[\mathbf{n}_{po}=\mathbf{W}_{o}\cdot\mathbf{o}_{p} \tag{6}\]
\(\mathbf{W}_{o}\) is the output weight matrix with dimensions \(M\) by (\(N_{h}+1\)).
The output layer net vector \(\mathbf{n}_{po}\) is passed through an output activation function, which gives the \(i^{th}\) output for the \(p^{th}\) pattern, \(y_{p}(i)\), as
\[y_{p}(i)=f_{o}(\mathbf{n}_{po}(i)) \tag{7}\]
where \(f_{o}(.)\) denotes an output layer activation function. The most commonly used output activation for approximation data is the linear activation; the sigmoid activation is mostly used in logistic regression; and the softmax activation [48], defined in equation (8), is used for classification models.
The softmax output activation function is
\[y_{p}(i)=\frac{exp(n_{po}(i))}{\sum_{k=1}^{M}exp(n_{po}(k))} \tag{8}\]
The most commonly used objective function for a classification task is cross entropy loss function[48]
\[E_{ce}=\frac{1}{N_{v}}\sum_{p=1}^{N_{v}}[-\sum_{i=1}^{M}t_{p}(i)\cdot log(y_{p} (i))] \tag{9}\]
where \(t_{p}(i)\) is the \(p^{th}\) pattern's one-hot encoded target for the \(i^{th}\) class. \(t_{p}(i)\) is found from \(ic_{p}(i)\), where \(ic_{p}(i)\) is the class number. For approximation or regression, the mean square error (MSE) [49] defined in equation (10) is the most widely used objective function. The MSE can also be used in the classification task with output reset [50][51].
\[E=\frac{1}{N_{v}}\sum_{p=1}^{N_{v}}\sum_{i=1}^{M}[t_{p}(i)-y_{p}(i)]^{2} \tag{10}\]
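For concreteness, the following is a minimal NumPy sketch of the forward pass and loss computations of equations (1)-(10); array shapes follow the notation above, and the implementation details are illustrative rather than the authors' exact code.

```python
import numpy as np

def mlp_forward(X, W, Wo):
    """Forward pass of the single-hidden-layer MLP, eqs. (1)-(8).
    X : (Nv, N+1) inputs with a final column of ones (threshold);
    W : (Nh, N+1) input weights; Wo : (M, Nh+1) output weights."""
    n = X @ W.T                                   # net functions, eq. (2)
    o = np.maximum(n, 0.0)                        # relu activations, eq. (4)
    o = np.hstack([o, np.ones((o.shape[0], 1))])  # hidden-layer threshold unit
    n_o = o @ Wo.T                                # output nets, eq. (6)
    e = np.exp(n_o - n_o.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)       # softmax outputs, eq. (8)

def cross_entropy(T, Y):
    """Eq. (9): T one-hot targets (Nv, M), Y softmax outputs (Nv, M)."""
    return -np.mean(np.sum(T * np.log(Y + 1e-12), axis=1))

def mse(T, Y):
    """Eq. (10): mean square error over patterns and outputs."""
    return np.mean(np.sum((T - Y) ** 2, axis=1))
```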
### CNN structure and notation
In this section, we first review notation and training of a convolutional neural network with a single convolution layer. Then we extend the notation to cover CNNs with multiple hidden layers.
The CNN network structure is shown in figure 2. Let \(\mathbf{f}_{p}\) denote the \(p^{th}\) input image and let \(i_{c}(p)\) denote the correct class number of the \(p^{th}\) pattern, where \(p\) varies from \(1\) to \(N_{v}\), and \(N_{v}\) is the total number of training images or patterns.
During forward propagation, a filter of size \(N_{f}\) x \(N_{f}\) is convolved over the image \(\mathbf{f}_{p}\) with \(N_{r}\) rows and \(N_{c}\) columns. The number of channels is denoted by \(C\), where color input images have \(C\) equal to \(3\) and grayscale images have \(C\) equal to \(1\).
For the \(k^{th}\) filter, the net function output for the \(i^{th}\) row and \(j^{th}\) column is
\[n_{p}(k,i,j)=t_{r}(k)+\sum_{m=1}^{N_{f}}\sum_{n=1}^{N_{f}}\sum_{c=1}^{C}w_{f}( k,m,n,c)\cdot f_{p}(m+(i-1)s,n+(j-1)s,c) \tag{11}\]
where \(\mathbf{n}_{p}\) is of size (\(K\) by \(M_{o}\) by \(N_{o}\)), where \(K\) is the number of filters, \(M_{o}\) is the height of the convolved image output and \(N_{o}\) is the width of the convolved image output. \(w_{f}(k,m,n,c)\) is the filter of size (\(K\) by \(N_{f}\) by \(N_{f}\) by \(C\)). The threshold vector \(\mathbf{t}_{r}\) is added to the net function output as shown in equation (11). The stride \(s\) is the number of filter shifts over the input image. Note that the output \(n_{p}(k,i,j)\) in (11) is a threshold plus a sum of \(C\) separate 2-D convolutions, rather than a 3-D convolution.
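A minimal NumPy sketch of equation (11) makes this structure explicit; the explicit loops mirror the summations (an actual implementation would vectorize them).

```python
import numpy as np

def conv_net_function(f, w_f, t_r, s=1):
    """Eq. (11): net output of K filters as a threshold plus a sum of
    C separate 2-D (cross-)correlations, not a single 3-D convolution.
    f : (Nr, Nc, C) input image; w_f : (K, Nf, Nf, C) filters;
    t_r : (K,) thresholds; s : stride."""
    K, Nf, _, C = w_f.shape
    Mo = (f.shape[0] - Nf) // s + 1
    No = (f.shape[1] - Nf) // s + 1
    n = np.empty((K, Mo, No))
    for k in range(K):
        for i in range(Mo):
            for j in range(No):
                patch = f[i*s : i*s+Nf, j*s : j*s+Nf, :]   # (Nf, Nf, C)
                n[k, i, j] = t_r[k] + np.sum(w_f[k] * patch)
    return n
```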
To achieve non-linearity, the convolved image with element \(n_{p}(k,i,j)\) is passed through a relu activation [31] as
\[o_{p}(k,i,j)=f^{\prime}(n_{p}(k,i,j)) \tag{12}\]
where \(o_{p}(k,i,j)\) is the \(k^{th}\) filter's hidden unit activation output for the \(i^{th}\) row and \(j^{th}\) column of the \(p^{th}\) pattern, of size (\(K\) by \(M_{o}\) by \(N_{o}\)), where \(M_{o}\) and \(N_{o}\) are the row and column sizes of the convolved image output, respectively.
The net function \(\mathbf{n}_{po}\) for \(i^{th}\) element of the CNN's output layer for the \(p^{th}\) pattern is
\[n_{po}(i)=t_{o}(i)+\sum_{m=1}^{M_{o}}\sum_{n=1}^{N_{o}}\sum_{k=1}^{K}w_{o}(i,m,n,k)\cdot o_{p}(k,m,n) \tag{13}\]
where \(\mathbf{W_{o}}\) is the 4-dimensional matrix of size (\(M\) by \(M_{o}\) by \(N_{o}\) by \(K\)), which connects hidden unit activation outputs (features) to the output layer net vector \(\mathbf{n}_{po}\), \(\mathbf{o}_{p}\) is the 3-dimensional hidden unit activation output matrix of size (\(K\) by \(M_{o}\) by \(N_{o}\)), and \(\mathbf{t_{o}}\) is the vector of biases added to the net output function as in equation (13).
Before calculating the final error, the vector \(\mathbf{n}_{po}\) is passed through an activation function such as the softmax in equation (8). Finally, the cross entropy loss function is calculated using equation (9). The objective function is minimized with respect to the
Figure 2: Shallow CNN with Linear softmax cross-entropy classifier
unknown weights. In this section, we discuss CNN training of classification models. We minimize the loss function \(E_{ce}\) using steepest descent [52][53][54], [55][56]. The most common optimizer used for CNN weight training is the Adam optimizer [57]. It is computationally efficient and easier to implement than an optimal learning factor [58]. It uses momentum and adaptive learning rates to converge faster, which is said to be inherited from RMSProp [59] and AdaGrad [60]. The default parameters are given in [57].
## 3 Training algorithm
### Scaled conjugate gradient algorithm
Conjugate gradient (CG) [61] line-searches in successive conjugate directions and has faster convergence than steepest descent. To train an MLP using the CG algorithm (CG-MLP), we update all the network weights \(\mathbf{w}\) simultaneously as follows:
\[\mathbf{w}\leftarrow\mathbf{w}+z\cdot\mathbf{p} \tag{14}\]
where \(z\) is the learning rate that can be derived as [62],[61].
\[z=-\frac{\frac{\partial E(\mathbf{w}+z\cdot\mathbf{p})}{\partial z}}{\frac{ \partial^{2}E(\mathbf{w}+z\cdot\mathbf{p})}{\partial z^{2}}}|_{z=0} \tag{15}\]
The direction vector \(\mathbf{p}\) is obtained from the gradient \(\mathbf{g}\) as
\[\mathbf{p}\leftarrow-\mathbf{g}+B_{1}\cdot\mathbf{p} \tag{16}\]
where \(\mathbf{p}\) = \(vec\)\((\mathbf{P},\mathbf{P}_{\text{oh}},\mathbf{P}_{\text{oi}})\) and \(\mathbf{P}\), \(\mathbf{P}_{\text{oh}}\) and \(\mathbf{P}_{\text{oi}}\) are the direction vectors corresponding to the weight arrays \((\mathbf{W},\mathbf{W}_{\text{oh}},\mathbf{W}_{\text{oi}})\). CG uses backpropagation to calculate \(\mathbf{g}\). \(B_{1}\) is the ratio of the gradient energy from two consecutive iterations. If the error function were quadratic, CG would converge in \(N_{w}\) iterations [63], where the number of network weights is \(N_{w}\) = \(dim(\mathbf{w})\). CG is scalable and widely used in training on large datasets, as the network Hessian is not calculated [64]. Therefore, in CG, the step size is determined using a line search along the conjugate gradient direction.
SCG [65] scales the conjugate gradient direction by a scaling factor determined using a quasi-Newton approximation of the Hessian matrix. This scaling factor helps to accelerate the algorithm's convergence, especially for problems where the condition number of the Hessian matrix is large. SCG requires the computation of the Hessian matrix (or an approximation) and its inverse. A critical difference between CG and SCG is how the step size is determined during each iteration, with SCG using a scaling factor that helps to accelerate convergence. Other variations of CG exist [66]. However, in this study, we choose to use SCG.
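A minimal sketch of the CG iteration of equations (14)-(16) on a quadratic error is given below; the matrix \(\mathbf{A}\) and vector \(\mathbf{b}\) are illustrative stand-ins for the quadratic's Hessian and linear term, and the Fletcher-Reeves form is assumed for \(B_{1}\).

```python
import numpy as np

# Minimal CG demo on a quadratic error E(w) = 0.5 w^T A w - b^T w,
# illustrating eqs. (14)-(16).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
w = np.zeros(2)
p, g_prev = None, None
for _ in range(2):                      # CG converges in N_w steps on a quadratic
    g = A @ w - b                       # gradient of E
    if p is None:
        p = -g                          # first iteration: steepest descent
    else:
        p = -g + (g @ g) / (g_prev @ g_prev) * p   # eq. (16), Fletcher-Reeves B1
    z = -(g @ p) / (p @ A @ p)          # eq. (15): exact line search for quadratics
    w = w + z * p                       # eq. (14)
    g_prev = g
print(w, np.linalg.solve(A, b))         # CG solution matches the direct solve
```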
### Levenberg-Marquardt algorithm
The Levenberg-Marquardt (LM) algorithm [61] is a hybrid first- and second-order training method that combines the fast convergence of the steepest descent method with the precise optimization of the Newton method [67]. However, inverting the Hessian matrix \(\mathbf{H}\) can be challenging due to its potential singularity or ill-conditioning [68]. To address this issue, the LM algorithm introduces a damping parameter \(\lambda\) to the diagonal of the Hessian matrix as
\[\mathbf{H}_{LM}=\mathbf{H}+\lambda\cdot\mathbf{I} \tag{17}\]
where \(\mathbf{I}\) is an identity matrix with dimensions equal to those of \(\mathbf{H}\). The resulting matrix \(\mathbf{H}_{LM}\) is then nonsingular, and the direction vector \(\mathbf{d}_{LM}\) can be calculated by solving:
\[\mathbf{H}_{LM}\mathbf{d}_{LM}=\mathbf{g} \tag{18}\]
The constant \(\lambda\) represents a trade-off value between first and second order for the LM algorithm. When \(\lambda\) is close to zero, LM approximates Newton's method and has minimal impact on the Hessian matrix. When \(\lambda\) is large, LM approaches the steepest descent and the Hessian matrix approximates an identity matrix. However, the disadvantage of the LM algorithm is that it scales poorly and is only suitable for small data sets [61].
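A minimal sketch of the LM direction computation of equations (17)-(18) is shown below; adapting \(\lambda\) between iterations is left out for brevity.

```python
import numpy as np

def lm_direction(H, g, lam):
    """Levenberg-Marquardt direction, eqs. (17)-(18): solve (H + lam*I) d = g.
    Small lam -> Newton-like step; large lam -> steepest-descent-like step."""
    H_lm = H + lam * np.eye(H.shape[0])   # damped Hessian, eq. (17)
    return np.linalg.solve(H_lm, g)       # direction d_LM, eq. (18)
```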
### Basic MOLF
In basic MOLF MLP training [69], the input weight matrix \(\mathbf{W}\) is initialized randomly using zero-mean Gaussian random numbers. To initialize the output weight matrix \(\mathbf{W}_{\text{o}}\), we use output weight optimization (OWO) [61]. OWO minimizes the error function from equation (10) with respect to \(\mathbf{W}_{\text{o}}\) by solving the \(M\) sets of \(N_{u}\) equations in \(N_{u}\) unknowns given by
\[\mathbf{C}=\mathbf{R}\cdot\mathbf{W}_{\text{o}}^{\mathbf{T}} \tag{19}\]
where the cross-correlation matrix \(\mathbf{C}\) and the auto-correlation matrix \(\mathbf{R}\) are respectively
\[\mathbf{C}=\frac{1}{N_{v}}\sum_{p=1}^{N_{v}}\mathbf{X}_{\text{ap}}\cdot \mathbf{t}_{\text{p}}^{T} \tag{20}\]
\[\mathbf{R}=\frac{1}{N_{v}}\sum_{p=1}^{N_{v}}\mathbf{X}_{\text{ap}}\cdot \mathbf{X}_{\text{ap}}^{T} \tag{21}\]
In terms of optimization theory, solving equation (19) is merely Newton's algorithm for the output weights [61]. After initialization of \(\mathbf{W}\), \(\mathbf{W}_{\text{oi}}\), \(\mathbf{W}_{\text{oh}}\), we begin a two step procedure in which we modify \(\mathbf{W}\) and perform OWO to modify \(\mathbf{W}_{\text{o}}\). In the \(\mathbf{W}\) modification step we first find the input weight negative gradient matrix \(\mathbf{G}\) and solve
\[\mathbf{D}\cdot\mathbf{R}_{\text{i}}=\mathbf{G} \tag{22}\]
for \(\mathbf{D}\), where
\[\mathbf{R}_{\text{i}}=\frac{1}{N_{v}}\sum_{p=1}^{N_{v}}\mathbf{x}_{\text{ap}} \cdot\mathbf{x}_{\text{ap}}^{T} \tag{23}\]
It has been shown [61] that the improved input weight change matrix \(\mathbf{D}\) is the negative gradient matrix that results when the inputs are whitened. In MOLF, the idea is to use an \(N_{h}\)-dimensional learning factor vector \(\mathbf{z}\) and to write the updated outputs as
\[\begin{split} y_{p}(i)=\sum_{n=1}^{N+1}w_{oi}(i,n)x_{p}(n)+\sum_{k=1}^{N_{h}}w_{oh}(i,k)f(n_{p}(k))\\ n_{p}(k)=\sum_{n=1}^{N+1}(w(k,n)+z_{k}\,d(k,n))\,x_{p}(n)\end{split} \tag{24}\]
where, \(d(k,n)\) is an element of the matrix \(\mathbf{D}\). We use Newton's method to obtain \(\mathbf{z}\) by solving
\[\mathbf{H}_{\text{molf}}\cdot\mathbf{z}=\mathbf{g}_{\text{molf}} \tag{25}\]
where \(\mathbf{H}_{\text{molf}}\) and \(\mathbf{g}_{\text{molf}}\) are the Hessian and negative gradient, respectively, of the error with respect to \(\mathbf{z}\). Our implementation of Newton's method solves equation (25) for \(\mathbf{z}\) using orthogonal least squares (OLS) [61]. After finding \(\mathbf{z}\), we update the input weight matrix \(\mathbf{W}\) as
\[\mathbf{W}\leftarrow\mathbf{W}+diag(\mathbf{z})\cdot\mathbf{D} \tag{26}\]
The MOLF training algorithm (MA algorithm) [55] is given in algorithm 1.
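A minimal NumPy sketch of the OWO step of equations (19)-(21) is given below, with the linear system solved by least squares for robustness to an ill-conditioned \(\mathbf{R}\); the augmented basis \(\mathbf{X}_{a}\) is whatever vector the output weights act on (inputs plus hidden activations plus threshold).

```python
import numpy as np

def owo(Xa, T):
    """Output weight optimization, eqs. (19)-(21): solve R * Wo^T = C
    in the least-squares sense for the output weights.
    Xa : (Nv, Nu) augmented basis vectors; T : (Nv, M) targets."""
    Nv = Xa.shape[0]
    R = (Xa.T @ Xa) / Nv                          # auto-correlation, eq. (21)
    C = (Xa.T @ T) / Nv                           # cross-correlation, eq. (20)
    Wo_T, *_ = np.linalg.lstsq(R, C, rcond=None)  # robust solve of eq. (19)
    return Wo_T.T                                 # Wo : (M, Nu)
```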
### Modifying targets with output reset
The MA algorithm of subsection 3.1 is a first attempt at attacking problems (P4) and (P5) in approximation networks. In this subsection, we adapt MA to classification networks. Problem (P1) is attacked by developing new target outputs \(t_{p}^{{}^{\prime}}(i)\), while keeping the constraint that the target margin satisfies \(t_{p}^{{}^{\prime}}(i_{c})-t_{p}^{{}^{\prime}}(i_{d})\geq 1\). There are two kinds of inconsistent errors, which simultaneously increase E and decrease \(P_{e}\) or leave \(P_{e}\) the same. First, each pattern's output \(\mathbf{y}_{p}\) can have a bias, so that the average of \(t_{p}(i)\) over \(i\) differs from the average of \(y_{p}(i)\). Second, as stated in problem (P1), we can have \(y_{p}(i_{c})>t_{p}(i_{c})\) or \(y_{p}(i_{d})<t_{p}(i_{d})\). In order to remove these inconsistent errors we design a new error function \(E^{{}^{\prime}}\) [69] in which targets, but not labels, are changed [70].
\[E^{{}^{\prime}}=\frac{1}{N_{v}}\sum_{p=1}^{Nv}\sum_{i=1}^{M}[t_{p}^{{}^{\prime}}(i )-y_{p}(i)]^{2} \tag{27}\]
where \(t_{p}^{{}^{\prime}}(i)\) is modeled as
\[t_{p}^{{}^{\prime}}(i)=t_{p}(i)+a_{p}+d_{p}(i) \tag{28}\]
and where \(a_{p}\) and \(d_{p}(i)\) are initially equal to zero. Since \(a_{p}\) is the same for each class, it has no effect on \(P_{e}\). Following [69], we calculate the closed form expression for \(a_{p}\) by setting \(\frac{\partial E^{{}^{\prime}}}{\partial a_{p}}\) = 0, therefore obtaining
\[a_{p}=\frac{1}{M}\sum_{i=1}^{M}[y_{p}(i)-t_{p}(i)-d_{p}(i)] \tag{29}\]
Similarly, \(d_{p}(i)\) is defined as
\[d_{p}(i)=y_{p}(i)-t_{p}(i)-a_{p} \tag{30}\]
such that \(d_{p}(i_{c})\geq\) 0 and \(d_{p}(i_{d})\leq\) 0. It is true that \(a_{p}\) and \(d_{p}(i)\) can be included in \(t_{p}^{{}^{\prime}}(i)\) or \(y_{p}(i)\) during training. However, these parameters are not available during testing because they make use of the correct class \(i_{c}(p)\), which is unknown. Therefore we include them in \(t_{p}^{{}^{\prime}}(i)\).
To avoid inconsistent errors we need \(t_{p}^{{}^{\prime}}(i_{c})\geq y_{p}(i_{c})\) and \(t_{p}^{{}^{\prime}}(i_{d})\leq y_{p}(i_{d})\) so
\[d_{p}(i_{c})=[y_{p}(i_{c})-a_{p}-t_{p}(i_{c})]u(y_{p}(i_{c})-a_{p}-t_{p}(i_{c})) \tag{31}\]
and
\[d_{p}(i_{d})=[y_{p}(i_{d})-a_{p}-t_{p}(i_{d})]u(t_{p}(i_{d})-a_{p}-y_{p}(i_{d})) \tag{32}\]
where u(\(\cdot\)) denotes the unit step function. Note here that for a classifier, the training halts when \(E^{{}^{\prime}}\) becomes zero, even if \(E\) is nonzero. The effect of (P2) is reduced because
\[\lim_{y_{p}(i_{c})\rightarrow\infty}(t_{p}^{{}^{\prime}}(i_{c})-y_{p}(i_{c}))=0 \tag{33}\]
and similarly
\[\lim_{y_{p}(i_{d})\rightarrow-\infty}(t_{p}^{{}^{\prime}}(i_{d})-y_{p}(i_{d}))=0 \tag{34}\]
We denote the process of obtaining \(t_{p}^{{}^{\prime}}\) as the output reset (OR) algorithm (a minimal sketch of its update is given after Algorithm 3 below).
Note that in the OR algorithm, letting the maximum number of iterations equal 3 allows considerable improvement in performance without a significant change in training time [69]. In [69], the heuristic iterative OR algorithm is replaced by an efficient closed-form expression for the target outputs. In other words, the OR algorithm is a multi-class version of Ho-Kashyap [69]. By adding OR to the MA algorithm, we generate MA-OR. The MA-OR algorithm can be summarized as follows:
```
1: Read the training data.
2: Randomly split off 30% of the training data to be validation data.
3: Initialize \(N_{it}\), \(i_{t}\leftarrow 0\), \(\mathbf{C}\leftarrow 0\), \(N_{h}\) and the network weights.
4: Calculate \(\mathbf{R}\) and \(\mathbf{C}\) using equations (21) and (20).
5: Solve equation (19) for the initial output weight matrix \(\mathbf{W}_{\mathrm{o}}\).
6: while \(i_{t}<N_{it}\) do
7:   BP step: Compute \(\mathbf{G}\) using \(t_{p}^{{}^{\prime}}(i)\) from OR instead of \(t_{p}(i)\), and solve for \(\mathbf{D}\) using equation (22)
8:   MOLF step: Solve equation (25) for \(\mathbf{z}\) using OLS
9:   Update \(\mathbf{W}\) as \(\mathbf{W}\leftarrow\mathbf{W}+\mathrm{diag}(\mathbf{z})\cdot\mathbf{D}\)
10:  OWO step: Accumulate \(\mathbf{C}\) using \(t_{p}^{{}^{\prime}}(i)\) in place of \(t_{p}(i)\). Solve equation (19) to obtain \(\mathbf{W}_{\mathrm{o}}\)
11:  Save \(\mathbf{W}\) and \(\mathbf{W}_{\mathrm{o}}\) if the validation error has decreased.
12:  \(i_{t}\leftarrow i_{t}+1\)
13: end while
```
**Algorithm 3** MA-OR algorithm
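As promised above, here is a minimal NumPy sketch of the iterative OR update of equations (28)-(32) for a single pattern; the unit-step conditions of (31)-(32) are implemented as one-sided clipping, and three iterations are used as suggested above.

```python
import numpy as np

def output_reset(t, y, ic, n_iter=3):
    """Output reset, eqs. (28)-(32): returns adjusted targets t' = t + a_p + d_p
    for one pattern. t, y : (M,) targets/outputs; ic : correct class index."""
    a, d = 0.0, np.zeros_like(t)
    for _ in range(n_iter):
        a = np.mean(y - t - d)                  # pattern bias, eq. (29)
        r = y - a - t                           # per-class residual, eq. (30)
        d = np.zeros_like(t)
        d[ic] = max(r[ic], 0.0)                 # eq. (31): only raise t'(ic)
        mask = np.arange(t.size) != ic
        d[mask] = np.minimum(r[mask], 0.0)      # eq. (32): only lower t'(id)
    return t + a + d                            # adjusted targets, eq. (28)
```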
### Softmax classifier
The softmax classifier [61] is a generalized logistic regression classifier that outputs approximate class probabilities. Structurally, it's a linear model with softmax functions [71] at the output units. For the \(p^{th}\) pattern, it maps the input vector \(\mathbf{x}_{\mathrm{p}}\) to the output class labels as
\[\mathbf{y}_{\mathrm{p}}=\mathbf{W}_{\mathrm{s}}\cdot\mathbf{x}_{\mathrm{p}} \tag{35}\]
where \(\mathbf{W}_{\mathrm{s}}\) is a weight matrix. The performance measure is a cross-entropy loss function [61] defined as
\[E_{softmax}=-\frac{1}{N_{v}}\sum_{p=1}^{N_{v}}\sum_{i=1}^{M}log(\frac{e^{y_{p} (i)}}{\sum_{j=1}^{M}e^{y_{p}(j)}}) \tag{36}\]
The softmax classifier is often trained using the L-BFGS training algorithm[61].
### Fixed Activation function
### Piecewise Linear Unit (PLU) Activation
PWL functions can be composed of relu activations [35]; activations such as the sigmoid [36] and tanh [37] can be approximated using relu units. Several investigators have tried adaptive or fixed PWL activation functions in MLPs [41] and deep learning [45] and have published promising results. One example is the hybrid piecewise linear unit (PLU), an activation function that is a combination of the tanh and relu activations [40].
From figure 3, we see that PLU is a combination of relu and tanh activations. The equation for calculating fixed PWL activations is given as
\[o_{p}(k)=max\left[\alpha(n_{p}(k)+c)-c,\;min(\alpha(n_{p}(k)-c)+c,\;n_{p}(k))\right] \tag{37}\]
where \(\alpha\) and \(c\) are user-chosen parameters. The author has also proposed that \(\alpha\) can be a trainable parameter. The paper [40] demonstrates the performance of the fixed PWL activation in an MLP for parametric functions, 3D surface approximation, and invertible network datasets. The author shows that fixed PLUs work better than relu functions, as PLUs are represented using more hinges than relu functions. The author has also shown promising results using a CNN on the CIFAR-10 [46] dataset. The fixed PWL activation function has only 3 linear segments (hinges \(H\) = 3) and is not adaptive unless the \(\alpha\) parameter is trained in every iteration. Since \(H\) is fixed and there is minimal or no training, there is no universal approximation.
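A minimal sketch of the fixed PLU of equation (37), with the user-chosen \(\alpha\) and \(c\):

```python
import numpy as np

def plu(n, alpha=0.1, c=1.0):
    """Fixed piecewise linear unit, eq. (37): the identity on [-c, c],
    with slope alpha on the two outer segments (three segments total)."""
    return np.maximum(alpha * (n + c) - c, np.minimum(alpha * (n - c) + c, n))
```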
### Piecewise Linear Activation
An alternative piecewise linear activation has been demonstrated [45], which is specifically designed for deep networks with trainable PWL activations. This method therefore outperforms the fixed PWL activation of section 3.7. The adaptive PWL activations here can equal those of section 3.7 and can also generate more complicated curves. The author implemented an adaptive piecewise linear activation unit where the number of hinges \(H\) is a user-chosen hyperparameter. The author shows the best results for CIFAR-10 data using \(H=5\) and \(H=2\), and for CIFAR-100 data using \(H=2\) and \(H=1\) (no activation hinge training). The initialization of these adaptive activations is not properly specified.
The equation for calculating the adaptive activation is given as
\[o_{p}(k)=max(0,n_{p}(k))+\sum_{s=1}^{H}a_{k}^{s}\cdot max(0,-n_{p}(k)+b_{k}^{s}) \tag{38}\]
where \(a_{k}^{s}\) and \(b_{k}^{s}\) for \(s=1,\dots,H\) are learned using gradient descent. The \(a_{k}^{s}\) variables control the slopes of the linear segments and the \(b_{k}^{s}\) determine the locations of the sample points.
Figure 3: Fixed PWL activations
Figure 4 shows an adaptive PWL with slope \(a\) = 0.2 and \(b\) = 0. Similarly, figure 5 shows an adaptive PWL with slope \(a\) = -0.2 and \(b\) = -0.5.
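A minimal PyTorch sketch of the adaptive activation of equation (38) is given below; the per-unit slopes \(a\) and hinge locations \(b\) are ordinary trainable tensors updated by gradient descent, and the zero initialization shown is illustrative.

```python
import torch

def adaptive_pwl(n, a, b):
    """Adaptive piecewise linear activation of eq. (38).
    n : (..., K) net values; a, b : (K, H) learnable slopes and hinge
    locations, trained alongside the other network weights."""
    out = torch.relu(n)                          # max(0, n) term
    for s in range(a.shape[1]):                  # sum over the H hinges
        out = out + a[:, s] * torch.relu(-n + b[:, s])
    return out

# Example: K = 4 hidden units, H = 2 hinges each
a = torch.zeros(4, 2, requires_grad=True)
b = torch.zeros(4, 2, requires_grad=True)
y = adaptive_pwl(torch.randn(8, 4), a, b)
```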
## 4 Proposed work
### Mathematical Background
Section 3.8 describes an adaptive PWL activation which trains the locations and slopes of the hinges. The author claims that a small number of hinges achieved better results, and we saw in section 3.7 that a network with two hinges outperforms relu for a particular application. To approximate a linear output, only one hinge is needed on the PWL curve; to approximate a quadratic output, the number of hinges on the PWL curve should be larger than three. Therefore, the number of hinges should not be too small for more complicated datasets. Sufficient hinges can also allow fewer hidden layers and filters, as the network does not need to train for as long.
We further investigate the use of PWL activations in CNNs [72], and we first show that PWL activations can approximate any other existing activation function. Consider a CNN filter's net function \(n_{1}\) defined as
\[n_{1}=t+\sum_{m=1}^{N}w_{i}(m)\cdot x(m) \tag{39}\]
where t is the threshold, \(w_{i}(m)\) is the \(m^{th}\) filter weight, and \(x(m)\) is the \(m^{th}\) input to the net function. The filter can be represented as \(\{\mathbf{w_{i}},t\}\). The continuous PWL activation \(f(n_{1})\) is
\[f(n_{1})=\sum_{k=1}^{N_{s}}a_{k}\cdot r(n_{1}-ns_{k}) \tag{40}\]
where \(N_{s}\) denotes the number of segments in the PWL curve, \(r()\) denotes a ramp (relu) activation, and \(ns_{k}\) is the net function value at which the \(k^{th}\) ramp switches on.
Figure 6: 2 relu curves
Figure 7: 4 relu curves
Figures 6 and 7 show approximate sigmoid curves generated using relu activations where figure 6 has \(N_{s}\) = 2 relu curves and figure 7 has \(N_{s}\) = 4 relu curves. Comparing the two figures we see that larger values of \(N_{s}\) lead to better approximation. The contribution of \(f(n_{1})\) to the \(j^{th}\) net function \(n_{2}(j)\) in the following layer is
\[n_{2}(j)=f(n_{1})\cdot w_{o}(j) \tag{41}\]
Decomposing the PWL activation into its \(N_{s}\) components, we can write
\[n_{2}(j) =\sum_{k=1}^{N_{s}}a_{k}\cdot r(n_{1}-ns_{k})\cdot w_{o}(j) \tag{42}\] \[=\sum_{k=1}^{N_{s}}w^{\prime}_{o}(j,k)\cdot r(n_{1}(k))\]
where \(w^{\prime}_{o}(j,k)\) is \(a_{k}\cdot w_{o}(j)\) and \(n_{1}(k)\) is \(n_{1}-ns_{k}\).
A single PWL activation for filter \(\{\mathbf{w}_{i},t\}\) has now become \(N_{s}\) relu activations \(f(n_{1}(k))\) for \(N_{s}\) filters, where each ramp \(r(n_{1}-ns_{k})\) is the activation output of a filter. These \(N_{s}\) filters are identical except for their thresholds. Although relu activations are efficiently computed, they have the disadvantage that back-propagation activates a relu unit only when its net value is positive. This leads to problems such as dead neurons [73]: if a neuron is never activated initially or during training, its gradients are zero and its weights never train. Such relu units are called dying relus [74]. Building on this subsection, in the next subsection we define a more robust PWL calculation, which can be initialized using any pre-defined activation such as relu [31] or leaky relu [32], and which is differentiable.
### Piecewise Linear Activations(PLA) and initialization
The PWL activations of subsection 4.1 have some limitations; in particular, the calculation fails when two hinges are very close together, which can happen as we train the hinges. In this section, we derive new notation and a new calculation of the PWL activation using linear interpolation.
Figure 8 shows a PWL activation for K hidden units which consists of multiple ramps, where \(ns(1,k)\) is the first hinge of the \(k^{th}\) hidden unit and \(a(1,k)\) is its activation value. These hinge values \(ns\) are constant throughout training. From the figure we can observe that the PWL curve passes through the activation of each of the 7 hinges. We define the total number of hinges as \(H\). The equation for the above figure is given in equation (45). Let \(s\) denote the maximum value of the net function and \(r\) its minimum value. The activations are calculated between consecutive hinges as follows.
Figure 8: Piecewise Linear Curve
Activation outputs for net values between the first two hinges are calculated with \(ns(1,k)\) denoted as \(m_{1}\) and \(ns(2,k)\) denoted as \(m_{2}\). Similarly, activation outputs between the next two hinges are calculated with \(ns(2,k)\) denoted as \(m_{1}\) and \(ns(3,k)\) denoted as \(m_{2}\). We do this for all \(H\) hinges. For each net value, \(m_{1}\) and \(m_{2}\) are calculated as \(m_{1}=\lceil\frac{n_{p}}{\delta ns}\rceil\) and \(m_{2}=m_{1}+1\), where \(\delta ns\) is the hinge spacing. Given the net function \(n_{p}(k)\), \(o_{p}(k)\) is calculated as
\[w_{1p}(k)=\frac{ns(m_{2},k)-n_{p}(k)}{ns(m_{2},k)-ns(m_{1},k)} \tag{43}\]
\[w_{2p}(k)=\frac{n_{p}(k)-ns(m_{1},k)}{ns(m_{2},k)-ns(m_{1},k)} \tag{44}\]
\[o_{p}(k)=w_{1p}(k)\cdot a(m_{1},k)+w_{2p}(k)\cdot a(m_{2},k) \tag{45}\]
where, for \(n_{p}(k)>s\), the last two hinges (\(m_{1}=H-1\), \(m_{2}=H\)) are used, and for \(n_{p}(k)<r\), the first two hinges (\(m_{1}=1\), \(m_{2}=2\)) are used, so that the curve is linearly extrapolated outside the sampled range.
From table 1, we can see that there are a total of \(H=7\) hinges ranging from -4 to 4 (the minimum and maximum net function values), with the corresponding sigmoid activations as the activation samples \(a\). Finally, we plot these points on the sigmoid curve in figure 9.
The curve after plotting these points should look like figure 10.
Figure 10 is the plot of net values versus activation values for a fixed piecewise sigmoid activation, where the \(7\) hinges are plotted onto the sigmoid curve. Now, for the final piecewise linear curve, we remove the sigmoid curve and join consecutive points using the linear interpolation technique.
Linear interpolation involves estimating a new value of a function between two known fixed points [75][76].
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline H & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline Fixed hinges (\(ns_{1}\)) & -4 & -2.67 & -1.33 & 0 & 1.33 & 2.67 & 4 \\ \hline Activations for hinges (\(a_{1}\)) & 0.02 & 0.07 & 0.21 & 0.5 & 0.79 & 0.94 & 0.98 \\ \hline \end{tabular}
\end{table}
Table 1: PWL samples and activations for one hidden unit
Figure 10: Sigmoid with Fixed Samples
Figure 11 illustrates linear interpolation between 2 fixed \(ns\) points. Given a new net value \(n_{1}(1)\), its corresponding activation value is as shown in the figure. To find \(o_{1}(1)\) between \(a(1,1)\) and \(a(2,1)\), we use the following equation.
\[o_{1}(1)=\frac{ns(2,1)-n_{1}(1)}{ns(2,1)-ns(1,1)}\cdot a(1,1)+\frac{n_{1}(1)-ns(1,1)}{ns(2,1)-ns(1,1)}\cdot a(2,1) \tag{46}\]
Finally, we use equation (45) to find all the activation outputs. The plot of net values versus activations is shown in figure 8.
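A minimal NumPy sketch of equations (43)-(45) is given below. It shifts the net values by the first hinge so that \(m_{1}\) is a valid 1-based index, and clamps \(m_{1}\) for net values outside \([r,s]\), which linearly extrapolates the end segments; the sigmoid initialization follows Table 1.

```python
import numpy as np

def pwl_activation(n_p, ns, a):
    """Piecewise linear activation by linear interpolation, eqs. (43)-(45).
    n_p : (Nv,) net values for one hidden unit;
    ns  : (H,) fixed, uniformly spaced hinge locations;
    a   : (H,) trainable activation samples at the hinges."""
    H = ns.size
    dns = ns[1] - ns[0]                        # hinge spacing (delta ns)
    m1 = np.clip(np.ceil((n_p - ns[0]) / dns).astype(int), 1, H - 1)
    m2 = m1 + 1
    w1 = (ns[m2 - 1] - n_p) / dns              # eq. (43)
    w2 = (n_p - ns[m1 - 1]) / dns              # eq. (44)
    return w1 * a[m1 - 1] + w2 * a[m2 - 1]     # eq. (45)

# Initialize from the sigmoid, as in Table 1 (H = 7 hinges on [-4, 4])
ns = np.linspace(-4.0, 4.0, 7)
a = 1.0 / (1.0 + np.exp(-ns))
out = pwl_activation(np.array([-5.0, -1.0, 0.5, 3.0, 6.0]), ns, a)
```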
### PLA gradients
The PWL activation samples \(\mathbf{A}\) discussed above are trained via steepest descent. The negative gradient matrix \(\mathbf{G_{a}}\) of the error \(E\) with respect to the activation samples is calculated as
\[g_{a}(k,m)=-\frac{\partial E}{\partial a(k,m)} \tag{47}\]
where \(k\) is the hidden unit number and \(m\) is the hinge index, \(m=1,\dots,H\).
\[g_{a}(k,m)=\frac{2}{N_{v}}\sum_{p=1}^{N_{v}}\sum_{i=1}^{M}(t_{p}(i)-y_{p}(i))\cdot\frac{\partial y_{p}(i)}{\partial a(k,m)} \tag{48}\]
\[\frac{\partial y_{p}(i)}{\partial a(k,m)}=w_{oh}(i,k)\cdot\frac{\partial o_{p}(k)}{\partial a(k,m)} \tag{49}\]
\[\frac{\partial o_{p}(k)}{\partial a(k,m)}=(\delta(m-m_{1})\cdot w_{1}(p,k))+(\delta(m-m_{2})\cdot w_{2}(p,k)) \tag{50}\]
where, for the \(p^{th}\) pattern and \(k^{th}\) hidden unit, \(m_{1}\) and \(m_{2}\) are the indices of the two fixed piecewise linear samples between which the net value \(n_{p}(k)\) lies, found using a search algorithm [72], and \(w_{1}(p,k)\) and \(w_{2}(p,k)\) are given by equations (43) and (44). Equation (50) gives the derivative for the \(p^{th}\) pattern's \(k^{th}\) hidden unit, and the gradient is accumulated over all patterns for each hidden unit.
The Adam optimizer [57] is used to find the learning factor and update the activation samples, which are updated as follows
\[\mathbf{A}=\mathbf{A}+z\cdot\mathbf{G_{a}} \tag{51}\]
Figure 11: Linear interpolation between 2 points
### PLA OLF
Using the gradient \(\mathbf{G_{a}}\), an optimal learning factor (OLF) for activation training can be calculated. The activation output \(o_{p}(k)\) is related to the gradient as
\[o_{p}(k)=w_{1}(p,k)\cdot[a(k,m_{1})+z\cdot g_{a}(k,m_{1})]+w_{2}(p,k)\cdot[a(k,m_{2})+z\cdot g_{a}(k,m_{2})] \tag{52}\]
The first partial derivative of E with respect to z is
\[\frac{\partial E}{\partial z}=-\frac{2}{N_{v}}\sum_{p=1}^{N_{v}}\sum_{i=1}^{M}(t_{p}(i)-y_{p}(i))\cdot\frac{\partial y_{p}(i)}{\partial z} \tag{53}\]
where
\[\frac{\partial y_{p}(i)}{\partial z}=\sum_{k=1}^{N_{h}}w_{oh}(i,k)\cdot((w_{1}(p,k)\cdot g_{a}(k,m_{1}))+(w_{2}(p,k)\cdot g_{a}(k,m_{2}))) \tag{54}\]
where \(m_{1}\) and \(m_{2}\) are again found for the \(p^{th}\) pattern and \(k^{th}\) hidden unit of the net vector \(n_{p}(k)\), and \(g_{a}(k,m_{1})\) and \(g_{a}(k,m_{2})\) come from the gradient calculated in equations (47)-(50).
Also the Gauss-Newton[77] approximation of the second partial is
\[\frac{\partial^{2}E(z)}{\partial z^{2}}=\frac{2}{N_{v}}\sum_{p=1}^{N_{v}}\sum _{i=1}^{M}\left[\frac{\partial y_{p}(i)}{\partial z}\right]^{2} \tag{55}\]
Thus the learning factor is calculated as
\[z=-\frac{\frac{\partial E}{\partial z}}{\frac{\partial^{2}E(z)}{\partial z^{2}}} \tag{56}\]
After finding the optimal learning factor the piecewise linear activations, \(\mathbf{A}\), are updated in a given iteration as
\[\mathbf{A}=\mathbf{A}+z\cdot\mathbf{G_{a}} \tag{57}\]
where \(z\) is the scalar optimal learning factor from equation (56) and \(\mathbf{G_{a}}\) is the gradient matrix from equation (47).
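A minimal NumPy sketch of the OLF computation of equations (53)-(56) is given below; it assumes the interpolation weights and the per-pattern gathered gradient samples have already been formed as \((N_{v},N_{h})\) arrays.

```python
import numpy as np

def activation_olf(T, Y, Woh, W1, W2, G1, G2):
    """Optimal learning factor for the activation samples, eqs. (53)-(56).
    T, Y : (Nv, M) targets/outputs; Woh : (M, Nh) output weights;
    W1, W2 : (Nv, Nh) interpolation weights (eqs. (43)-(44));
    G1, G2 : (Nv, Nh) gradient samples g_a(k, m1), g_a(k, m2) gathered
    per pattern from the gradient matrix of eq. (47)."""
    dy_dz = (W1 * G1 + W2 * G2) @ Woh.T                       # (Nv, M), eq. (54)
    dE_dz = -2.0 * np.mean(np.sum((T - Y) * dy_dz, axis=1))   # eq. (53)
    d2E_dz2 = 2.0 * np.mean(np.sum(dy_dz ** 2, axis=1))       # eq. (55), Gauss-Newton
    return -dE_dz / d2E_dz2                                   # eq. (56)
```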
The ADAPT-ACT-OLF algorithm can be summarized as follows:
```
1: Initialize \(\mathbf{W}\), \(\mathbf{W_{oi}}\), \(\mathbf{W_{oh}}\), \(N_{it}\)
2: Initialize fixed hinges \(ns\) and hinge activations \(\mathbf{a}\) as described in subsection 4.3, it \(\leftarrow\) 0
3: while it \(<N_{it}\) do
4:   Find gradients \(G\) and \(G_{hwo}\) from equations (47) and (22).
5:   Find learning factor \(z\)
6:   Calculate the gradient and learning factor for the activations from equations (47) and (56) respectively, and update the activations as in equation (57).
7:   OWO step: Solve equation (19) to obtain \(\mathbf{W_o}\)
8:   it \(\leftarrow\) it + 1
9: end while
```
**Algorithm 4** ADAPT-ACT-OLF algorithm
### PLA Advantages
In this subsection, we demonstrate the advantage of adaptive activations using a simple sine function and the more complicated Rosenbrock function. We train the Basic MOLF-ADAPT algorithm explained above. For each experiment, we use a fixed activation (sigmoid, tanh, relu, or leaky relu) and compare with the ADAPT-ACT-OLF algorithm initialized with the same activation.
#### 4.6.1 Sinusoidal Approximation
The sine data is generated using one feature and 5000 uniformly distributed random samples in the range \(0\) to \(4\pi\). The target output is the sine of the input samples.
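A minimal sketch of this data generation (the random seed is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 4.0 * np.pi, size=(5000, 1))  # one feature, 5000 samples
t = np.sin(x)                                       # target output
```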
Figure 12 shows the input versus target output plot, where the input \(x\) is in the range \(0\) to \(4\pi\) and the target output is the sine of the input. Looking at the plot, one might assume that a sigmoid or tanh activation would work better than relu or leaky relu.
In the following paragraphs, we train the Basic MOLF-ADAPT algorithm with each of the above-mentioned activations, and similarly use the ADAPT-ACT-OLF algorithm with the respective initial activations, using one and 10 hidden units respectively. We expect sigmoid and tanh activations to work better for the sinusoidal data, with relu and leaky relu performing worse, so we start with the relu and leaky relu results, followed by the sigmoid and tanh activation results.
ReLU Activation. Figure 13 shows the Basic MOLF-ADAPT and ADAPT-ACT-OLF results using ReLU as the activation function, where 13a is the input versus output using one fixed ReLU activation. We can observe that one ReLU hidden unit cannot approximate the simple sinusoidal function. When we increase the hidden unit count to 10, we can still observe from figure 13b that the fixed relu activation cannot approximate the sinusoidal curve. Next, we use relu as the initial activation function for the adaptive activation; based on multiple experiments we observed that more samples should be used to achieve better results for sinusoids, and in this particular experiment we used 20 samples. From figure 13c we see that with the adaptive activation the model approximates the curve better than with fixed relu activations, although it still fails to approximate the curve accurately. This problem can be reduced by adding more samples: with 40 samples the approximation is similar to that of fixed sigmoid activations, at increased computational cost. We can conclude that the adaptive activation with relu as the initial activation works better than the fixed activation. Next, we look at the leaky relu activation with an alpha value of 0.01. As
Figure 12: Sine Training Data
Figure 13: Relu activations output
it is similar to relu except for negative net values, we should expect similar results. Also, figure 13d shows the single hidden unit after training, and we can observe that its net versus activation output mimics the output curve, but with large activation outputs.
Leaky ReLU Activation. Figure 14 shows the Basic MOLF-ADAPT and ADAPT-ACT-OLF results using Leaky ReLU as the activation function, where 14a is the input versus output using one fixed Leaky ReLU activation. We can observe from figures 14a and 14b that leaky relu does not approximate the sinusoid, similar to the relu activation. But from figure 14c we can observe that the adaptive activations again work better, performing almost the same as with relu initialization. Even the net versus activation output in figure 14d is similar to that of the relu activations.
Sigmoid Activation. Figure 15 shows the Basic MOLF-ADAPT and ADAPT-ACT-OLF results, where 15a is the input versus output using one fixed sigmoidal activation. We can observe that one hidden unit is not sufficient to approximate the simple sinusoidal function. As we increase the hidden unit count to 10, the training approximates the sinusoidal curve, as is evident from figure 15b. Next we discuss the results with adaptive activations where the initial activation is the sigmoid. We chose 20 samples, since we observed that if the input-output mapping is curved, more samples are needed because our adaptive activation is piecewise linear. From figure 15c we can observe that the linear part of the sinusoidal curve is approximated well, but because of the linearity of the adaptive activations, the curved part is not approximated accurately. This problem can be eliminated by adding more samples: with 40 samples the approximation is similar to the fixed sigmoid activations, at increased computational cost. Figure 15d shows the net versus activation graph, which does not exactly mimic a sigmoid but is close to one, with small peaks on the positive and negative activation axes. Also note that the activation output values are very small compared to the relu and leaky relu activations shown in the paragraphs above.
TanH Activation
Figure 14: Leaky Relu activations output
Figure 15: Sigmoid activations output
Figure 16 shows the Basic MOLF-ADAPT and ADAPT-ACT-OLF results, where 16a is the input versus output using one fixed tanh activation. We can observe that one hidden unit is not sufficient to approximate the simple sinusoidal function. As we increase the hidden unit count to 10, the training approximates the sinusoidal curve, as is evident from figure 16b. Next we discuss the results with adaptive activations where the initial activation is tanh. We again chose 20 samples for the same reason as above. From figure 16c we can observe that the linear part of the sinusoidal curve is approximated well, but because of the linearity of the adaptive activations, the curved part is not approximated accurately. This problem can be eliminated by adding more samples: with 40 samples the approximation is similar to the fixed activations, at increased computational cost. Figure 16d shows the net versus activation graph, which is even closer to the sinusoidal curve than the sigmoidal adaptive activation. Also note that the activation output values are very small compared to the relu and leaky relu activations shown in the paragraphs above.
To conclude, using a fixed activation to approximate a sinusoidal curve does not yield promising results for every activation, especially for activations without curvature, which in our experiments are relu and leaky relu. Using the adaptive activation, however, the model converges to the same output from any initial activation, with the hidden unit trying to mimic the output curve. After multiple experiments, we conclude that more samples are needed to achieve a smooth sinusoidal curve.
#### 4.6.2 Rosenbrock Approximation
In this subsection we demonstrate approximation of the Rosenbrock function [78]. The inputs are generated as 1000 uniformly distributed random samples for each of the two input features, and the output is the Rosenbrock function with constant values a = 1 and b = 100. We normalize both the inputs and outputs to zero mean and unit standard deviation for ease of training; we saw no difference in results.
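A minimal sketch of this data generation is given below; the sampling ranges for the two inputs are assumptions, since the text does not state them.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 100.0
x1 = rng.uniform(-2.0, 2.0, 1000)             # sampling range assumed
x2 = rng.uniform(-1.0, 3.0, 1000)             # sampling range assumed
t = (a - x1) ** 2 + b * (x2 - x1 ** 2) ** 2   # Rosenbrock function

# Normalize inputs and outputs to zero mean, unit standard deviation
X = np.stack([x1, x2], axis=1)
X = (X - X.mean(0)) / X.std(0)
t = (t - t.mean()) / t.std()
```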
Figure 16: Tanh activations output
Figure 17a shows input 1 versus output for an MLP trained using ReLU activations. In 17a, we can see a scatter plot of predicted versus actual output, and we can observe that the approximation is not accurate. But in figures 17c and 17e, which are the results for adaptive activation training with 5 samples and 11 samples respectively, the approximation, while not exact, is better than the model trained with the fixed relu activation. Similarly, figure 17b shows input 2 versus output for the MLP trained using ReLU activations. Again, the approximation is not accurate, but in figures 17d and 17f, the adaptive activation results with 5 samples and 11 samples respectively are better than the model trained with the fixed relu activation.
Figure 17: Rosenbrock Approximation with ReLU Activations
Figure 18a shows input 1 versus output for an MLP trained using Leaky ReLU activations. In 18a, we can see a scatter plot of predicted versus actual output, and we can observe that the approximation is not accurate. But in figures 18c and 18e, which are the results for adaptive activation training with 5 samples and 11 samples respectively, the approximation, while not exact, is better than the model trained with the fixed leaky relu activation. Similarly, figure 18b shows input 2 versus output for the MLP trained using Leaky ReLU activations. Again, the approximation is not accurate, but in figures 18d and 18f, the adaptive activation results with 5 samples and 11 samples respectively are better than the model trained with the fixed leaky relu activation.
Figure 18: Rosenbrock Approximation with Leaky ReLU Activations
Figure 19a shows input 1 versus output for an MLP trained using sigmoid activations, as a scatter plot of predicted versus actual output. We can observe that the approximation is not accurate. But in figures 19c and 19d, which are the results for adaptive activation training with 5 samples and 11 samples respectively, the results with 5 samples are not as good, while the results with 11 samples are better than the model trained with the fixed sigmoid activation. Similarly, figure 19b shows input 2 versus output for the MLP trained using sigmoid activations. Again, the approximation is not accurate, but in figures 19e and 19f, the adaptive activation results with 5 samples and 11 samples respectively are better than the model trained with the fixed sigmoid activation.
Figure 19: Rosenbrock Approximation with Sigmoid Activations
Figure 20a shows input 1 versus output for an MLP trained using tanh activations, as a scatter plot of predicted versus actual output. We can observe that the approximation is not accurate. But in figures 20c and 20d, which are the results for adaptive activation training with 5 samples and 11 samples respectively, the results with 5 samples are not as good, while the results with 11 samples are better than the model trained with the fixed tanh activation. Similarly, figure 20b shows input 2 versus output for the MLP trained using tanh activations. Again, the approximation is not accurate, but in figures 20e and 20f, the adaptive activation results with 5 samples and 11 samples respectively are better than the model trained with the fixed tanh activation.
## 5 Experimental Methods and Results
In this section, we present experimental results for the proposed algorithm, demonstrating relative testing results on widely available approximation and classification data. Finally, we show results for shallow convolutional neural network architectures. The computational cost is measured on a Windows 10, Intel i7, 3 GHz CPU platform with 32 GB RAM.
Specifically, we compare the network performance of ADAPT-ACT-OLF, MOLF-ADAPT, SCG, CG-MLP [79][66][61], and LM [80][81] on approximation and classification data. We also show results for shallow convolutional neural networks with custom architectures and for deep CNNs using transfer learning.
Figure 20: Rosenbrock Approximation with Tanh Activations
### Approximation Datasets Results
From Table 2, we can observe that Adapt-OLF is the top performer on 5 out of the 7 data sets in terms of testing MSE. The next best-performing algorithm is MOLF-ADAPT, which wins on the weather data by a small margin and ties Adapt-OLF in testing MSE on two datasets. Adapt-OLF has slightly more parameters, depending on the number of samples, but its testing MSE is substantially reduced. The next best performer is LM, on the oh7 dataset. However, LM being a second-order method, its performance comes at a significant computational cost - almost two orders of magnitude greater than the rest of the proposed models.
### Classifier Datasets Results
From Table 3, we can observe that Adapt-OLF is the top performer on 5 out of the 6 data sets in terms of testing percentage of error (PE). The next best-performing algorithm is MOLF-ADAPT, which trails by a small margin. Since LM requires a significant computational cost, it cannot be used for pixel-based inputs and hence loses out on the majority of image classification use cases.
### CNN Results
#### 5.3.1 Shallow CNN results
In this section, we demonstrate shallow CNN results with ReLU, leaky ReLU, and adaptive activations. We use one-, two-, and three-VGG-block [82] architectures, and we benchmark our results on the CIFAR-10 dataset. Figure 21 shows the one-VGG-block architecture.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline Dataset & SCG/Nh & CG-MLP/Nh & LM/Nh & MOLF-ADAPT LReLU/Nh & Adapt-OLF 3 samples/Nh & Adapt-OLF 5 samples/Nh & Adapt-OLF 9 samples/Nh \\ \hline Oh7 & 1.971/30 & 1.52/100 & \(\mathbf{1.41}/30\) & 1.51/15 & 1.49/20 & 1.46/20 & 1.44/15 \\ \hline White Wine & 0.6/20 & 0.56/100 & 0.57/30 & 0.55/30 & \(\mathbf{0.54}/\mathbf{100}\) & 0.56/20 & \(\mathbf{0.54}/\mathbf{100}\) \\ \hline twod & 0.5/30 & 0.23/100 & 0.17/15 & \(\mathbf{0.149}/\mathbf{15}\) & \(\mathbf{0.149}/\mathbf{15}\) & 0.18/15 & 0.15/15 \\ \hline Superconduct & 230.91/15 & 180.21/100 & 170.2/100 & 144.46/100 & 142.53/100 & \(\mathbf{139.62}/\mathbf{100}\) & 142.53/100 \\ \hline F24 & 1.14/20 & 0.31/100 & 0.30/30 & 0.281/100 & 0.282/100 & 0.283/100 & \(\mathbf{0.279}/\mathbf{100}\) \\ \hline Concrete & 61.11/5 & 34.64/30 & 32.12/20 & 32.29/100 & \(\mathbf{30.70}/\mathbf{100}\) & 34.34/10 & 35.56/100 \\ \hline Weather & 316.68/15 & 283.23/30 & 286.27/30 & \(\mathbf{283.20}/\mathbf{10}\) & 284.39/15 & 284.34/15 & 284.73/15 \\ \hline \end{tabular}
\end{table}
Table 2: 10-fold cross-validation mean square error (MSE) testing results for approximation datasets (best testing MSE in bold)
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline Dataset & SCG/Nh & CG-MLP/Nh & LM/Nh & MOLF-ADAPT LReLU/Nh & Adapt-OLF 3 samples/Nh & Adapt-OLF 5 samples/Nh & Adapt-OLF 9 samples/Nh \\ \hline GongTrn & 10.28/100 & 10.46/100 & 8.94/30 & \(\mathbf{8.62}/30\) & 8.65/30 & 8.64/30 & 8.72/30 \\ \hline Comf18 & 15.69/100 & 14.50/100 & 12.63/5 & 11.83/20 & 11.93/30 & 11.87/30 & \(\mathbf{11.79}/\mathbf{30}\) \\ \hline f17c & 3.22/100 & 3.69/100 & 3.96/100 & 2.45/100 & 2.38/100 & 2.41/100 & \(\mathbf{2.34}/\mathbf{100}\) \\ \hline Speechless & 44.26/100 & 43.07/100 & 39.72/100 & 36.65/100 & \(\mathbf{35.96}/\mathbf{100}\) & 38.94/100 & 37.6/100 \\ \hline Cover & 27.39/100 & 29.87/100 & NA & 20.1/30 & 19.43/30 & 19.47/30 & \(\mathbf{19.42}/\mathbf{30}\) \\ \hline Scrap & 25.58/100 & 20.77/100 & NA & 19.9/100 & 19.57/100 & \(\mathbf{18.8}/\mathbf{100}\) & 19.2/100 \\ \hline \end{tabular}
\end{table}
Table 3: 10-fold cross-validation Percentage of Error (PE) testing results for classification datasets (best testing PE in bold)
From Figure 21 we can see that there is one VGG layer and a classification layer. The VGG layer consists of two convolution layers with 3x3 kernels and 32 filters (the first and third layers), with activations in the second and fourth layers; the final layer is a 2x2 max-pool layer. Similarly, the two-VGG-layer network has two VGG layers and one classification layer, where the first VGG layer has the same configuration as in the figure with 32 filters, and the second VGG layer consists of convolution layers with 3x3 kernels and a depth of 64 filters each, as shown in Figure 22.
Similarly, in the three-VGG-layer network, we have the two VGG layers mentioned above, and the third VGG layer has the same 3x3-kernel configuration as the second VGG layer, with a depth of 64 filters for each convolution layer, as shown in Figure 23.
Figure 21: One VGG layer with classifier
Figure 22: Two VGG layer with classifier
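As a reference, the block structure described above can be sketched in PyTorch as follows; the padding choice is our assumption (it keeps the 32x32 CIFAR-10 resolution halving cleanly per block), while the kernel sizes, filter counts, and pooling follow the text:

```python
import torch.nn as nn

def vgg_block(in_channels, out_channels):
    """One VGG block as described above: two 3x3 convolutions, each
    followed by an activation, then a 2x2 max-pool. Plain ReLU is used
    here; the adaptive variant swaps the last block's ReLUs for the
    trainable piecewise-linear activation."""
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

# Two-VGG-block classifier for 32x32 CIFAR-10 images: 32 then 64 filters,
# leaving an 8x8 feature map before the classification layer.
model = nn.Sequential(
    vgg_block(3, 32),
    vgg_block(32, 64),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 10),
)
```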
One thing to note here is that when the model is trained with ReLU activations, all activations used are ReLU, and similarly for leaky ReLU; in the adaptive-activation models, however, only the last VGG layer's activations are trained, while the remaining activations are not trained and are kept as either ReLU or leaky ReLU. For example, in the two-VGG-layer model, the second VGG layer's adaptive activation functions are trained, and the first VGG layer's activations are not used as trainable parameters. The number of samples (n) used in the activations for the experiments below is kept at 3, as [minimum value, 0, maximum value], where the first entry is the minimum value of the output of the convolution layer preceding the adaptive activation, the second entry is \(0\), and the final entry is the maximum value of that same convolution output. For example, when training a two-VGG-layer model as described above, the activations in the first VGG layer are not trained; the output of the first VGG layer, that is, the 2x2 max-pool output, is used as input to the second VGG layer, where the maximum and minimum values of the first convolution layer's output in the second VGG layer are used. Where the activations are not trainable, leaky ReLU is used, and the adaptive activations are also initialized using leaky ReLU. As suggested earlier, any activation can be used for initialization; the decision here was based on the results seen in Table 4 for the model with leaky ReLU activations.
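To make this sampling scheme concrete, the following is a minimal sketch of a three-sample adaptive activation in PyTorch. The hinge locations [minimum value, 0, maximum value] are fixed as described above and only the values at the hinges are trainable; the class name, the exact parameterisation, and the assumption that the observed minimum is negative and the maximum positive are ours, while the leaky-ReLU initialization follows the text:

```python
import torch
import torch.nn as nn

class PiecewiseLinearActivation(nn.Module):
    """Sketch of a 3-sample adaptive piecewise-linear (PLA) activation.

    Hinges sit at [x_min, 0, x_max] (assumed x_min < 0 < x_max); the
    activation values at the hinges are trainable. Between hinges the
    output is linear; the boundary segments are extrapolated outside.
    """
    def __init__(self, x_min, x_max, negative_slope=0.01):
        super().__init__()
        hinges = torch.tensor([x_min, 0.0, x_max])
        self.register_buffer("hinges", hinges)
        # initialise the hinge values with leaky ReLU, as in the text
        init = torch.where(hinges > 0, hinges, negative_slope * hinges)
        self.values = nn.Parameter(init.clone())

    def forward(self, x):
        h, v = self.hinges, self.values
        s0 = (v[1] - v[0]) / (h[1] - h[0])   # slope on [x_min, 0]
        s1 = (v[2] - v[1]) / (h[2] - h[1])   # slope on [0, x_max]
        left = v[1] + s0 * x                 # x <= 0 branch
        right = v[1] + s1 * x                # x > 0 branch
        return torch.where(x <= 0, left, right)
```

At initialization this module computes exactly leaky ReLU; training then reshapes the two segments independently.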
Table 4 shows results for the one-, two-, and three-VGG-layer models on the CIFAR10 [46] dataset with Glorot normal initialization. From the table we can observe that as the number of VGG layers increases, adaptive activations give better accuracy. Note that in the three-VGG-layer model only the third VGG layer's activations are trained, and we can still observe a significant difference in accuracy.
#### 5.3.2 Transfer Learning using Deep CNN results
In this section, we use two widely used pretrained deep learning models, VGG11 and ResNet18. These models are pretrained on the ImageNet dataset. We use a transfer-learning approach in which we replace the final classification layer with a new linear layer whose number of outputs equals 10, the number of classes in CIFAR-10, and train the model for at least 100 iterations. The importance of transfer learning is that the model has already learnt the important features, which helps generate good results with less data and fewer iterations. Similarly, in the model with adaptive
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline Dataset & Model & Weight Initialization & Adaptive Activations & ReLU Activations & LeakyReLU Activations \\ \hline CIFAR10 & 1 - VGG layers & Glorot Normal & \(\mathbf{67.8}\) & 66.56 & 66.45 \\ \hline CIFAR10 & 2 - VGG layers & Glorot Normal & \(\mathbf{74.2}\) & 71.82 & 73.09 \\ \hline CIFAR10 & 3 - VGG layers & Glorot Normal & \(\mathbf{75.53}\) & 72.58 & 73.3 \\ \hline \end{tabular}
\end{table}
Table 4: CIFAR-10 test accuracy (%) for the one-, two-, and three-VGG-layer models with Glorot normal initialization (best accuracy in bold)
Figure 23: Three VGG layer with classifier
activations, we change the final linear layer and also modify some of the final layers: in the ResNet18 architecture we replace the ReLU activation functions in the layer-4 basic blocks with adaptive activations, and in VGG11 we replace the ReLU activations after the 7th and 8th convolution layers, which are also the last two activations in the feature extractor, with adaptive activations. There are two main reasons for using adaptive activations in the final feature layers: first, fewer additional parameters; and second, the more complex and abstract features are found in the deeper layers [83]. From the results shown in Table 5, we can observe that adaptive activations give better results, with a slight increase in parameters and hence training time.
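A sketch of the ResNet18 variant of this setup, assuming the PiecewiseLinearActivation module from the earlier sketch and placeholder values for the observed pre-activation range:

```python
import torch.nn as nn
from torchvision import models

def resnet18_for_cifar10_with_adaptive(x_min=-1.0, x_max=1.0):
    """Transfer-learning setup sketched from the text: ImageNet-pretrained
    ResNet18, a new 10-class head for CIFAR-10, and the ReLUs inside the
    layer-4 basic blocks replaced by adaptive activations. x_min/x_max
    stand in for the observed pre-activation minimum and maximum."""
    model = models.resnet18(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, 10)
    for block in model.layer4:
        block.relu = PiecewiseLinearActivation(x_min, x_max)
    return model
```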
## 6 Conclusion and Future Work
A new adaptive piecewise linear activation (PLA) has been introduced. We demonstrated our results on simple and complex functions and showed that fixed activation functions cannot approximate every function accurately. We also showed results for various MLP structures, comparing PLA activations with different sets of samples against traditional fixed activations on approximation and classification data. Finally, we showed shallow convolutional neural network results with fixed and PLA activations. Across all experiments we observed that adaptive activations give better results. The drawback of PLA activations is that they need more computation, but the results achieved are comparatively better than those with fixed activations. We also observed that applications with a curved output require more PWL samples for accurate approximation.
|
2309.00561 | Exact Learning with Tunable Quantum Neural Networks and a Quantum
Example Oracle | In this paper, we study the tunable quantum neural network architecture in
the quantum exact learning framework with access to a uniform quantum example
oracle. We present an approach that uses amplitude amplification to correctly
tune the network to the target concept. We applied our approach to the class of
positive $k$-juntas and found that $O(n^22^k)$ quantum examples are sufficient
with experimental results seemingly showing that a tighter upper bound is
possible. | Viet Pham Ngoc, Herbert Wiklicky | 2023-09-01T16:18:39Z | http://arxiv.org/abs/2309.00561v1 | # Exact Learning with Tunable Quantum Neural Networks and a Quantum Example Oracle
###### Abstract
In this paper, we study the tunable quantum neural network architecture in the quantum exact learning framework with access to a uniform quantum example oracle. We present an approach that uses amplitude amplification to correctly tune the network to the target concept. We applied our approach to the class of positive \(k\)-juntas and found that \(O(n^{2}2^{k})\) quantum examples are sufficient with experimental results seemingly showing that a tighter upper bound is possible.
## 1 Introduction
Introduced in [1], the model of exact learning is, together with QPAC learning, one of the main frameworks of learning theory. Initially, the learner is given access to two different oracles: the membership oracle and the equivalence oracle. Suppose that \(c\) is the target concept. Given \(x\), a call to the membership oracle will return \(c(x)\), and given the learner's current hypothesis \(h\), the equivalence oracle will output a randomly chosen \(x\in\mathbb{B}^{n}\) such that \(h(x)\neq c(x)\), should such an \(x\) exist. The goal is then to produce a hypothesis \(h^{*}\) that is equal to \(c\) on every input. Given the impracticality of the equivalence oracle, it has become common practice to only work with the membership oracle [2, 3] or to forgo these two oracles altogether and use another kind, generally a random example oracle. In this case, the preferred flavour is uniformly distributed examples [4, 5, 6]. These last two research directions naturally gave rise to quantum generalisations of these oracles.
In this work we study the performance of the tunable quantum neural network introduced in [7] in the quantum exact learning framework, when the learner has access to the quantum version of the uniform example oracle. To do so, we devised a training algorithm that leverages amplitude amplification and fine-tuned it by learning generic Boolean functions. We show that in this case it performs better than a naive algorithm that does not use amplitude amplification. Finally, we adapted this algorithm to learn the class of \(k\)-juntas and found that it requires fewer examples than what can be found in the literature.
## 2 Quantum Exact Learning
As said previously, quantum exact learning is the extension of the exact learning framework to quantum oracles. We define the quantum membership oracle [8, 9, 10] and the uniform quantum example oracle [11] and formalise the training target in the quantum exact learning framework.
**Definition 2.1**.: _Let \(n\in\mathbb{N}\) and suppose that the target concept is \(c\in\mathbb{B}^{\mathbb{B}^{n}}\). Then the quantum membership oracle \(\mathbf{MO}(c)\) will have the following action:_
\[\forall x\in\mathbb{B}^{n},\forall b\in\mathbb{B},\ \mathbf{MO}(c)\left|x,b \right\rangle=\left|x,b\oplus c(x)\right\rangle \tag{1}\]
On the other hand, the uniform quantum example oracle is defined as:
**Definition 2.2**.: _Let \(n\in\mathbb{N}\) and \(c\in\mathbb{B}^{\mathbb{B}^{n}}\) be the target concept, then a query to the quantum example oracle \(\mathbf{EX}(c)\) will return the superposition \(\ket{\psi(c)}\) such that:_
\[\ket{\psi(c)}=\frac{1}{\sqrt{2^{n}}}\sum_{x\in\mathbb{B}^{n}}\ket{x,c(x)} \tag{2}\]
By using either one of these oracles, the goal of the learner in the exact learning framework is:
**Definition 2.3**.: _Let \(n\in\mathbb{N}\) and \(\mathcal{C}\subseteq\mathbb{B}^{\mathbb{B}^{n}}\) be a concept class. An algorithm is said to be an exact learner for \(\mathcal{C}\) if for every \(c\in\mathcal{C}\), it outputs a hypothesis \(h\in\mathbb{B}^{\mathbb{B}^{n}}\) such that, with high probability:_
\[\forall x\in\mathbb{B}^{\mathbb{B}^{n}},h(x)=c(x) \tag{3}\]
_This notion of high probability is to be defined. While some authors use a threshold of 2/3 [8], we chose to use 0.95._
In this contribution, we aim at performing exact learning with a learner having access to the uniform quantum example oracle.
## 3 Learning Algorithm
### Preliminary Analysis
Let \(n\in\mathbb{N}\) and \(c\in\mathbb{B}^{\mathbb{B}^{n}}\). As stated previously, we have access to the oracle \(\mathbf{EX}(c)\), a query to which will produce the superposition \(\ket{\psi(c)}\) given in Equation 2.
Now suppose that, in its current state (denoted \(\mathbf{T}(h)\)), the network is expressing \(h\in\mathbb{B}^{\mathbb{B}^{n}}\) then:
\[\mathbf{T}(h)\ket{\psi(c)}=\frac{1}{\sqrt{2^{n}}}\sum_{h(x)=c(x)}\ket{x}\ket{ 0}+\frac{1}{\sqrt{2^{n}}}\sum_{h(x)\neq c(x)}\ket{x}\ket{1} \tag{4}\]
So from Equation 4, it comes that the number of misclassified inputs and the probability of measuring the read-out qubit in state \(\ket{1}\) are directly proportional:
**Proposition 3.1**.: _Let \(E(h)=\{x\in\mathbb{B}^{n}\mid h(x)\neq c(x)\}\) be the set of misclassified inputs and \(P_{1}\), the probability of measuring the read-out qubit in state \(\ket{1}\), then:_
\[P_{1}=\frac{\left|E(h)\right|}{2^{n}} \tag{5}\]
_In particular, we have:_
\[\min_{P_{1}>0}P_{1}=\frac{1}{2^{n}} \tag{6}\]
The idea is thus to use amplitude amplification to boost this probability and hence measure as many misclassified inputs as possible before updating the network. In this setup, the diffusion operator is as follows:
**Definition 3.1**.: _Let \(n\in\mathbb{N}\) and \(c\in\mathbb{B}^{\mathbb{B}^{n}}\) be the target concept. Suppose that the network is expressing the function \(h\in\mathbb{B}^{\mathbb{B}^{n}}\) in its current state, denoted \(\mathbf{T}(h)\), we set:_
\[\mathcal{X}_{\alpha_{0}}(c)=\mathbf{I}-2\ket{\psi(c)}\bra{\psi(c)} \tag{7}\]
\[\mathbf{U}(h)=\mathbf{T}(h) \tag{8}\]
_And_
\[\mathcal{X}_{G}=\mathbf{I}^{\otimes n}\otimes\mathbf{Z} \tag{9}\]
_This allows us to define the diffusion operator:_
\[\mathbf{Q}(c,h)=-\mathbf{U}(h)\mathcal{X}_{\alpha_{0}}(c)\mathbf{U}^{\dagger} (h)\mathcal{X}_{G} \tag{10}\]
Because the minimum non-zero probability of measuring a misclassified input is known, we can also determine the maximum number of iterations of the diffusion operator needed to appropriately amplify the amplitudes of the inputs of interest:
**Definition 3.2**.: _Let \(n\in\mathbb{N}\), we define:_
\[\theta_{\min}=\arcsin\left(\frac{1}{\sqrt{2^{n}}}\right) \tag{11}\]
_And:_
\[m_{\max}=\arg\min_{k\in\mathbb{N}}\left|(2k+1)\theta_{\min}-\frac{\pi}{2}\right| \tag{12}\]
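As a concrete illustration (a sketch of ours, not part of the paper), the two quantities above can be computed directly; the closed form below simply rounds \((\pi/(2\theta_{\min})-1)/2\) to the nearest integer:

```python
import math

def m_max(n):
    """Maximum number of amplification rounds (Definition 3.2): the
    integer m minimising |(2m + 1) * theta_min - pi/2|, where
    theta_min = arcsin(2 ** (-n / 2))."""
    theta_min = math.asin(1.0 / math.sqrt(2 ** n))
    return round((math.pi / (2 * theta_min) - 1) / 2)

# For example, m_max(4) == 3 and m_max(8) == 12; as noted later in the
# text, m_max grows roughly like sqrt(2 ** n).
```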
We now show the following property:
**Proposition 3.2**.: _Let \(P_{1}\) be a non-zero probability of measuring the read-out qubit in state \(\left|1\right>\) in this setup. Let us denote \(P_{1}^{(m)}\) this same probability but after \(m\) rounds of amplitude amplification. Then there exists \(m_{1/2}\leq m_{\max}\) such that:_
\[P_{1}^{(m_{1/2})}\geq\frac{1}{2} \tag{13}\]
**Proof**:
Let \(\theta=\arcsin\bigl{(}\sqrt{P_{1}}\bigr{)}\), then because \(P_{1}\) is non-zero, we have:
\[\frac{1}{2^{n}}\leq P_{1}\leq 1 \tag{14}\]
Thus:
\[\theta_{\min}\leq\theta\leq\frac{\pi}{2} \tag{15}\]
If \(\frac{\pi}{4}\leq\theta\leq\frac{\pi}{2}\), then we can take \(m_{1/2}=0\). Otherwise, there exist \(m>0\) such that:
\[\frac{\pi}{4(2m+1)}\leq\theta<\frac{\pi}{4(2m-1)} \tag{16}\]
So:
\[\frac{\pi}{4}\leq(2m+1)\theta<\frac{\pi}{4}+\frac{\pi}{2(2m-1)}\leq\frac{3\pi} {4} \tag{17}\]
Hence after \(m\) rounds of amplitude amplification, we have:
\[\frac{1}{2}\leq\sin^{2}((2m+1)\theta)=P_{1}^{(m)} \tag{18}\]
Let us denote this \(m\), \(m_{1/2}\), then because \(P_{1}\geq\frac{1}{2^{n}}\), we also have \(m_{1/2}\leq m_{\max}\)
The general idea for the algorithm is the following:
1. E is the set of misclassified inputs, initialised with the empty set
2. For \(m\in[0,m_{\max}]\), perform \(m\) rounds of amplitude amplification, then:
   * Perform \(s\) measurements
   * If 1 is measured on the read-out qubit, add the measurement of the first \(n\) qubits to \(E\), the set of misclassified inputs
3. Update the TNN according to \(E\) and restart from 1.
4. The algorithm stops if \(E\) remains empty when 3. is reached
In order to ensure that the algorithm correctly terminates 95% of the time, we give a condition on the number of measurements \(s\).
**Definition 3.3**.: _Let \(m\in[0,m_{\max}]\) and \(s\in\mathbb{N}\), we define \(N_{m}\) the number of times the read-out qubit is measured in state \(\left|1\right>\) after \(m\) rounds of amplification and over \(s\) measurements. Then:_
\[N_{m}\sim\text{B}\left(s,P_{1}^{(m)}\right) \tag{19}\]
Using Definition 3.3, we can now estimate the probability for the algorithm to terminate incorrectly:
**Proposition 3.3**.: _The probability that the algorithm stops when there are still misclassified inputs is given by:_
\[P(N_{0}=0,\ldots,N_{m_{\max}}=0\mid P_{1}>0) \tag{20}\]
_If \(s\geq 5\), then_
\[P(N_{0}=0,\ldots,N_{m_{\max}}=0\mid P_{1}>0)\leq 0.05 \tag{21}\]
**Proof**:
We have:
\[P(N_{0}=0,\ldots,N_{m_{\text{max}}}=0\mid P_{1}>0)\leq P(N_{m_{1/2}}=0\mid P_{1}>0) \tag{22}\]
But
\[P(N_{m_{1/2}}=0\mid P_{1}>0)=(1-P_{1}^{(m_{1/2})})^{s}\leq\frac{1}{2^{s}} \tag{23}\]
So by taking \(s\geq 5\), we do have:
\[P(N_{0}=0,\ldots,N_{m_{\text{max}}}=0\mid P_{1}>0)\leq 0.05 \tag{24}\]
Given this condition on \(s\), we will now specify further the learning algorithm through the task of learning a generic Boolean function.
### Learning a Generic Boolean Function
Let \(n\in\mathbb{N}\). In this section, we aim at learning \(c\in\mathbb{B}^{\mathbb{B}^{n}}\) without assuming any property or structure on \(c\); that is, we take the concept class \(\mathcal{C}\) to be \(\mathbb{B}^{\mathbb{B}^{n}}\). This task allows us to specify the learning algorithm used to train a TNN in the exact learning framework. To do so, we compare it to a naive approach. Without assuming anything about the target concept \(c\), the naive way to correctly learn it is to evaluate \(c(x)\) for all \(x\in\mathbb{B}^{n}\). Given that we only have access to a uniform quantum example oracle and not a quantum membership oracle, the expected number of queries needed to measure all the inputs is given by:
**Theorem 3.1**.: _Let \(K\) be a set with \(|K|=N\) and \(\{k_{i}\}\) be uniformly sampled elements from \(K\). Now let \(s\in\mathbb{N}\) such that:_
\[\{k_{i}\}_{1\leq i\leq s}=K \tag{25}\]
_This problem is known as the coupon collector's problem [12] and it can be shown that:_
\[E(s)\sim N\ln(N) \tag{26}\]
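To make the estimate of Theorem 3.1 concrete, here is a small simulation sketch (ours, with an arbitrary fixed seed):

```python
import random

def coupon_collector_draws(N, rng=None):
    """Draw uniformly from N items until all have been seen, returning
    the number of draws, to compare against the N * ln(N) estimate."""
    rng = rng or random.Random(0)
    seen, draws = set(), 0
    while len(seen) < N:
        seen.add(rng.randrange(N))
        draws += 1
    return draws

# For N = 2**8 = 256, N * ln(N) is about 1420; simulated runs are of this
# order (the exact expectation, N times the N-th harmonic number, is
# slightly larger, about 1567).
```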
#### 3.2.1 Naive algorithm
Using Theorem 3.1, we propose the naive learning algorithm described in Algorithm 1. With this algorithm, \(\Theta(n2^{n})\) queries to the oracle are made during each update phase. Because after each update all of the misclassified inputs should have been measured, the result from [7] can be applied and the total number of updates is in \(\Theta(n)\). The total number of queries to the example oracle can thus be evaluated to be in \(\Theta(n^{2}2^{n})\).
```
Data: \(|\psi(c)\rangle\) and \(\mathbf{T}\), the network with all gates initialised to \(\mathbf{I}\)
Result: Tuned network \(\mathbf{T}\) expressing \(c\)
\(E\leftarrow[0]\);
\(s\leftarrow\lfloor 2^{n}\ln(2^{n})\rfloor\);
while \(E\neq\varnothing\) do
    \(E\leftarrow[]\);
    for \(1\leq i\leq s\) do
        Measure \(\mathbf{T}\left|\psi(c)\right\rangle\);
        if 1 is measured on the ancillary qubit then
            Add the first \(n\) qubits to \(E\);
        end if
    end for
    for \(u\in E\) do
        Update \(\mathbf{G}_{u}\) in \(\mathbf{T}\);
    end for
end while
```
**Algorithm 1** Naive algorithm
One property of this algorithm is that only the misclassified inputs matter for the updates. However, within these \(2^{n}\ln(2^{n})\) measurements, a mix of misclassified and correctly classified inputs will be measured. By using amplitude amplification, the measurements can instead be focused on the inputs of interest. The idea is thus to adapt the number of samples to the number of amplification rounds in order to measure just what is necessary.
#### 3.2.2 A first improvement
Let \(N_{err}\in[1,2^{n}]\) be the number of misclassified inputs and \(\theta_{err}\in\left]0,\frac{\pi}{2}\right]\) defined by:
\[\theta_{err}=\arcsin\left(\sqrt{\frac{N_{err}}{2^{n}}}\right)=\arcsin\left( \sqrt{P_{1}}\right) \tag{27}\]
Then there exists \(m_{err}\in\mathbb{N}\) such that:
\[\theta_{err}\in\left]\frac{\pi}{2(2m_{err}+3)},\frac{\pi}{2(2m_{err}+1)}\right] \tag{28}\]
This yields:
\[N_{err}\in\left]\sin^{2}\left(\frac{\pi}{2(2m_{err}+3)}\right)2^{n},\sin^{2} \left(\frac{\pi}{2(2m_{err}+1)}\right)2^{n}\right] \tag{29}\]
And after \(m_{err}\) rounds of amplification, the probability \(P_{1}^{(m_{err})}\) of measuring the readout qubit in state \(\left|1\right\rangle\) is now:
\[P_{1}^{(m_{err})}\in\left]\sin^{2}\left(\frac{\pi}{2}-\frac{\pi}{2m_{err}+3} \right),1\right] \tag{30}\]
For \(m\) sufficiently large, if \(N_{err}\in\left]\sin^{2}\left(\frac{\pi}{2(2m_{err}+3)}\right)2^{n},\sin^{2}\left(\frac{\pi}{2(2m_{err}+1)}\right)2^{n}\right]\), then after \(m\) rounds of amplification, the results of the measurements will mostly be misclassified inputs. Applying Theorem 3.1 and taking into account the condition on the number of samples from Proposition 3.3, we define the following:
**Definition 3.4**.: _Let \(n\in\mathbb{N}\) and \(m_{\max}\) as in Definition 3.2. Then for \(m\in[0\ldots m_{\max}]\), we denote:_
\[N_{m}=\sin^{2}\left(\frac{\pi}{2(2m+3)}\right)2^{n} \tag{31}\]
_And_
\[s_{m}=\max(5,N_{m}\ln(N_{m})) \tag{32}\]
The total number of samples during an update phase is thus: \(\sum_{m=0}^{m_{\max}}s_{m}\) which is to be compared to \(2^{n}\ln(2^{n})\)
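The schedule of Definition 3.4 is straightforward to tabulate; the following sketch (ours) reuses the m_max helper from the earlier snippet:

```python
import math

def sample_schedule(n):
    """Per-round sample counts s_m for m = 0..m_max (Definition 3.4)."""
    schedule = []
    for m in range(m_max(n) + 1):
        N_m = math.sin(math.pi / (2 * (2 * m + 3))) ** 2 * 2 ** n
        # max(5, .) enforces the s >= 5 condition of Proposition 3.3
        schedule.append(max(5.0, N_m * math.log(N_m)))
    return schedule

# The quantity plotted in Figure 1a is then
#   sum(sample_schedule(n)) / (2**n * math.log(2**n)),
# which stays below 1 for n large enough (Proposition 3.4).
```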
**Proposition 3.4**.: _Let \(n\in\mathbb{N}\), \(m_{\max}\) as in Definition 3.2 and \(s_{m}\) defined as in Definition 3.4. Then for \(n\) large enough:_
\[\sum_{k=0}^{m_{\max}}s_{m}<2^{n}\ln(2^{n}) \tag{33}\]
**Proof**:
For the sake of clarity, for \(m\in[0\ldots m_{\max}]\), we define:
\[u_{m}=\sin^{2}\left(\frac{\pi}{2(2m+3)}\right) \tag{34}\]
So that \(N_{m}=u_{m}2^{n}\). Instead of directly showing \(\sum_{m=0}^{m_{\max}}s_{m}<2^{n}\ln(2^{n})\), we will instead show:
\[\sum_{m=0}^{m_{\max}}\left(N_{m}\ln(N_{m})+5\right)<2^{n}\ln(2^{n}) \tag{35}\]
For the first sum:
\[\sum_{m=0}^{m_{\max}}N_{m}\ln(N_{m})=\sum_{m=0}^{m_{\max}}\ln\bigl{(}N_{m}^{N_ {m}}\bigr{)}=\ln\Biggl{(}\prod_{m=0}^{m_{\max}}N_{m}^{N_{m}}\Biggr{)} \tag{36}\]
And
\[\prod_{m=0}^{m_{\max}}N_{m}^{N_{m}} = \prod_{m=0}^{m_{\max}}(u_{m}2^{n})^{N_{m}}=\prod_{m=0}^{m_{\max}}u_{m}^{N_{m}}\prod_{m=0}^{m_{\max}}\left(2^{n}\right)^{N_{m}}=\prod_{m=0}^{m_{\max}}u_{m}^{N_{m}}\prod_{m=0}^{m_{\max}}\left(2^{n}\right)^{u_{m}2^{n}} \tag{37}\] \[= \prod_{m=0}^{m_{\max}}u_{m}^{N_{m}}\prod_{m=0}^{m_{\max}}\left(\left(2^{n}\right)^{2^{n}}\right)^{u_{m}} \tag{38}\]
Because \(u_{m}\leq 1\), we have \(\prod_{m=0}^{m_{\max}}u_{m}^{N_{m}}\leq 1\), hence:
\[\prod_{m=0}^{m_{\max}}N_{m}^{N_{m}}\leq\prod_{m=0}^{m_{\max}}\left(\left(2^{n} \right)^{2^{n}}\right)^{u_{m}} \tag{39}\]
But:
\[\prod_{m=0}^{m_{\max}}\left(\left(2^{n}\right)^{2^{n}}\right)^{u_{m}}=\left( \left(2^{n}\right)^{2^{n}}\right)^{\sum_{m=0}^{m_{\max}}u_{m}} \tag{40}\]
And:
\[\sum_{m=0}^{m_{\max}}u_{m} =\sum_{m=0}^{m_{\max}}\sin^{2}\left(\frac{\pi}{2(2m+3)}\right) \tag{41}\] \[\leq\sum_{m=0}^{m_{\max}}\left(\frac{\pi}{2(2m+3)}\right)^{2}\] (42) \[\leq\frac{\pi^{2}}{4}\sum_{m=1}^{m_{\max}+1}\frac{1}{(2m+1)^{2}}\] (43) \[\leq\frac{\pi^{2}}{4}\left(\sum_{m=1}^{+\infty}\frac{1}{m^{2}}- \sum_{m=1}^{+\infty}\frac{1}{(2m)^{2}}-1\right)\] (44) \[\leq\frac{\pi^{2}}{4}\left(\frac{\pi^{2}}{6}-\frac{\pi^{2}}{24}-1\right)\] (45) \[\leq 0.58 \tag{46}\]
This means:
\[\prod_{m=0}^{m_{\max}}N_{m}^{N_{m}}\leq\left(\left(2^{n}\right)^{2^{n}} \right)^{0.58} \tag{47}\]
Hence:
\[\sum_{m=0}^{m_{\max}}N_{m}\ln(N_{m})\leq 0.58\times 2^{n}\ln(2^{n}) \tag{48}\]
Now for the second sum:
\[\sum_{m=0}^{m_{\max}}5=5(m_{\max}+1) \tag{49}\]
But \(m_{\max}\approx\sqrt{2^{n}}\) so for \(n\) large enough:
\[\sum_{m=0}^{m_{\max}}5\approx 5\sqrt{2^{n}}\ll 2^{n}\ln(2^{n}) \tag{50}\]
Its contribution is thus negligible in the overall sum and we have:
\[\sum_{m=0}^{m_{\max}}s_{m}<\sum_{m=0}^{m_{\max}}\left(N_{m}\ln(N_{m})+5\right) <2^{n}\ln(2^{n}) \tag{51}\]
In order to have a better idea of the reduction in oracle queries introduced by this algorithm, we have plotted in Figure 1 the ratio, as a function of \(n\) for \(n\geq 4\), between the total number of samples and \(2^{n}\ln(2^{n})\). As shown in Figure 1a, for \(n\) large enough, the total number of samples is approximately halved when compared to the naive algorithm. To reduce this number further and speed up the training algorithm, we decided to increment \(m\), the number of amplification rounds, not by 1 but with powers of 2. The ratio of the total number of queries using this scheme, compared to the naive algorithm, is given in Figure 1b. In this case, we can see that it has been divided by more than 2. For these reasons, the increment schedule for \(m\) will now be:
\[\left[0,1,2,4,\ldots,m_{\max}\right] \tag{52}\]
The improved procedure is detailed in Algorithm 2. If \(m\) is the number of amplification rounds, the circuit used in Algorithm 2 is depicted in Figure 2.
#### 3.2.3 A further refinement
One issue with this algorithm resides in the fact that for \(n\in\mathbb{N}\) and \(0\leq m\leq m_{\text{max}}\), the range of misclassified inputs to be covered is:
\[\left]\sin^{2}\left(\frac{\pi}{2(2m+3)}\right)2^{n},\sin^{2}\left(\frac{\pi}{2(2 m+1)}\right)2^{n}\right] \tag{53}\]
So as \(m\) increases, the range will decrease. This means that approximating the number of misclassified inputs with the lower bound becomes more accurate as \(m\) increases. However, for small \(m\), this approximation is not as accurate. For example, for \(m=0\), the range given in Equation 53 becomes:
\[\left]\frac{2^{n}}{4},2^{n}\right] \tag{54}\]
This range can be refined by adding a second ancillary qubit, initialised in state \(\left|0\right>\) and a rotation gate acting on this qubit and controlled by the first ancillary qubit. We give a more accurate definition of this gate:
**Definition 3.5**.: _Let \(m_{0}\in\mathbb{N}\), we define:_
\[\theta_{m_{0}}=\frac{\pi}{2(2m_{0}+1)} \tag{55}\]
Figure 1: Ratio \(\sum_{m=0}^{m_{\text{max}}}s_{m}/2^{n}\ln(2^{n})\) as a function of \(n\) for \(n\geq 4\). In Figure 1a, \(m\) has been incremented by 1, while in Figure 1b, \(m\) has been incremented with powers of 2
_And we denote \(\mathbf{CR}_{m_{0}}\) the controlled rotation (around the \(y\)-axis) gate such that:_
\[\mathbf{CR}_{m_{0}}\left|10\right\rangle=\cos\left(\theta_{m_{0}}\right)\left|1 0\right\rangle+\sin\left(\theta_{m_{0}}\right)\left|11\right\rangle \tag{56}\]
Then if \(c\in\mathbb{B}^{\mathbb{B}^{n}}\) is the target concept and the network is expressing \(h\in\mathbb{B}^{\mathbb{B}^{n}}\) in its current state \(\mathbf{T}(h)\), we have:
\[\mathbf{CR}_{m_{0}}\mathbf{T}(h)\left|\psi(c)\right\rangle\left|0\right\rangle =\sin\left(\theta_{m_{0}}\right)\frac{1}{\sqrt{2^{n}}}\sum_{h(x)\neq c(x)} \left|x\right\rangle\left|11\right\rangle+\left|\bot\right\rangle \tag{57}\]
So the inputs of interest are now marked with the two ancillary qubits being in \(\left|11\right\rangle\) and the probability \(P_{11}\) of measuring such inputs is now such that:
\[P_{11}\in\left[0,\sin^{2}\left(\theta_{m_{0}}\right)\right] \tag{58}\]
And in this case:
\[\min_{P_{11}>0}P_{11}=\sin^{2}\left(\theta_{m_{0}}\right)\frac{1}{2^{n}} \tag{59}\]
We adapt Definitions 3.1 and 3.2 to take into account this modification:
**Definition 3.6**.: _Let \(n\in\mathbb{N}\), \(c\in\mathbb{B}^{\mathbb{B}^{n}}\) be the target concept and \(m_{0}\in\mathbb{N}\). Suppose that the network is expressing the function \(h\in\mathbb{B}^{\mathbb{B}^{n}}\) in its current state \(\mathbf{T}(h)\). We denote:_
\[\mathcal{X}_{\alpha_{0}}(c)=(\mathbf{I}-2\left|\psi(c)\right\rangle\left\langle \psi(c)\right|)\otimes\mathbf{I} \tag{60}\]
\[\mathbf{U}_{m_{0}}(h)=\mathbf{CR}_{m_{0}}\mathbf{T}(h) \tag{61}\]
_And_
\[\mathcal{X}_{G}=\mathbf{I}^{\otimes n+1}\otimes\mathbf{Z} \tag{62}\]
_The diffusion operator \(\mathbf{Q}_{m_{0}}(c,h)\) is then defined as:_
\[\mathbf{Q}_{m_{0}}(c,h)=-\mathbf{U}_{m_{0}}(h)\mathcal{X}_{\alpha_{0}}(c) \mathbf{U}_{m_{0}}^{\dagger}(h)\mathcal{X}_{G} \tag{63}\]
We now define the quantities that are related to this modified process.
**Definition 3.7**.: _Let \(n\in\mathbb{N}\) and \(m_{0}\in\mathbb{N}\), we denote:_
\[\theta_{\min,m_{0}}=\arcsin\left(\sin\left(\theta_{m_{0}}\right)\frac{1}{ \sqrt{2^{n}}}\right) \tag{64}\]
_And:_
\[m_{\max,m_{0}}=\arg\min_{m\in\mathbb{N}}\left|(2m+1)\theta_{\min,m_{0}}-\frac {\pi}{2}\right| \tag{65}\]
Let \(m_{0}\in\mathbb{N}\). Now let \(\theta_{err}\in\left]0,\frac{\pi}{2}\right]\) such that:
\[\theta_{err}=\arcsin\Bigl{(}\sqrt{P_{11}}\Bigr{)} \tag{66}\]
Then there exists \(m\in\mathbb{N}\) such that:
\[\theta_{err}\in\left]\frac{\pi}{2(2m+3)},\frac{\pi}{2(2m+1)}\right] \tag{67}\]
But according to Equation 58, we have \(P_{11}\leq\sin^{2}\left(\theta_{m_{0}}\right)\) so it comes that:
\[m\geq m_{0} \tag{68}\]
Now if \(N_{err}\) is the number of misclassified inputs, then Equation 57 yields:
\[N_{err}=\sin^{2}(\theta_{err})\frac{2^{n}}{\sin^{2}(\theta_{m_{0}})} \tag{69}\]
Figure 2: Quantum circuit for the improved algorithm
Hence:
\[N_{err}\in\bigg{]}\!\sin^{2}\left(\frac{\pi}{2(2m+3)}\right)\frac{2^{n}}{\sin^{2 }(\theta_{m_{0}})},\sin^{2}\left(\frac{\pi}{2(2m+1)}\right)\frac{2^{n}}{\sin^{ 2}(\theta_{m_{0}})}\bigg{]} \tag{70}\]
So for \(m\geq m_{0}\), the ratio between the two bounds of this interval is:
\[1\leq\frac{\sin^{2}\left(\frac{\pi}{2(2m+1)}\right)}{\sin^{2}\left(\frac{\pi}{ 2(2m+3)}\right)}\leq\frac{\sin^{2}\left(\frac{\pi}{2(2m_{0}+1)}\right)}{\sin^{ 2}\left(\frac{\pi}{2(2m_{0}+3)}\right)}\approx\left(\frac{m_{0}+3}{m_{0}+1} \right)^{2}=\left(1+\frac{2}{m_{0}+1}\right)^{2} \tag{71}\]
Where the approximation is valid for \(m_{0}>0\). So as \(m_{0}\) grows, it is possible for the bounds of the interval given in Equation 70 to become quite close. Moreover, after \(m\) rounds of amplitude amplification, we have:
\[P_{11}^{(m)}\in\bigg{]}\!\sin^{2}\left(\frac{2m+1}{2m+3}\frac{\pi}{2}\right),1\bigg{]} \tag{72}\]
With:
\[\sin^{2}\left(\frac{2m+1}{2m+3}\frac{\pi}{2}\right)=\sin^{2}\left(\frac{\pi}{ 2}-\frac{\pi}{2m+3}\right)=\cos^{2}\left(\frac{\pi}{2m+3}\right)=1-\sin^{2} \left(\frac{\pi}{2m+3}\right) \tag{73}\]
As \(m\geq m_{0}\), we have:
\[\sin^{2}\left(\frac{2m+1}{2m+3}\frac{\pi}{2}\right)\geq 1-\sin^{2}\left(\frac{ \pi}{2m_{0}+3}\right) \tag{74}\]
Here again, for \(m_{0}\) large enough, it is possible for the lower bound to be very close to 1. This means that for \(m_{0}\) large enough, whatever the number of misclassified inputs (or equivalently the probability \(P_{11}\)), as long as it is non-zero, it is possible to find a \(0\leq m\leq m_{\max,m_{0}}\) such that after \(m\) rounds of amplitude amplification, the measurement process will be a random process where the probability of measuring a misclassified input is close to 1. In addition, the number of such inputs can more accurately be approximated by the lower bound of the interval given in Equation 70. However, it does not suffice to take \(m_{0}\) very large. Indeed, as \(m_{\max,m_{0}}\) increases with \(m_{0}\), so does the total number of samples. A good choice for this parameter is thus one that allows for the behaviour previously described while ensuring that the number of samples remains small enough.
**Definition 3.8**.: _Let \(n\in\mathbb{N}\), \(m_{0}\in\mathbb{N}\) and \(m_{\max,m_{0}}\) as in Definition 3.7. For \(m_{0}\leq m\leq m_{\max,m_{0}}\) we define:_
\[N_{m,m_{0}}=\sin^{2}\left(\frac{\pi}{2(2m+3)}\right)\frac{2^{n}}{\sin^{2}( \theta_{m_{0}})} \tag{75}\]
_And_
\[s_{m,m_{0}}=\max\left(N_{m,m_{0}}\ln(N_{m,m_{0}}),5\right) \tag{76}\]
The total number of samples during an update phase is thus \(\sum_{m=m_{0}}^{m_{\max}}s_{m,m_{0}}\) but in order to speed up the algorithm we chose to keep the schedule introduced in the first improvement hence:
**Definition 3.9**.: _Let \(n\in\mathbb{N}\), \(m_{0}\in\mathbb{N}\), \(m_{\max,m_{0}}\) as in Definition 3.7 and \(s_{m,m_{0}}\) as in Definition 3.8, then the total number of samples \(S_{m_{0}}\)during an update phase is given by:_
\[S_{m_{0}}=s_{m_{0},m_{0}}+\sum_{m_{0}<2^{p}<m_{\max,m_{0}}}s_{m,m_{0}}+s_{m_{ \max,m_{0}},m_{0}} \tag{77}\]
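A sketch of this computation (ours; the schedule is the one of Definition 3.9, i.e. \(m_{0}\), then the powers of two between \(m_{0}\) and \(m_{\max,m_{0}}\), then \(m_{\max,m_{0}}\)):

```python
import math

def total_samples(n, m0):
    """S_{m0} from Definition 3.9: total samples in one update phase when
    the controlled rotation CR_{m0} is prepended to the network."""
    sin2_theta0 = math.sin(math.pi / (2 * (2 * m0 + 1))) ** 2
    theta_min = math.asin(math.sqrt(sin2_theta0 / 2 ** n))
    m_top = round((math.pi / (2 * theta_min) - 1) / 2)
    rounds, p = [m0], 1
    while p < m_top:
        if p > m0:
            rounds.append(p)
        p *= 2
    rounds.append(m_top)
    total = 0.0
    for m in rounds:
        N_m = (math.sin(math.pi / (2 * (2 * m + 3))) ** 2
               * 2 ** n / sin2_theta0)
        total += max(5.0, N_m * math.log(N_m))
    return total

# Figure 3 plots total_samples(n, m0) / (2**n * math.log(2**n)).
```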
To guide the choice of a suitable \(m_{0}\), the ratio \(S_{m_{0}}/2^{n}\ln(2^{n})\) has been plotted for different values of \(n\in\mathbb{N}\) and \(m_{0}\in\mathbb{N}\) in Figure 3. From this figure, it is apparent that the values \(0\leq m_{0}\leq 4\) are suitable, with \(m_{0}=0\) corresponding to the first improvement introduced earlier. Algorithm 3 describes the training algorithm that will be used to train the network. Notice that, when measuring, we still look at the first ancillary qubit. Indeed, because of the controlled rotation gate \(\mathbf{CR}_{m_{0}}\), a misclassified input will result in the ancillary qubits being measured either in state \(\left|10\right\rangle\) or \(\left|11\right\rangle\). While the latter is used to refine the amplification process, the former still corresponds to a state of interest, and it would be a waste to ignore it during the measurements.
For \(m_{0}\in\mathbb{N}\) and \(m\in\mathbb{N}\), the circuit being measured is depicted in Figure 4.
### Implementation
Let \(n\in\mathbb{N}\). In order to specify the choice of \(m_{0}\), we have implemented Algorithm 3 for \(0\leq m_{0}\leq 4\) with the target being a generic Boolean function \(c\in\mathbb{B}^{\mathbb{B}^{n}}\). In detail, for \(4\leq n\leq 8\), 16 target functions have been chosen randomly and, for each target function, the network has been trained 50 times. Each time, the number of samples needed until the algorithm stopped has been recorded, as well as the final error rate. As a comparison, the naive algorithm has also been implemented and studied under the same regime. Concerning the error rate, when using amplitude amplification, whatever the values of \(n\) and \(m_{0}\), the final error rate was consistently equal to 0, as shown in Figure 5a, indicating that
Figure 3: Ratio \(S_{m_{0}}/2^{n}\ln(2^{n})\) as a function of \(n\) for different values of \(m_{0}\)
the target function has indeed been exactly learnt. On the other hand, when training with the naive algorithm, some of the experiments for \(n=4\) to \(6\) failed to exactly learn the target function, as depicted in Figures 5b to 5d.
Hence, when only looking at the goal of exactly learning, the algorithm using amplitude amplification performs better than the naive algorithm and thus for the different values of \(m_{0}\) that have been selected. To further confirm the advantage of the refined algorithm over the naive one, we put our focus on the number of samples required to learn the target functions. For each chosen \(m_{0}\), the mean number of samples taken over all the experiments, i.e. all the 16 functions and the 50 runs by function, has been plotted against the dimension of the input space \(n\) for \(4\leq n\leq 8\) in Figure 6. As a point of comparison, the same metric has been plotted for the naive algorithm.
What transpires from this comparison is that for the selected \(m_{0}\), the number of necessary samples is considerably lower than for the naive algorithm, with the gap increasing as the input dimension increases. Two values for \(m_{0}\) particularly stand out: \(m_{0}=2\) and \(m_{0}=4\) as they lead to the lowest number of samples with the plot for \(m_{0}=4\)
Figure 4: Quantum circuit for the improved algorithm
Figure 5: Final error rate for different experiments
being below the one for \(m_{0}=2\). While this trend seems to be contradictory with Figure 3, this can be explained by the fact that the number of updates when \(m_{0}=4\) is lower than when \(m_{0}=2\). Seeing that the difference in terms of sample number between these two choices is relatively small, it seems that \(m_{0}=2\) offers the best compromise between minimising the number of samples and minimising the running time of the training algorithm. In Section 4, a modified version of Algorithm 3 will be used with \(m_{0}=2\) to learn \(k\)-juntas. The modifications will concern the number of queries during an update stage as well as the update strategy with regard to the measurement outcomes.
## 4 Learning Positive \(k\)-Juntas
### Description of the Concept Class and the Update Algorithm
**Definition 4.1**.: _Let \(n\in\mathbb{N}^{*}\) and \(k<n\). A Boolean function \(c\in\mathbb{B}^{\mathbb{B}^{n}}\) is said to be a \(k\)-junta if its output only depends on at most \(k\) of its input variables, these variables are then called the relevant variables. Let \(\rho_{c}\subset[0\ldots n-1]\) be the set of the relevant variables of \(c\), then:_
\[|\rho_{c}|\leq k \tag{78}\]
_Additionally, we define a positive \(k\)-junta, \(c\in\mathbb{B}^{\mathbb{B}^{n}}\), as being a \(k\)-junta such that:_
\[c(0)=0 \tag{79}\]
_And we denote \(\mathcal{J}_{k}^{+}\) the class of the positive \(k\)-juntas._
The fact that a function is a positive \(k\)-junta will impose a restriction on its algebraic normal form:
**Proposition 4.1**.: _Let \(n\in\mathbb{N}^{*}\) and \(c\in\mathbb{B}^{\mathbb{B}^{n}}\) a positive \(k\)-junta. Then there exist \(u_{1}^{(c)},\ldots,u_{p}^{(c)}\in\mathbb{B}^{n}\setminus\{0\}\) with \(p<2^{k}\) such that:_
\[\forall i\in[1,p],1_{u_{i}^{c}}\subseteq\rho_{c} \tag{80}\]
_And:_
\[c=\bigoplus_{i=1}^{p}m_{u_{i}^{c}} \tag{81}\]
_We will call \(u_{1}^{(c)},\ldots,u_{p}^{(c)}\) the principals of \(c\)._
Figure 6: Mean number of samples taken over all the experiments (functions and runs) against the dimension \(n\) for different values of \(m_{0}\) (\(0\leq m_{0}\leq 4\)) and the naive algorithm
**Proof**:
Let \(n\in\mathbb{N}^{*}\), \(k<n\) and \(c\in\mathcal{J}_{k}^{+}\). For \(x\in\mathbb{B}^{n}\), we denote \(x_{|\rho_{c}}\) the restriction of \(x\) to the relevant variables of \(c\). As \(c\) is a \(k\)-junta, there exist \(c^{\prime}\in\mathbb{B}^{\mathbb{B}^{|\rho_{c}|}}\) such that:
\[\forall x\in\mathbb{B}^{n},c(x)=c^{\prime}(x_{|\rho_{c}}) \tag{82}\]
As \(c^{\prime}\in\mathbb{B}^{\mathbb{B}^{|\rho_{c}|}}\), it has an ANF, i.e. there exist \(u_{1}^{(c^{\prime})},\ldots,u_{p}^{(c^{\prime})}\in\mathbb{B}^{|\rho_{c}|}\) such that:
\[c^{\prime}=\bigoplus_{i=1}^{p}m_{u_{i}^{c^{\prime}}} \tag{83}\]
Because \(c(0)=c^{\prime}(0)=0\), we also have:
\[u_{1}^{(c^{\prime})},\ldots,u_{p}^{(c^{\prime})}\in\mathbb{B}^{|\rho_{c}|} \setminus\{0\} \tag{84}\]
Now to extend \(c^{\prime}\) to \(c\), for \(i\in[1\ldots p]\) we simply extend \(u_{i}^{c^{\prime}}\in\mathbb{B}^{|\rho_{c}|}\) to \(u_{i}^{c}\in\mathbb{B}^{n}\) by padding with zeros where necessary. This ensures that \(1_{u_{i}^{c}}\subseteq\rho_{c}\), \(u_{1}^{(c)},\ldots,u_{p}^{(c)}\in\mathbb{B}^{n}\setminus\{0\}\) and:
\[c=\bigoplus_{i=1}^{p}m_{u_{i}^{c}} \tag{85}\]
To understand the effect of the principals of a \(k\)-junta, we will introduce the notion of filter, derived from set theory:
**Definition 4.2**.: _Let \(n\in\mathbb{N}\) and \(u\in\mathbb{B}^{n}\). We denote by \({}_{\uparrow}u\) the filter generated by \(u\) defined by:_
\[{}_{\uparrow}u=\{v\in\mathbb{B}^{n}\mid 1_{u}\subseteq 1_{v}\} \tag{86}\]
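Representing inputs as integer bitmasks, filter membership reduces to a bitwise check; the helper below (its name is ours) is reused in later sketches:

```python
def in_filter(u, v):
    """True iff v is in the filter generated by u (Definition 4.2),
    i.e. every bit set in u is also set in v (1_u is a subset of 1_v)."""
    return u & v == u

# Example: u = 0b0101 generates the filter {0b0101, 0b0111, 0b1101, 0b1111}.
assert in_filter(0b0101, 0b1101)
assert not in_filter(0b0101, 0b1001)
```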
By using the filters generated by the principals of a positive \(k\)-junta \(c\), a characterisation of the inputs \(v\in\mathbb{B}^{n}\) such that \(c(v)=1\) can be done.
**Proposition 4.2**.: _Let \(n\in\mathbb{N}^{*}\), and \(c\in\mathcal{J}_{k}^{+}\). Let \(u_{1}^{(c)},\ldots,u_{p}^{(c)}\) be the principals of \(c\). Then for \(v\in\mathbb{B}^{n}\), \(c(v)=1\) if and only if there exists an odd number of principals, \(u_{i}^{(c)}\), such that \(v\) is an element of \({}_{\uparrow}u_{i}^{(c)}\)._
**Proof**:
This result stems from the ANF of the function. \(\blacksquare\)
In order to show the effects of updating the network, we introduce the following:
**Proposition 4.3**.: _Let \(n\in\mathbb{N}^{*}\) and \(c\in\mathcal{J}_{k}^{+}\). Let \(v\in\mathbb{B}^{n}\) such that \(c(v)=1\), then:_
\[\forall w\in{}_{\uparrow}v,c(w)\oplus m_{v}(w)=\overline{c(w)} \tag{87}\]
_And:_
\[\forall w\notin{}_{\uparrow}v,c(w)\oplus m_{v}(w)=c(w) \tag{88}\]
**Proof**:
This is a consequence of Proposition 4.2, together with the fact that the monomial \(m_{v}\) evaluates to 1 exactly on \({}_{\uparrow}v\).
Let \(v\in\mathbb{B}^{n}\) such that \(c(v)=1\). Then according to Property 4.2:
\[v\in\bigcup_{i=1}^{p}{}_{\uparrow}u_{i}^{(c)} \tag{89}\]
And because \({}_{\uparrow}v\subset\bigcup_{i=1}^{p}{}_{\uparrow}u_{i}^{(c)}\), it comes that any misclassified input is an element of the filter generated by at least one principal of the target function. By updating the network with gates controlled by misclassified inputs we thus ensure that at any time during the training process, the network will express a hypothesis \(h\in\mathbb{B}^{\mathbb{B}^{n}}\) such that:
\[h=\bigoplus_{u\in G}m_{u}\text{ where }G\subseteq\bigcup_{i=1}^{p}{}_{ \uparrow}u_{i}^{(c)} \tag{90}\]
So the goal of the tuning algorithm is to gradually descend to the principals of the target function by adding to the network gates controlled by inputs of progressively lower Hamming weight. Once a gate controlled by \(v\) is added, all the gates controlled by inputs in \({}_{\uparrow}v\) can be trimmed from the network.
To facilitate this process, it would be advantageous to measure misclassified inputs that are close, in terms of Hamming weight, to the principals of the target function. Indeed, by doing so, the network could be updated with gates that are closer to the principals, thus cutting down the number of update steps. This can be achieved by using, once more, amplitude amplification. The target function being a \(k\)-junta, we know that its principals have a Hamming weight of at most \(k\). So by using AA to focus on the inputs with that property, this descent process can be facilitated. Formally, let us define the following:
**Definition 4.3**.: _Let \(n\in\mathbb{N}^{*}\) and \(k<n\). We define \(\mathcal{X}_{\leq k}\in U(2^{n})\) such that:_
\[\forall x\in\mathbb{B}^{n},\mathcal{X}_{\leq k}\ket{x}=\begin{cases}-\ket{x}&\text{if }w_{H}(x)\leq k\\ \ket{x}&\text{otherwise}\end{cases} \tag{91}\]
_Additionally, we define \(\mathcal{X}_{\psi}\in U(2^{n+1})\) by:_
\[\mathcal{X}_{\psi}=2\ket{\psi(c)}\!\!\bra{\psi(c)}-\mathbf{I} \tag{92}\]
_The diffusion operator for the inputs with Hamming weight of at most \(k\) is then given by:_
\[\mathcal{X}_{\psi}(\mathcal{X}_{\leq k}\otimes\mathbf{I}) \tag{93}\]
Because we know exactly how many of these inputs are in the superposition, we can determine the number of iterations of this diffusion operator that are needed to appropriately amplify these inputs:
**Definition 4.4**.: _Let \(n\in\mathbb{N}^{*}\) and \(k<n\). Let \(N_{\leq k}\) be the number of inputs with Hamming weight of at most \(k\), then:_
\[N_{\leq k}=\sum_{j=0}^{k}\binom{n}{j} \tag{94}\]
_This allows us to define \(p_{\leq k}\), the number of iterations for the diffusion operator defined in Definition 4.3:_
\[p_{\leq k}=\arg\min_{p\in\mathbb{N}}\left|(2p+1)\arcsin\left(\sqrt{\frac{N_{ \leq k}}{2^{n}}}\right)-\frac{\pi}{2}\right| \tag{95}\]
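As an illustration (a sketch of ours), \(N_{\leq k}\) and \(p_{\leq k}\) can be computed directly:

```python
import math

def p_leq_k(n, k):
    """Iterations of the diffusion operator of Definition 4.3 needed to
    amplify the inputs of Hamming weight at most k (Definition 4.4)."""
    N_leq_k = sum(math.comb(n, j) for j in range(k + 1))
    theta = math.asin(math.sqrt(N_leq_k / 2 ** n))
    return max(0, round((math.pi / (2 * theta) - 1) / 2))

# For example, p_leq_k(8, 2) == 2: two rounds focus the superposition on
# the 37 inputs of weight at most 2 among the 256 basis states.
```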
We can now abstract the amplification process into a unitary:
**Definition 4.5**.: _Let \(n\in\mathbb{N}^{*}\) and \(k<n\). Let \(\mathcal{X}_{\leq k}\) and \(\mathcal{X}_{\psi}\) be the gates defined in Definition 4.3 and \(p_{\leq k}\) as in Definition 4.4, then the operator \(\mathbf{A}_{\leq k}\in U(2^{n+1})\) defined by:_
\[\mathbf{A}_{\leq k}=[\mathcal{X}_{\psi}(\mathcal{X}_{\leq k}\otimes\mathbf{I} )]^{p_{\leq k}} \tag{96}\]
_will appropriately amplify the amplitudes of the inputs with Hamming weight of at most \(k\)._
Before continuing, we must briefly redefine the diffusion operator introduced in Definition 3.6 for it to be compatible with this additional procedure.
**Definition 4.6**.: _Let \(n\in\mathbb{N}^{*}\) and \(k<n\) and \(\mathbf{A}_{\leq k}\) as defined in Definition 4.5. We denote:_
\[\ket{\psi_{\leq k}(c)}=\mathbf{A}_{\leq k}\ket{\psi(c)} \tag{97}\]
_From this, we redefine \(\mathcal{X}_{\alpha_{0}}\) as introduced in Definition 3.6 by:_
\[\mathcal{X}_{\alpha_{0}}=(\mathbf{I}-2\ket{\psi_{\leq k}(c)}\!\!\bra{\psi_{ \leq k}(c)})\otimes\mathbf{I} \tag{98}\]
_The other components remaining the same (with \(m_{0}=2\)) we then have:_
\[\mathbf{Q}=-\mathbf{U}(h)\mathcal{X}_{\alpha_{0}}(c)\mathbf{U}(h)^{\dagger}\mathcal{X}_{G} \tag{99}\]
_Where \(\mathbf{U}(h)=\mathbf{C}\mathbf{R}_{2}\mathbf{T}(h)\)._
The circuit used in the learning process is depicted in Figure 7.
An update phase will thus unfold as follows (to simplify the explanation, a gate and its controlling input are conflated). To take advantage of the filter structure, the measured inputs are separated into correctly classified and misclassified ones. Within each group, the inputs are sorted by increasing Hamming weight and treated in that order. For each misclassified input, we count the gates already scheduled to be added whose filter contains this input. If this count is even, the input is added to the list of gates to be updated, and all the gates of the current network that lie in this input's filter are removed. The same process is performed for the correctly classified inputs, except that the count has to be odd for the input to be added to the list of gates to be updated. This update process is detailed in Algorithm 4; a classical sketch is given below.
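In this sketch, gates are identified with their controlling bitmasks; the function and variable names are ours, and Algorithm 4 in the paper remains the authoritative description (in_filter is the helper from earlier):

```python
def hamming_weight(x):
    return bin(x).count("1")

def update_network(gates, wrong, right):
    """One update phase: `gates` is the set of monomial controls in the
    network; `wrong`/`right` are the measured misclassified and correctly
    classified inputs (int bitmasks). Returns the updated gate set."""
    to_add = []
    for x in sorted(set(wrong), key=hamming_weight):
        covered = sum(1 for u in to_add if in_filter(u, x))
        if covered % 2 == 0:          # x would still be misclassified
            to_add.append(x)
            gates = {u for u in gates if not in_filter(x, u)}  # trim the filter of x
    for x in sorted(set(right), key=hamming_weight):
        covered = sum(1 for u in to_add if in_filter(u, x))
        if covered % 2 == 1:          # pending updates would flip x
            to_add.append(x)
            gates = {u for u in gates if not in_filter(x, u)}
    return gates | set(to_add)
```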
One quantity that remains to be determined is the number of queries to the quantum example oracle during such an update phase. While it would have been possible to train the network using the sampling schedule introduced in Section 3.2, this approach is best used when no characteristic is known of the target concept. In the case of \(k\)-juntas, it seems natural for the query complexity to be a function of \(k\). As a result, when learning \(k\)-juntas, we chose to perform \(2^{k}\) measurements after each amplification phase leading to:
**Proposition 4.4**.: _Let \(n\in\mathbb{N}^{*}\) and \(k<n\). Suppose that the concept class is \(\mathcal{J}_{k}^{+}\). If the number of queries after each amplification phase is \(2^{k}\), then during an update phase, the number of queries to the quantum example oracle is in:_
\[\Theta(n2^{k}) \tag{100}\]
This leads to the sample complexity of the whole algorithm. Thanks to Proposition 4.4, we have already established that one update phase will perform \(\Theta(n2^{k})\) calls to the example oracle. To determine the number of updates, we make the following observation. Let \(u_{i}^{(c)}\) be one of the principals of the target function \(c\in\mathcal{J}_{k}^{+}\). Suppose further that we update the network with a gate controlled by \(v\in{{}_{\uparrow}u_{i}^{(c)}}\). We are assured that during a further update step, we will correct an input \(w\in{{}_{\uparrow}u_{i}^{(c)}}\) such that \(v\in{}_{\uparrow}w\), either because \(w\) was measured randomly or because all of the elements of \({}_{\uparrow}v\) have been corrected beforehand, thus ensuring that \(w\) will be measured. However, thanks to the pre-amplification that focuses on the inputs with Hamming weight of at most \(k\), the first event is most likely to happen. Once the gate
Figure 7: Quantum circuit to learn \(k\)-juntas
controlled by \(w\) has been updated, the same process will repeat until we reach \(u_{i}^{(c)}\). All in all, we have that the number of updates is in \(O(n)\), and hence the query complexity is in:
\[O(n^{2}2^{k}) \tag{101}\]
### Experimental Results
We have implemented1 this learning algorithm for \(n\in[5\dots 8]\) and \(k\in[2\dots n-1]\). For each pair of \(n\) and \(k\), the network has been trained to learn 16 randomly created \(k\)-juntas. Each training has been repeated 25 times. The final error rates for these experiments were all similar to what is reported in Figure 8.
Footnote 1: The code can be found here: [https://github.com/vietphamgo/exact_AA](https://github.com/vietphamgo/exact_AA)
This figure uses violin plots; it shows that all of the training runs successfully stopped with a correctly tuned network. We now focus on the number of update steps required to reach these results.
To have an overview of this metric, for a given \(n\) and a given \(k\), we have aggregated all of the training runs for all of the functions. For a given \(n\), this set of data has then been plotted against \(k\) using box plots. This way we can visualise the median and the first and third quartiles, depicted by the red line and the lower and upper bounds of the box, respectively. The maximum and minimum are indicated by the whiskers, and the circles represent outliers. All of these are shown in Figure 9. From these results, we can see that for a given \(n\), the number of updates is indeed of the order of \(n\). More interestingly, it slightly decreases as \(k\) increases. This can be explained by the fact that, during an update phase, the update algorithm has access to more samples, hence the descent to the principals of the target function is quicker. Another reason is that, for the algorithm to find a principal when \(k\) is small, it will potentially have to go "deeper", as the Hamming weight of a principal is at most \(k\).
However, our \(O(n)\) upper bound on the number of updates still holds. From these experiments, we have verified that the total sample complexity is in \(O(n^{2}2^{k})\). In [11], the task was also to learn \(k\)-juntas with access to a uniform quantum example oracle, albeit in the QPAC-learning framework. The complexity of their algorithm was \(O\left(\frac{k}{\epsilon}\log(k)\right)\). Assuming that this algorithm can be applied in the exact learning framework by taking \(\epsilon=\frac{1}{2^{n}}\), we end up with a complexity of \(O(2^{n}k\log(k))\). So in the case where \(k\ll n\), our algorithm performs better.
## 5 Conclusion
In this work, we have devised an algorithm to train a tunable quantum neural network in the exact learning framework with access to a uniform quantum example oracle. We refined it by employing it to learn generic Boolean functions and by comparing it to a naive algorithm. We then adapted this algorithm to learn the class of \(k\)-juntas. Following the implementation of this approach, we found that the query complexity is in \(O(n^{2}2^{k})\). This complexity is lower than what can be found in the literature in the case where \(k\ll n\).
|
2308.00890 | Tango: rethinking quantization for graph neural network training on GPUs | Graph Neural Networks (GNNs) are becoming increasingly popular due to their
superior performance in critical graph-related tasks. While quantization is
widely used to accelerate GNN computation, quantized training faces
unprecedented challenges. Current quantized GNN training systems often have
longer training times than their full-precision counterparts for two reasons:
(i) addressing the accuracy challenge leads to excessive overhead, and (ii) the
optimization potential exposed by quantization is not adequately leveraged.
This paper introduces Tango which re-thinks quantization challenges and
opportunities for graph neural network training on GPUs with three
contributions: Firstly, we introduce efficient rules to maintain accuracy
during quantized GNN training. Secondly, we design and implement
quantization-aware primitives and inter-primitive optimizations that can speed
up GNN training. Finally, we integrate Tango with the popular Deep Graph
Library (DGL) system and demonstrate its superior performance over
state-of-the-art approaches on various GNN models and datasets. | Shiyang Chen, Da Zheng, Caiwen Ding, Chengying Huan, Yuede Ji, Hang Liu | 2023-08-02T00:51:37Z | http://arxiv.org/abs/2308.00890v2 | # Tango: rethinking quantization for
###### Abstract.
Graph learning is becoming increasingly popular due to its superior performance in tackling many grand challenges. While quantization is widely used to accelerate Graph Neural Network (GNN) computation, quantized training faces remarkable roadblocks. Current quantized GNN training systems often experience longer training time than their full-precision counterparts for two reasons: (i) addressing the quantization accuracy challenge leads to excessive overhead, and (ii) the optimization potential exposed by quantization is not adequately leveraged. This paper introduces Tango which re-thinks quantization _challenges_ and _opportunities_ for graph neural network training on GPUs with three contributions: Firstly, we introduce efficient rules to maintain accuracy during quantized GNN training. Secondly, we design and implement quantization-aware primitives and inter-primitive optimizations to speed up GNN training. Finally, we integrate Tango with the popular Deep Graph Library (DGL) system and demonstrate its superior performance over the state-of-the-art approaches on various GNN models and datasets.
data format nor floating-point format with dynamically adjusted exponent and mantissa. The model trained using SWALP or clipped gradients needs more epochs to converge because the weights are updated less frequently. _(ii) The optimization potential offered by quantization is not well-utilized._ ActNN (Wang et al., 2017), TinyKG (Wang et al., 2017), and EXACT (Wang et al., 2017) quantize the tensors to save memory and dequantize them back to full-precision for computation, increasing the overall training time (Wang et al., 2017; Wang et al., 2017). For example, TinyKG with 8-bit quantization is 54.1% slower than using FP32. Of note, Degree-Quant (Zheng et al., 2019) performs Quantization-Aware Training (QAT) (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2019) that uses full-precision to "simulate quantization" in training to reduce the error for quantized inference, which, again, experiences longer training time.
This paper introduces Tango, the first GPU-based quantized GNN training system that both _maintains the model accuracy_ and _reduces turnaround time_ when compared to the full precision counterpart. Particularly, Tango encompasses three contributions:
* We introduce several lightweight rules to maintain accuracy for quantized GNN training. The rules include GPU-accelerated stochastic rounding, derivation of proper quantization bit count, novel quantization-aware GEMM, SPMM, and SDDMM, and full precision weight update and softmax.
* We design and implement a quantization-aware system to reduce the GNN training time on GPUs. Our techniques include GEMM with on-the-fly quantization, incidence-matrix-based adaptive SPMM, SDDMM with on-the-fly dequantization, and inter-primitive optimizations.
* For ease of use, we integrate Tango with DGL, which uses PyTorch as the backend. Therefore, all existing DGL-based models can enjoy the performance benefits from Tango without any changes. We demonstrate that Tango constantly outperforms state of the art for all evaluated GNN models while maintaining the training accuracy.
The remainder of this paper is organized as follows: Section 2 presents the background. Section 3 discusses the design and techniques in Tango. Specifically, Section 3.1 describes the challenges and opportunities of Tango, Section 3.2 illustrates the lightweight rules for maintaining training accuracy during quantization, and Section 3.3 presents the systematic effort on quantization accelerated training. We evaluate Tango in Section 4, describe related work in Section 5, and conclude in Section 6.
## 2. Background
### A running example for GAT training
This section uses a running example to illustrate how to express the forward and backward computations of GAT with three key primitives, i.e., GEMM, SPMM, and SDDMM.
**Primitives for forward computation.** Figure 1a presents the forward workflow of GAT on a toy graph: node projection (step 1), attention computation (steps 2-4), and message aggregation (step 5).
In step 1, GAT resorts to the _GEMM primitive_ to perform a linear transformation of the node features, i.e., \(\mathbf{H}^{\prime}=\mathbf{H}^{(l-1)}\cdot\mathbf{W}\). Each row of \(\mathbf{H}^{(l-1)}\), shown in the same color, represents the features of one node, and each node feature contains two heads. Using the first row of \(\mathbf{H}^{(l-1)}\) as an example, [0.01, 0.40] is the first head, and the remaining two values belong to the second head. \(\mathbf{W}\) is the learnable weight matrix for the linear transformation.
In step 2, GAT consolidates each head of the feature vector into one scalar by GEMM, i.e., \(\mathbf{S}=(\mathbf{H}^{\prime}\cdot\mathbf{a}_{src})^{T}\) and \(\mathbf{D}=(\mathbf{H}^{\prime}\cdot\mathbf{a}_{dst})^{T}\). Using node \(v_{0}\) as an example, for the source feature, [0.59, 0.73]\(\times\)[0.91, 0.90]\({}^{T}\) = 1.20, and [0.51, -0.65]\(\times\)[0.42, 0.62]\({}^{T}\) = -0.19. Similarly, we can derive the entire \(\mathbf{S}\) and \(\mathbf{D}\).
In step 3, the source (\(\mathbf{S}\)) and destination (\(\mathbf{D}\)) node feature matrices are combined by an _SDDMM primitive_ to arrive at the edge features \(\mathbf{E}\). Formally, the SDDMM is defined as \(\mathbf{E}=\mathbf{G}\odot(\mathbf{S}\oplus\mathbf{D}^{T})\), where every row of \(\mathbf{S}\) computes against every row of \(\mathbf{D}\) with the customized operation \(\oplus\), and \(\odot\) is a Hadamard product operator. The resultant matrix \(\mathbf{E}\) is masked by the sparse adjacency matrix \(\mathbf{G}\) of the graph, so \(\mathbf{E}[i][j]=0\) when there is no edge between \(v_{i}\) and \(v_{j}\). In Figure 1a, \(\oplus\) denotes addition. Using edge \(e_{3}\) as an example, since it connects source \(v_{0}\) and destination \(v_{3}\), we arrive at [1.20, -0.19] + [0.20, 0.05] = [1.40, -0.14] for \(e_{3}\). Then an element-wise LeakyReLU is applied to the edge features. Particularly, each non-negative entry in \(\mathbf{E}\) is unchanged while negative ones become close to 0, which we represent as 0. Hence [1.40, -0.14] becomes [1.40, 0.00] in Figure 1a.
In step 4, the edges with the same destination come together to compute head-wise attention scores through a softmax operation.
Figure 1. GAT training on a toy graph, i.e., middle left of (a), with two heads. This example is used throughout this paper.
Using node \(v_{3}\) as an example, it has \(e_{3}\) and \(e_{4}\) as the incoming edges. Therefore, the attention scores of \(e_{3}\) are computed as \(0.63=\frac{e^{1.40}}{e^{1.40}+e^{0.86}}\) and \(0.46=\frac{e^{0.00}}{e^{0.00}+e^{0.16}}\), while those of \(e_{4}\) are \(0.37=\frac{e^{0.86}}{e^{1.40}+e^{0.86}}\) and \(0.54=\frac{e^{0.16}}{e^{0.00}+e^{0.16}}\). Putting this example into a general formula, we use SPMM and SDDMM operations together to compute the denominator as follows: First, we use _SPMM_ to aggregate the in-edges for every node, i.e., \(\mathbf{M}^{\prime}=(\mathbf{G}\odot exp(\mathbf{E}))\cdot\mathbf{1}\). Of note, \(\mathbf{1}\) is an all-'1' dense matrix. Second, since the first step computed the denominator for each destination vertex, we use _SDDMM_ to assign this denominator back to each incoming edge via \(\mathbf{E}^{\prime}=\mathbf{G}\odot(\mathbf{1}\cdot\mathbf{M}^{\prime T})\). Subsequently, \(\boldsymbol{\alpha}=\frac{exp(\mathbf{E})}{\mathbf{E}^{\prime}}\).
In step 5, GAT performs an _SPMM_ to derive the new node embeddings via \(\mathbf{H}^{(l)}=(\mathbf{G}\odot\boldsymbol{\alpha})\cdot\mathbf{H}^{\prime}\). Intuitively, this step derives the new embedding of a destination vertex by computing the weighted sum of all its incoming neighbors. Using \(v_{3}\) as an example, its incoming neighbors are \([v_{0},v_{2}]\). We arrive at \(\mathbf{H}^{(l)}[v_{3}]=\boldsymbol{\alpha}[e_{3}]\cdot\mathbf{H}^{\prime}[v_{0}]+\boldsymbol{\alpha}[e_{4}]\cdot\mathbf{H}^{\prime}[v_{2}]\), resulting in [0.49, 0.61, 0.77, -0.58].
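To make the running example concrete, the sketch below replays step 4 on the toy numbers above with plain NumPy. The features of \(e_{4}\) ([0.86, 0.16]) are inferred from the attention scores quoted in the text, so treat them as an assumption:

```python
import numpy as np

# Step 3 output (after LeakyReLU) for the two in-edges of v3, per head.
# e3 = [1.40, 0.00] comes from the text; e4 = [0.86, 0.16] is inferred
# from the attention scores 0.37 and 0.54 reported for e4.
E = np.array([[1.40, 0.00],   # e3
              [0.86, 0.16]])  # e4

# Step 4: head-wise softmax over the incoming edges of v3.
alpha = np.exp(E) / np.exp(E).sum(axis=0, keepdims=True)
print(alpha.round(2))  # [[0.63 0.46]
                       #  [0.37 0.54]]
```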
**Primitives for backward computation.** Figure 1b is the backward pass of Figure 1a. Steps 6 (_SPMM_) and 7 (_SDDMM_) of Figure 1b are the corresponding backward operations for step 5 of Figure 1a. In the forward SPMM operation (step 5), \(\mathbf{H}^{(l)}=(\mathbf{G}\odot\boldsymbol{\alpha})\cdot\mathbf{H}^{\prime}\); we hence disperse the gradients of \(\mathbf{H}^{(l)}\) to both \(\mathbf{H}^{\prime}\) and \(\boldsymbol{\alpha}\). First, we arrive at \(\partial\mathbf{H}^{\prime}=(\mathbf{G}^{T}\odot\boldsymbol{\alpha})\cdot\partial\mathbf{H}^{(l)}\), which is an _SPMM_ (step 6) on the reversed graph since the updated node features \(\mathbf{H}^{(l)}\) are aggregated from the source node features. Using \(\mathbf{H}^{\prime}[v_{1}]\) as an example, it receives gradients from both \(e_{0}\) and \(e_{2}\). Therefore, we arrive at \(\partial\mathbf{H}^{\prime}[v_{1}]=\boldsymbol{\alpha}[e_{0}]\cdot\partial\mathbf{H}^{(l)}[v_{0}]+\boldsymbol{\alpha}[e_{2}]\cdot\partial\mathbf{H}^{(l)}[v_{2}]\). That is, [1.56, 1.57, -0.19, 0.49] \(=1.0\times\)[0.54, 0.51] \(+1.0\times\)[1.02, 1.06] \(\|_{concat}\)\(1.0\times\)[-0.26, -0.07] \(+1.0\times\)[0.07, 0.56]. Second, we have \(\partial\boldsymbol{\alpha}=\mathbf{G}\odot(\partial\mathbf{H}^{(l)}\cdot\mathbf{H}^{\prime T})\), which is an SDDMM operator on the original graph that performs a row-wise dot-product (step 7). Using \(\partial\boldsymbol{\alpha}[e_{0}]\) as an example, since \(e_{0}\) connects nodes \(v_{1}\) and \(v_{0}\), we arrive at \(\partial\boldsymbol{\alpha}[e_{0}]=\partial\mathbf{H}^{(l)}[v_{0}]\cdot\mathbf{H}^{\prime}[v_{1}]\). In the example, we get [0.78, -0.13] \(=\)[0.54, 0.51]\(\times\)[0.76, 0.73]\({}^{T}\|_{concat}\)[-0.26, -0.07]\(\times\)[0.79, -1.07]\({}^{T}\).
Step 8 computes the gradient of the edge features using the attention scores. Using \(e_{3}\) as an example, we compute \(\partial\mathbf{E}[e_{3}]=\boldsymbol{\alpha}[e_{3}]\cdot(\partial\boldsymbol{\alpha}[e_{3}]-(\partial\boldsymbol{\alpha}[e_{3}]\boldsymbol{\alpha}[e_{3}]+\partial\boldsymbol{\alpha}[e_{4}]\boldsymbol{\alpha}[e_{4}]))\) based on the derivative of the softmax operation. We first use _SPMM_ to aggregate the incoming edge features for every node, \(\mathbf{P}=(\mathbf{G}\odot\partial\boldsymbol{\alpha}\odot\boldsymbol{\alpha})\cdot\mathbf{1}\). For example, \(\partial\boldsymbol{\alpha}[e_{3}]\boldsymbol{\alpha}[e_{3}]+\partial\boldsymbol{\alpha}[e_{4}]\boldsymbol{\alpha}[e_{4}]\) is the aggregation on \(v_{3}\), i.e., \(0.80\times 0.63+0.45\times 0.37=0.67\) for the first head. Then we compute the final gradient for every edge with _SDDMM_: \(\partial\mathbf{E}=\boldsymbol{\alpha}\odot(\partial\boldsymbol{\alpha}-(\mathbf{G}^{T}\odot(\mathbf{P}\cdot\mathbf{1}^{T})))\). That is, every node's value in \(\mathbf{P}\) is assigned to its out-edges in the reversed graph and then combined with \(\partial\boldsymbol{\alpha}\) and \(\boldsymbol{\alpha}\). The gradient of the first head of \(e_{3}\) is \(0.63\times(0.80-0.67)=0.08\).
In steps 9 and 10, the gradient of the edge attention scores is used to compute the gradients of the source and destination features with two _SPMM_ operations, \(\partial\mathbf{S}=(\mathbf{G}^{T}\odot\partial\mathbf{E})\cdot\mathbf{1}\) and \(\partial\mathbf{D}=(\mathbf{G}\odot\partial\mathbf{E})\cdot\mathbf{1}\), where nodes aggregate their out-edge and in-edge attention gradients, respectively. Still using \(v_{3}\) as an example, its gradients are \(\partial\mathbf{S}[v_{3}]=\partial\mathbf{E}[e_{1}]=[0,0]\) and \(\partial\mathbf{D}[v_{3}]=\partial\mathbf{E}[e_{3}]+\partial\mathbf{E}[e_{4}]=[0,0.15]\). The gradients from multiple edges are accumulated.
Note that steps 11 and 12, which do not depend on the graph structure, follow the traditional DNN backpropagation method for gradient computation; we skip the details.
### GNN models
There exists a variety of GNN models. Graph Convolutional Network (GCN) (Gai et al., 2017) derives a graph convolutional operator through spectral graph theory; it can be expressed with GEMM and SPMM operations. Later, GraphSAGE (Gai et al., 2017) uses sampling to encode the graph topology for inductive learning. The model is applicable to unseen nodes because it learns features from sampled sub-graphs. _GraphSAGE can be implemented with GEMM and SPMM_. GAT (Gai et al., 2018) further introduces graph attention mechanisms that can attend to various neighbors with weights. This model contains GEMM, SPMM, and SDDMM primitives. Later, Relational GCN (RGCN) extends GCN by assigning different parameters to edges of different types (Shi et al., 2018; Wang et al., 2018). _RGCN consists of GEMM and SPMM primitives._ HGT proposes a transformer-based GNN model for heterogeneous graphs (Wang et al., 2018); it contains different parameters for distinct edge and node types. _This model includes GEMM, SDDMM, and SPMM primitives_.
We study two GNN models, i.e., GCN and GAT, for two reasons: (i) These two models are the most popular and cover all the required primitives for most GNN models. (ii) These two models contain relatively large and complete training and testing datasets.
### Quantization
For a collection of values \(\mathbf{X}=\{\mathbf{X}_{i}\mid\mathbf{X}_{i}\in[\mathbf{X}_{min},\mathbf{X}_{max}]\}\) represented in full precision, quantization uses fewer bits (i.e., \(B\)) to represent each \(\mathbf{X}_{i}\). Quantization scatters \(\mathbf{X}\) into \(2^{B}-1\) buckets; subsequently, all the \(\mathbf{X}_{i}\)'s in the same bucket are represented by the same value, i.e., the bucket value.
For _uniform_ quantization, we assign each bucket the same value range, that is, \(s=\frac{\beta-\alpha}{2^{B}-1}\), where \([\alpha,\beta]\) is the clipping range of \(\mathbf{X}\). There are also _nonuniform quantization_ schemes whose quantized values are not necessarily uniformly spaced. If one wants to include the entire value range of \(\mathbf{X}\), one needs \(\alpha=\mathbf{X}_{min}\) and \(\beta=\mathbf{X}_{max}\). Formally,
\[\mathbf{X}_{i,Quant}=round(\frac{\mathbf{X}_{i}}{s})-Z, \tag{1}\]
where \(Z=\frac{\alpha+\beta}{2}\) is the zero point after quantization. One can recover the original value \(\mathbf{X}_{i}\) by dequantizing \(\mathbf{X}_{i,Quant}\):
\[\mathbf{X}_{i}\approx s\cdot(\mathbf{X}_{i,Quant}+Z). \tag{2}\]
Quantization further includes the following three configurations: (i) _Asymmetric_ vs. _symmetric_ quantization. Particularly, for \(s\), one can let \(-\alpha=\beta=max(|\mathbf{X}_{max}|,|\mathbf{X}_{min}|)\). While asymmetric quantization will likely enjoy a more precise clipping range than symmetric quantization, the latter design simplifies the quantization function in Equation 1 since \(Z=\frac{\alpha+\beta}{2}=0\). (ii) _Quantization granularity_ concerns the size of \(\mathbf{X}\). Using a matrix as an example, we can extract the same \(s\) for the entire matrix or one \(s\) per row/column of the matrix; the latter has a finer granularity than the former. (iii) _Static vs. dynamic_ quantization determines whether we change \(s\) for the same tensor \(\mathbf{X}\) from iteration to iteration: the dynamic version does so, while the static one does not. In Tango, we adopt _symmetric_, _tensor-level granularity_, _dynamic_ quantization to maintain training accuracy and enhance training speed.
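As a reference point for Equations 1-2, the following sketch implements the symmetric, tensor-level, dynamic scheme Tango adopts (so \(Z=0\)); it uses nearest rounding for brevity and clamps the result to the signed-integer range, and the helper names are ours:

```python
import torch

def quantize_sym(x: torch.Tensor, bits: int = 8):
    """Symmetric, tensor-level, dynamic quantization (Eq. 1 with Z = 0)."""
    beta = x.abs().max()                       # clipping range [-beta, beta]
    s = 2 * beta / (2 ** bits - 1)             # bucket width
    q_max = 2 ** (bits - 1) - 1                # e.g., 127 for 8 bits
    # int8 storage assumes bits <= 8.
    x_q = torch.round(x / s).clamp(-q_max, q_max).to(torch.int8)
    return x_q, s

def dequantize_sym(x_q: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """Recover an approximation of x (Eq. 2 with Z = 0)."""
    return s * x_q.float()
```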
## 3. Tango: An Accuracy and Speed Co-designed Quantization System
### Tango overview
Our key observation is that quantization presents both _challenges_ and _opportunities_ for GNN training. Tango aims to tackle the challenges efficiently while extracting quantization benefits as follows:
**Challenge: Maintaining training accuracy** poses three issues: (i) quantization could introduce additional computation tasks in addition to the steps in Figure 1. We need to reduce the overhead brought by those additional computations. (ii) For various operations in GNN training (in Figure 1), we need to decide what operations should be quantized and how we should quantize the tensors in each operator to meet the training accuracy requirements. In addition, (iii) those rules should expose optimization opportunities for Tango to accelerate the most time-consuming operations in GNN training with quantization.
In this paper, (i) we introduce GPU-friendly stochastic rounding and a lightweight operation to determine the required # of quantization bits, reducing the cost of meeting accuracy requirements. (ii) We determine that weight update and softmax operations should be performed in full precision, while GEMM, SPMM, and SDDMM can be performed in our novel quantization-aware manner. This minimizes the impact on training accuracy while providing critical optimization opportunities for reducing turnaround time. Notably, (iii) GEMM, SPMM, and SDDMM are the most time-consuming phases in GNN, and our quantization-aware design offers optimization opportunities (see below) to reduce computation costs in GEMM and memory costs for SPMM and SDDMM.
**Opportunity: Accelerating training by quantization.** Quantization offers two avenues to improve training speed, i.e., higher computation throughput and less memory traffic with values in lower precision. We use quantized computing to accelerate the most computation-intensive primitives and operations, i.e., GEMM, SPMM, and SDDMM. _However, these primitives are highly optimized and fine-tuned by vendor libraries_, and cuBLAS GEMM as well as cuSPARSE SPMM and SDDMM are closed-source. Integrating our proposed optimizations into these kernels while achieving the desired speedup is therefore extremely challenging.
In this paper, (i) we utilize our novel quantization-aware GEMM to reduce computation time. Moreover, we identify an optimal tiling strategy to overlap the on-the-fly quantization of the matrix with the subsequent quantized computations. (ii) To address the memory-intensive nature of SPMM and SDDMM, we sequentially quantize the input tensor and write the quantized value in memory. The computation then randomly accesses the smaller quantized tensor, which provides better cache behavior than direct random access to full-precision tensors.
### Lightweight rules for maintaining training accuracy during quantized training
**GPU-accelerated stochastic rounding.** We adopt stochastic rounding to reduce the quantization error (Wang et al., 2017), so that the expected quantization error is statistically zero. In particular, given a scaled floating-point number \(x\) within \([-(2^{B-1}-1),+2^{B-1}-1]\), the range of \(B\)-bit integers, we round it to an integer based on:
\[x_{Quant}=\begin{cases}floor(x)+1,&\text{with probability}\;\;x-floor(x);\\ floor(x),&\text{with probability}\;\;1-(x-floor(x)).\end{cases} \tag{3}\]
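A minimal, vectorized rendition of Equation 3 is shown below; it draws randomness from PyTorch's default generator rather than the register-resident xoshiro256++ generator described next, so it illustrates the rounding rule only:

```python
import torch

def stochastic_round(x: torch.Tensor) -> torch.Tensor:
    """Round up with probability equal to the fractional part (Eq. 3),
    so the rounding error is zero in expectation."""
    floor = torch.floor(x)
    frac = x - floor
    return floor + (torch.rand_like(x) < frac).float()
```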
We design and implement a GPU-accelerated pseudo-random number generator to facilitate fast stochastic rounding, which is \(\sim\)20\(\times\) faster than the native cuRAND random number generator on GPU (Wang et al., 2017). Our key optimization is storing the random generator states in GPU registers as opposed to in global memory, which is the case in the existing cuRAND library (Wang et al., 2017). Since the random number generator is a memory-bound operation, this optimization significantly improves the throughput. Of note, because cuRAND is closed-source, we cannot directly integrate this optimization into cuRAND. We thus implement our generator based upon xoshiro256++ (Wang et al., 2017) with our memory optimizations.
**Lightweight rule for deriving # of desired quantization bits.** We develop a metric to measure the quantization error, which subsequently helps derive the # of desired quantization bits. During quantization, a value \(\mathbf{X}_{i}\) will be rounded to one of the quantization grid points \(\mathbf{X}_{i,Quant}\). We introduce the following metric to estimate the quantization error of a tensor \(\mathbf{X}\):
\[Error_{\mathbf{X}}=\frac{1}{N}\sum_{i=1}^{N}\left|\frac{\mathbf{X}_{i}-\mathbf{X}_{i,Quant}}{\mathbf{X}_{i}+\mathbf{X}_{i,Quant}+\epsilon}\right|, \tag{4}\]
where \(N\) is the number of elements in the tensor.
Intuitively, \(Error_{\mathbf{X}}\) derives the relative quantization error of a tensor \(\mathbf{X}\): the numerator, i.e., \(\left|\mathbf{X}_{i}-\mathbf{X}_{i,Quant}\right|\), is the absolute quantization error, while the denominator is the sum of \(\mathbf{X}_{i}\), \(\mathbf{X}_{i,Quant}\), and \(\epsilon\). The denominator needs the sum of the three values for two reasons: (i) a small \(\epsilon\) avoids dividing by zero, i.e., when \(\mathbf{X}_{i}=\mathbf{X}_{i,Quant}=0\); Tango chooses \(\epsilon=0.0005\). Of note, Tango does not experience \(\mathbf{X}_{i}+\mathbf{X}_{i,Quant}=0\) for \(\mathbf{X}_{i}\neq 0\) and \(\mathbf{X}_{i,Quant}\neq 0\) because our quantization is symmetric. (ii) If we used \(\epsilon\) with only either \(\mathbf{X}_{i}\) or \(\mathbf{X}_{i,Quant}\) in the denominator, the quantization error could end up divided by \(\epsilon\) alone, leading to an extremely large relative error for a particular \(\mathbf{X}_{i}\) that overshadows the relative quantization errors of the other \(\mathbf{X}_{i}\)'s.
Our proposed quantization error metric in Equation 4 is a relative error and is thus comparable across tensors. Therefore, we can tune a desired \(Error_{\mathbf{X}}\) that is generally applicable to various tensors. The value range of \(Error_{\mathbf{X}}\) is [0, 1]. Particularly, if \(\mathbf{X}_{i}\) has no rounding error, the corresponding term is 0; when the rounding error of \(\mathbf{X}_{i}\) is significant, the term approaches 1.
We leverage Equation 4 to select the desired number of quantization bits as follows: we compute \(Error\mathbf{x}\) of the output tensor of the first GNN layer with quantization. Note that we do not apply this metric to the input tensor of the first layer because its quantization error can be recovered by learning from the graph structure (Wang et al., 2017). We also want to mention that the training process could potentially amend the quantization error when the bit count is even lower. Our bit count derivation metric derives a lower bound bit count that could maintain the training accuracy.
As shown in Figure 2a, our heuristic demonstrates that when \(Error_{\mathbf{X}}\leq 0.3\), Tango can maintain the accuracy requirement across various datasets. Therefore, we let \(Error_{\mathbf{X}}=0.3\) across all datasets.
Figure 2b shows that the desired numbers of bits for ogbn-arxiv, Pubmed, and ogbn-products are 8, 6, and 8, respectively.
Our metric is lightweight because it calculates \(Error_{\mathbf{X}}\) solely for the first layer during the initial epoch. In contrast, determining the accuracy loss directly would necessitate training the model until convergence (i.e., all epochs). The effectiveness of our approach is demonstrated by the empirical findings in Figure 2a, which show that \(Error_{\mathbf{X}}\leq 0.3\) is a general threshold to maintain the accuracy across datasets.
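Under the same assumptions, the bit-count rule can be expressed as follows, reusing the `quantize_sym`/`dequantize_sym` helpers sketched at the end of Section 2.3; \(Error_{\mathbf{X}}\) is evaluated on the first layer's output during the first epoch:

```python
import torch

def quant_error(x: torch.Tensor, x_deq: torch.Tensor, eps: float = 5e-4):
    """Eq. 4: mean relative quantization error of a tensor."""
    return ((x - x_deq) / (x + x_deq + eps)).abs().mean()

def pick_bits(first_layer_out: torch.Tensor, target: float = 0.3,
              candidates=(4, 5, 6, 7, 8)) -> int:
    """Smallest bit count keeping Error_X <= 0.3 (the empirical threshold)."""
    for b in candidates:
        x_q, s = quantize_sym(first_layer_out, bits=b)
        if quant_error(first_layer_out, dequantize_sym(x_q, s)) <= target:
            return b
    return candidates[-1]
```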
**Novel quantization-aware matrix multiplication with scaling factor computation.** Since the majority of the tensor primitives in GNN are either dense matrix multiplication or a variant of it, the accuracy analysis would be similar across these primitives. We hence restrict our accuracy analysis to dense matrix multiplication (i.e., GEMM) for brevity.
For two reasons, the resultant matrix of a quantized matrix multiplication has to be of higher precision. First, the result of a multiplication operation between two 8-bit integers could go beyond the value range of an 8-bit integer. Second, the subsequent accumulation of the multiplied values can again push the value beyond the range of an 8-bit integer.
Figure 3 presents this problem when we perform \(\mathbf{H}^{(l-1)}\cdot\mathbf{W}\) in quantized mode. After quantization, the first row of \(\mathbf{H}^{(l-1)}_{Quant}\), i.e., [1 58 101 28], multiplied with the first column of \(\mathbf{W}_{Quant}\), i.e., [-104 12 85 93]\({}^{T}\), experiences both issues mentioned above. In fact, all the entries in the resultant matrix (\(\mathbf{H}^{(l-1)}_{Quant}\cdot\mathbf{W}_{Quant}\))\({}_{int32}\) exceed the 8-bit range of [-127, 127]. Therefore, we opt for a 32-bit data format to store the result and avoid this overflow problem. The good news is that storing the results in 32-bit integers introduces negligible overheads on commodity GPUs. Also, note that recent tensor core units on NVIDIA GPUs force the resultant matrix to be a 32-bit integer matrix for the input of two 8-bit integer matrices.
To reduce the quantization overheads, Tango directly dequantizes the GEMM results, i.e., \(\mathbf{H}^{\prime}\), into FP32 after computing the resultant matrix with our optimizations. In the meantime, Tango also derives the scaling factor \(s_{\mathbf{H}^{\prime}}=166.26\) during the quantized GEMM operation, as shown in Figure 3. This design avoids a dedicated dequantization kernel, a scaling factor computation kernel, and the associated expensive global memory accesses.
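The overflow can be checked directly on the numbers from Figure 3; accumulating the first row of \(\mathbf{H}^{(l-1)}_{Quant}\) against the first column of \(\mathbf{W}_{Quant}\) already leaves the 8-bit range:

```python
import numpy as np

h_row = np.array([1, 58, 101, 28], dtype=np.int32)    # row of H_Quant
w_col = np.array([-104, 12, 85, 93], dtype=np.int32)  # column of W_Quant

acc = int(h_row @ w_col)       # -104 + 696 + 8585 + 2604 = 11781
assert not -128 <= acc <= 127  # far outside int8, hence int32 accumulators
```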
**Full precision weight update.** To combat the round-off error, we update the model weights with dequantized FP32 gradients. The reason is that the magnitude of the model weights is often significantly larger than that of the gradients; in addition, the small learning rate further amplifies the difference. Previously, existing projects used shared exponents [27], Flexpoint [25], or delayed updates [50] to tackle the round-off error. Unfortunately, these designs can suffer from delayed convergence, unavailability on commodity GPUs, slow GPU implementations, or several of these shortcomings at once [51, 52]. Of note, although the updated FP32 weights are quantized into 8-bit integers in the next iteration, quantizing the updated weights into 8-bit integers is often better than directly updating the quantized weights with quantized gradients, as elaborated below.
Assume \(W_{full}=W_{quant}+W_{round\_off}\) and \(\Delta W_{full}=\Delta W_{quant}+\Delta W_{round\_off}\), where \(W_{full}\) and \(\Delta W_{full}\) are the weights and update values (e.g., gradients) in full precision, respectively; \(W_{quant}\) and \(\Delta W_{quant}\) are the outputs of the quantization function \(Q\); and \(W_{round\_off}\) and \(\Delta W_{round\_off}\) are the corresponding round-off errors. Below is our analysis:
\[Q(W_{full})+Q(\Delta W_{full})=W_{quant}+\Delta W_{quant}. \tag{5}\]
If instead we add the full-precision values before quantization, we arrive at:
\[\begin{split} Q(W_{full}+\Delta W_{full})&\approx W_{quant}+\Delta W_{quant}\\ &+Q(W_{round\_off}+\Delta W_{round\_off}).\end{split} \tag{6}\]
One can observe that Equation 6 offers higher accuracy than Equation 5, as \(Q(W_{round\_off}+\Delta W_{round\_off})\) curbs the round-off error.
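A toy experiment makes the difference between Equations 5 and 6 tangible: with a coarse quantizer, the quantized gradient rounds to zero and the weight never moves, whereas full-precision accumulation keeps the progress. The bucket width and the update magnitude below are illustrative values, not measured ones:

```python
import torch

def q(x: torch.Tensor, s: float = 0.05) -> torch.Tensor:
    """Toy uniform quantizer with bucket width s."""
    return s * torch.round(x / s)

w_fp = torch.tensor([0.500])        # full-precision master weight (Eq. 6)
w_q = q(w_fp)                       # weight kept in quantized form (Eq. 5)
dw = torch.tensor([-0.004])         # small lr-scaled update; q(dw) == 0

for _ in range(20):
    w_fp = w_fp + dw                # accumulate in FP32, quantize later
    w_q = q(w_q + q(dw))            # quantized update: stuck at 0.50

print(q(w_fp).item(), w_q.item())   # 0.40 vs. 0.50
```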
**Full precision for the layer before Softmax.** The Softmax layer amplifies the quantization error due to its exponential operations. For simplicity, we consider a layer before Softmax with two outputs \(z_{0}\) and \(z_{1}\). The ratio of the Softmax scores (\(D\)) of the two outputs is:
\[D=\frac{\frac{exp(z_{0})}{exp(z_{0})+exp(z_{1})}}{\frac{exp(z_{1})}{exp(z_{0})+exp(z_{1})}}=\frac{exp(z_{0})}{exp(z_{1})}. \tag{7}\]
Once the quantization error \(e_{i}\) is introduced to \(z_{i}\), the perturbation of output difference follows:
\[D^{\prime}=\frac{exp(z_{0}+e_{0})}{exp(z_{1}+e_{1})}=D\cdot\frac{exp(e_{0})}{exp( e_{1})}=D\cdot\underbrace{exp(e_{0}-e_{1})}_{\text{Amplified error}}. \tag{8}\]
This analysis suggests that the exponential function applied to \((e_{0}-e_{1})\) rapidly inflates or shrinks the ratio \(D\), departing from its faithful value. Therefore, we propose to compute the layer before the Softmax in full precision.
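The amplification in Equation 8 is easy to reproduce numerically; even a modest \(\pm 0.3\) error on the logits scales the score ratio by \(exp(0.6)\approx 1.82\) (illustrative values):

```python
import numpy as np

z0, z1 = 2.0, 1.0
D = np.exp(z0) / np.exp(z1)               # faithful ratio, e^1 ~ 2.72

e0, e1 = 0.3, -0.3                        # quantization noise on the logits
D_pert = np.exp(z0 + e0) / np.exp(z1 + e1)
print(D_pert / D)                         # exp(e0 - e1) = e^0.6 ~ 1.82
```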
Figure 3. Quantization for the GEMM of step 1 in Figure 1a.
Figure 2. The accuracy for different \(Error_{\mathbf{X}}\) and the required number of bits to retain the desired \(Error_{\mathbf{X}}\) for ogbn-arxiv, Pubmed, and ogbn-products datasets.
### Quantization accelerated training
**GEMM with on-the-fly quantization.** Figure 4 illustrates our GEMM with on-the-fly quantization and scaling factor (i.e., s) computation. Our quantized GEMM includes four steps: First, we quantize while loading the tiles of input matrices from global memory to shared memory (i.e., Tiles **A** and **B**). Note that the input matrices are usually needed for backward computation. Therefore, we store the quantized tiles back in global memory while computing, eliminating the round-trip memory latency in the naive design. Second, we store the resultant block in registers to minimize the write latency. Third, during computation, we pack four 8-bit integers into a 32-bit register and use one **DP4A** instruction for four multiply-accumulate operations between two packed registers. Fourth, we dequantize the resultant 32-bit integers in registers to floating-point. We also fuse the computation of parameter \(s\) in the kernel for the following primitives.
We develop a data tiling strategy to hide the data access latency behind the computations. First, when loading from global memory, we choose an appropriate tile size, i.e., \(128\times 32\), which is the ideal tile size to balance the computation capability and memory throughput on V100 GPUs. Below is our analysis: assuming the sizes of Tiles **A** and **B** are \(M\cdot k\) and \(N\cdot k\), to pipeline the loading of Tiles **A** and **B** with the computation, we hide the loading latency behind arithmetic operations, i.e., \(loadLatency=arithmeticLatency\). We denote the latency of loading one FP32 value from global memory as \(Latency_{global}\) and the latency of performing one multiply-accumulate operation as \(Latency_{compute}\). We arrive at \(loadLatency=(M\cdot k+N\cdot k)\cdot Latency_{global}\) and \(arithmeticLatency=(M\cdot N\cdot k)\cdot Latency_{compute}\). This leads to \(\frac{M+N}{MN}=\frac{Latency_{compute}}{Latency_{global}}\). On the V100 GPU, we find \(Latency_{global}\approx 400\) and \(Latency_{compute}\approx 4\). Without loss of generality, we let \(M=N\); we hence arrive at \(M=N\approx 200\). Since the sizes of \(M\) and \(N\) should be powers of two, we find that \(M=N=128\) offers the best performance. Second, at the warp level, we let each warp load two blocks from Tiles **A** and **B** to compute four adjacent blocks in Tile **C**, as shown in Figure 4. We derive the optimal block size that can hide the latency of accessing shared memory. Particularly, in each iteration, a thread loads 32 packed INT8 values as the eight blocks colored on the right side of Figure 4. Then the 16 **DP4A** instructions cover the 18-cycle latency of loading for the next iteration.
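The tile-size balance can be replayed with the cycle counts quoted above; solving \((M+N)\cdot Latency_{global}=M\cdot N\cdot Latency_{compute}\) with \(M=N\) gives the value the text rounds down to 128:

```python
lat_global, lat_compute = 400, 4  # approximate V100 cycle counts from the text

# (M + N) * k * lat_global == M * N * k * lat_compute with M == N
# => 2 * lat_global == M * lat_compute => M == 2 * lat_global / lat_compute
M = 2 * lat_global // lat_compute
print(M)  # 200 -> rounded down to the nearest power of two: M = N = 128
```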
For computation, we carefully schedule the threads to improve the computation intensity. First, when loading and quantizing Tile **A** from global memory to shared memory, we transpose the tile because the access is column-wise while Tile **A** is row-major in global memory. Second, to avoid bank conflicts, a warp stores Block **A** in shared memory with a 16-byte offset in each column. Each warp works on \(2\times 2\) **C** blocks to reuse Blocks **A** and **B**. Third, we schedule the threads as shown on the left side of Figure 4, mapping the 32 threads within a warp to block **C** to increase shared memory throughput. For example, threads 0, 1, 2, and 3 access the same address in block **A**, so the loaded data can be broadcast to the 4 threads.
Of note, there exist frameworks that can generate GEMM kernels with efficient tiling and scheduling, but none of them can be used to implement GEMM with _on-the-fly quantization_. On the one hand, template-based frameworks, such as AutoTVM (Xu et al., 2018) and Ansor (Xu et al., 2018), optimize kernels by enumerating the combinatorial choices of optimizations (e.g., tile layout, tile size, and parallelization). Searching the design space is time-consuming, and the generated kernels are not guaranteed to be optimal. On the other hand, on-the-fly quantization is not supported by the existing templates. Moreover, the kernels generated by these frameworks, e.g., the Triton compiler (Xu et al., 2018), use too many registers, resulting in unsatisfactory performance.
**Incidence matrix-based adaptive SPMM.** Tango performs quantization in a separate kernel for SPMM. We use quantization to reduce the memory traffic for SPMM since quantization leads the node and edge feature matrices to a smaller size. Unlike GEMM, which performs sequential memory access for the input matrices, SPMM experiences random memory accesses. In this case, performing on-the-fly quantization would lead to random memory access for input matrices of 32-bit floating-point data type. Further, because of unpredictable access patterns, on-the-fly quantization could potentially lead to repeated quantization of the same data. Instead, a dedicated quantization kernel would read 32-bit input floating-point matrices sequentially once and write the 8-bit quantized matrices out, again, sequentially and once. Therefore, during SPMM, we perform random memory access to input matrices of 8-bit as opposed to 32-bit.
Tango further introduces two SPMM variant optimizations, that is, incidence matrix-based SPMM and adaptive SPMM to improve the quantized training performance.
**Incidence matrix-based SPMM** accelerates the SPMM variant in the backward pass, i.e., step 10 of Figure 1b. Particularly, this SPMM computes the gradients of node features by aggregating the incoming edge features for each node. Using node \(v_{3}\) as an example, as shown in Figure 5a, because \(v_{3}\) contributes to the edge features of \(e_{3}\) and \(e_{4}\), its gradient is the sum of their partial derivatives, that is, \(\partial v_{3}=\partial e_{3}+\partial e_{4}\). However, because DGL uses the adjacency matrix format for the graph, shown in Figure 5a, we need three matrices for the SPMM, that is, the graph, \(\partial e\), and a node feature matrix of all '1's.
Figure 4. Tango GEMM with on-the-fly quantization and scaling factor s computation.
The drawback of this design is two-fold: First, one needs to allocate and access the all-'1' node feature matrix, which is redundant and expensive. Second, although the operation in Figure 5a can be formulated as an SPMM operation, it includes three inputs, which is not supported by the state-of-the-art cuSPARSE library.
In Figure 5b, Tango formulates this computation as an incidence matrix-based SPMM. Particularly, the incidence matrix is a \(V\times E\) matrix, where \(V\) and \(E\) are, respectively, the numbers of nodes and edges in the graph. Each row of the incidence matrix records the incoming edges of one node by marking the associated entries as 1. This design allows us to compute this step by multiplying two matrices, i.e., the incidence matrix and the edge features. Because we only need two input matrices, Tango can now adopt high-performance cuSPARSE SPMM kernels for this step, which is significantly faster than DGL's three-matrix-based SPMM.
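A sketch of the idea with SciPy: the \(V\times E\) incidence matrix turns the edge-to-node aggregation into a plain two-matrix SPMM. Besides the first head of \(\partial\mathbf{E}[e_{3}]\) (0.08 from the running example), the edge-gradient values below are illustrative:

```python
import numpy as np
from scipy.sparse import csr_matrix

V, E = 4, 5                                  # toy graph from Figure 1
# v3 receives e3 and e4: mark them in row 3 of the incidence matrix.
B = csr_matrix((np.ones(2), ([3, 3], [3, 4])), shape=(V, E))

dE = np.zeros((E, 2))
dE[3] = [0.08, 0.10]                         # dE[e3]; 0.08 is from the text
dE[4] = [-0.08, 0.05]                        # dE[e4]; illustrative values

dV = B @ dE                                  # one two-matrix cuSPARSE-style SPMM
print(dV[3])                                 # [0.0, 0.15] = dE[e3] + dE[e4]
```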
**Kernel count-based adaptation.** Chances are certain SPMM computations still involve three matrices, such as step 5 in Figure 1a. In Figure 6, we demonstrate how one could transform a three-matrix-based SPMM kernel into a collection of two-matrix-based SPMM kernels or, in the extreme, Sparse Matrix-Vector multiplication (SpMV) kernels. Once that transformation is completed, one can directly rely on cuSPARSE to perform each two-matrix-based SPMM or SpMV. Note that we prefer cuSPARSE over DGL primitives because our evaluation shows that a single cuSPARSE SPMM kernel is significantly faster than DGL's native two-matrix-based SPMM across various configurations.
However, the benefits brought by cuSPARSE kernels diminish along with the increment of the number of kernels, as kernel invocation cost soars when too many kernels are launched. In summary, neither DGL nor transformed cuSPARSE beats the other across all configurations. We hence adaptively leverage these two solutions to achieve the best performance of both worlds.
Figure 6a depicts the SPMM with both edge and node features as matrices. In this design, the first head of the node features is scaled by the first element of the edge features, and similarly for the second head. We can use multiple optimized cuSPARSE SPMM kernels to replace the native kernel used in DGL; in this case, the two heads can be finished by two SPMM kernels. Figure 6b assumes we have four heads in step 5 of Figure 1a. In this case, we arrive at four SpMV kernels for the original three-matrix-based SPMM.
**SDDMM with on-the-fly dequantization.** Similar to our SPMM design, we first perform sequential memory accesses to quantize the input matrices. During SDDMM, we then perform random memory accesses on those quantized matrices of smaller sizes. Briefly, the existing SDDMM performs one round of random accesses on full-precision matrices; in contrast, Tango performs one round of sequential accesses on full-precision matrices and one round of random accesses on the low-precision matrices, which leads to a shorter turnaround time. Since SDDMM might perform addition or subtraction operations, one cannot always compute directly on the quantized values. This leads to our SDDMM with on-the-fly dequantization.
We use step 3 of Figure 1a to explain the reason. This SDDMM computes the edge features by adding \(\mathbf{S}\) and \(\mathbf{D}\). Assuming the scaling factors of \(\mathbf{S}\) and \(\mathbf{D}\) are, respectively, \(s_{\mathbf{S}}\) and \(s_{\mathbf{D}}\), the addition in quantized format should be \(\mathbf{S}[v_{i}]+\mathbf{D}[v_{j}]\approx s_{\mathbf{S}}\cdot\mathbf{S}_{Quant}[v_{i}]+s_{\mathbf{D}}\cdot\mathbf{D}_{Quant}[v_{j}]\). Because \(s_{\mathbf{S}}\) and \(s_{\mathbf{D}}\) are often not equal, one cannot directly add the quantized values \(\mathbf{S}_{Quant}[v_{i}]\) and \(\mathbf{D}_{Quant}[v_{j}]\). Therefore, Tango loads the quantized data to enjoy the reduced memory traffic and subsequently dequantizes the loaded values on the fly for the addition/subtraction computation.
If SDDMM performs multiplication or division, we can conduct SDDMM directly on the quantized values. Using step 7 of Figure 1b as an example, one needs to compute \(\partial\boldsymbol{\alpha}[e_{0}]=\partial\mathbf{H}^{(l)}[v_{0}]\cdot\mathbf{H}^{\prime}[v_{1}]\). In this case, assuming the scaling factors of \(\partial\mathbf{H}^{(l)}[v_{0}]\) and \(\mathbf{H}^{\prime}[v_{1}]\) are \(s_{0}\) and \(s_{1}\), the computation can be approximated as \(\partial\boldsymbol{\alpha}[e_{0}]\approx(s_{0}\cdot\partial\mathbf{H}^{(l)}_{Quant}[v_{0}])\cdot(s_{1}\cdot\mathbf{H}^{\prime}_{Quant}[v_{1}])=(s_{0}\cdot s_{1})\cdot\partial\mathbf{H}^{(l)}_{Quant}[v_{0}]\cdot\mathbf{H}^{\prime}_{Quant}[v_{1}]\). This allows Tango to perform the quantized multiplication directly. Division can also work directly on the quantized values.
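The scaling-factor algebra can be summarized in two helpers (a sketch; the names are ours): addition dequantizes on the fly because the operands carry different scales, while dot-products fold the scales out and stay quantized:

```python
import numpy as np

def edge_add(sq_i, s_S, dq_j, s_D):
    """Step 3 style: dequantize on the fly, then add (s_S != s_D in general)."""
    return s_S * sq_i + s_D * dq_j

def edge_dot(hq_i, s0, hq_j, s1):
    """Step 7 style: the product of the two scales factors out, so the
    dot-product runs directly on the int8 values (int32 accumulation)."""
    return (s0 * s1) * (hq_i.astype(np.int32) @ hq_j.astype(np.int32))
```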
**Inter-primitive optimization**. Noticing that the follow-up operators can reuse some quantized tensors, Tango caches these quantized tensors to reduce the quantization overhead. In general, there exist two caching opportunities: (1) caching forward pass for backward, (2) caching prior operators for subsequent operators. Tango develops a detection algorithm that runs on the computation graphs to automatically derive these reuse cases.
First, the backward computation can reuse the quantized tensors from the forward pass. For example, the GEMM of step 1 in Figure 1a
Figure 5. Incidence matrix-based SPMM.
Figure 6. Transforming a three-matrix-based SPMM, e.g., step 5 in Figure 1a, into a collection of: (a) two-matrix-based SPMMs and (b) two-matrix-based SpMVs.
has the forward computation \(\mathbf{H}^{\prime}=\mathbf{H}^{(l-1)}\cdot\mathbf{W}\). The corresponding backward step contains the gradient computation for the weights, that is, \(\partial\mathbf{W}=\mathbf{H}^{(l-1)T}\cdot\partial\mathbf{H}^{\prime}\), and the gradient computation of the node features, i.e., \(\partial\mathbf{H}^{(l-1)}=\partial\mathbf{H}^{\prime}\cdot\mathbf{W}^{T}\), as shown in step 11 of Figure 1b. Clearly, the quantized matrices \(\mathbf{H}^{(l-1)}\) and \(\mathbf{W}\) are used in both forward and backward computations. We thus save the quantized inputs \(\mathbf{H}^{(l-1)}\) and \(\mathbf{W}\) during the forward pass for the backward pass to avoid repeated quantization. Second, when two operators share the same tensor as input, we can cache the quantized tensor from the former operator and use it for the latter. For example, in Figure 1b, the SPMM in step 6 and the SDDMM in step 7 both need the quantized \(\partial\mathbf{H}^{(l)}\). This way, we cache the quantized input tensor. We also intentionally schedule the computation order such that the cached tensors can be reused.
We derive the caching opportunity on the computation graph, i.e., Figure 1a as follows. The computation graph consists of tensors as nodes and operators as edges. For nodes with more than one out edge, we can quantize once for multiple operators. For example, the tensor \(\partial\mathbf{H}^{(I)}_{\text{Quant}}\) in Figure 1b has two operators as out-edges, so we cache this tensor. Then we reverse the edges in the computation graph for the backward pass. In this backpropagation graph, we will check if the to-be-quantized tensors are already quantized in the forward graph in order to facilitate quantization sharing.
**Quantization overhead vs. benefit analysis.** While quantization helps reduce computation and data movement, it also brings overheads. Mainly, quantization introduces two types of overheads: parameter computing and data type casting. First, the parameter s in Equation 1 is derived by reducing the elements with the maximum absolute value. For a \(N\times N\) matrix, the reduction needs \(N^{2}\) operations to derive the absolute maximum values. Second, quantizing or dequantizing an element requires two operations: multiply and data type casting. Therefore, the quantization before the primitive performs \(4N^{2}\) floating-point operations.
For GEMM with input matrices of sizes \(M\times K\) and \(K\times N\), we perform \(4K(M+N)\) and \(2MN\) operations for quantization and dequantization, respectively. In exchange, we reduce the number of multiply-accumulate instructions from \(MNK\) to \(\frac{MNK}{4}\), a saving that is often significantly larger than the overheads. The sparse primitives quantize the node and edge feature matrices. Assuming \(D\) is the feature size and given a graph with \(N\) nodes and \(E\) edges, in SPMM, the node and edge features require \(4D(N+E)\) operations for quantization; later, only the resultant node features are dequantized with \(2ND\) operations. Quantization in SDDMM performs \(4ND\) operations for the node features, and dequantizing the resultant edge features needs \(2ED\) operations. Regarding the benefits, the sparse primitives enjoy a better cache access pattern because quantization reduces the sizes of the input matrices.
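Plugging representative sizes into these counts shows why the overhead is negligible for compute-bound GEMM (a back-of-the-envelope sketch, not a measured result):

```python
def gemm_quant_tradeoff(M: int, K: int, N: int):
    """Quantization overhead vs. MACs saved for an (MxK) x (KxN) GEMM."""
    overhead = 4 * K * (M + N) + 2 * M * N    # quantize inputs + dequantize C
    saved = M * N * K - M * N * K // 4        # DP4A: 4 MACs per instruction
    return overhead, saved

print(gemm_quant_tradeoff(4096, 256, 4096))   # (~4.2e7, ~3.2e9): saved >> overhead
```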
## 4. Experiments
### Experimental setup
**Datasets.** We conduct experiments on five graph datasets, as shown in Table 1. The _ogbn-arxiv_ (Wang et al., 2017) and _Pubmed_ (Wang et al., 2018) are citation graphs whose nodes represent papers and whose edges are citations; the task is to predict the categories of papers. The _ogbn-products_ (Wang et al., 2018) dataset depicts a product co-purchasing network, where nodes represent products sold on Amazon and an edge between two products indicates that they are purchased together; the task is to predict the category of a product. The _DBLP_ dataset is a co-authorship network where nodes are authors and edges represent co-authorship (Wang et al., 2018). The _Amazon_ dataset contains products as nodes and edges represent co-purchases (Wang et al., 2018). The tasks of _DBLP_ and _Amazon_ are to predict whether a link exists between two nodes. We add reverse edges for the directed graphs and self-loop edges to ensure the SPMM operation works for every node.
**Models.** We evaluate GCN (He et al., 2017) and GAT (He et al., 2017) from the example implementations of DGL, and the models are trained with the same number of epochs and hyperparameter settings. Both models use a hidden size of 128 and two GNN layers; GAT has four attention heads. For node classification, the model generates the node embedding as the probabilities of each category. For link prediction, we perform a dot-product between two node embeddings as the score of edge existence. The training epochs for Pubmed, ogbn-arxiv, and ogbn-products are 30, 500, and 150, respectively.
**Implementation details.**_For ease of use, we integrate Tango in DGL. Therefore, all the models on DGL can enjoy the performance benefits brought from Tango without any changes._ We provide our optimized quantized CUDA kernels to replace the corresponding primitives in DGL. DGL employs the GEMM function from the cuBLAS library and sources its sparse primitives either directly from cuSPARSE (Wang et al., 2018) or through its own implementations. DGL's interface is designed in Python and its primitives are integrated as PyTorch functions. To ensure a fair comparison, the kernels of Tango are also invoked via PyTorch's auto-differential engine during training (Wang et al., 2018). Furthermore, DGL supports multiple graph data structure formats, and Tango leverages DGL's heuristics to determine the most efficient format for its primitives.
**Evaluation platforms.** We use Python 3.6.10 and CUDA 11.7 on six V100S GPUs and Intel(R) Xeon(R) Gold 6244 @ 3.60GHz CPU. The model is trained with PyTorch 1.13.0 and DGL 0.8. We also have access to a single A100 GPU for a limited time. We use that to compare GEMM on INT8 tensor core vs FP16 tensor core.
### Tango vs. state-of-the-art
We compare Tango with DGL (Wang et al., 2018) and EXACT (Wang et al., 2018). DGL trains the model in full precision, and EXACT trains the model with quantized tensors. EXACT aims to save memory by quantizing the saved tensors, and it has no optimization in computation. We set EXACT to use 8-bit quantization. Of note, the GNN models are implemented through PyTorch, which experiences significant high-level language overheads when calling Tango primitives. As a result, we observe significantly smaller model-level speedups than the primitive-level comparisons (detailed in Section 4.3).
**Training speed.** We evaluate the training speedup of Tango on GCN and GAT models. We train each model 5 times and report the average elapsed time achieving the same accuracy as the baseline,
including the forward and backward computations. As shown in Figure 8, Tango has \(1.2\times\) and \(1.5\times\) speedup on average on GCN and GAT models compared with DGL, respectively. The GAT model enjoys more benefits because it contains more quantized primitives than GCN. Further, larger graphs have more speedup for the GCN model, except the DBLP dataset, which has the smallest average degree among the five graphs. Overall, Tango achieves an average speedup of \(2.9\times\) on GCN and \(4.1\times\) on GAT compared with EXACT. The key takeaway is that applying quantization without appropriate optimizations will lead to a significant slowdown (e.g., EXACT).
**Accuracy study.** Figure 7 studies the accuracy impact of the techniques in Section 3.2 for GCN and GAT. In particular, we evaluate Tango, Tango with quantized layer before Softmax (**Test1**), and Tango without stochastic rounding (**Test2**). For clarification, the training crashes without quantization-aware matrix multiplication and full precision weight update. The baseline models are trained in FP32 with the same number of epochs as the quantized training.
Overall, Tango achieves \(>\)99% of the accuracy of full-precision training with the same number of epochs. When quantizing the layer before Softmax (**Test1**), the models show noticeable accuracy loss except for the DBLP dataset, GCN on Pubmed, and GAT on ogbn-arxiv; the average relative accuracy drop is 9.7%. Despite the similar final accuracy, GAT on Pubmed and GCN on ogbn-arxiv converge slower than the baseline by 18 and 35 epochs, respectively. For quantization without stochastic rounding (**Test2**), we observe that for GCN on Pubmed and both models on DBLP and Amazon, the quantization error changes the optimization direction in some epochs; thus the models take more epochs to recover. Moreover, the models on ogbn-arxiv and ogbn-products suffer from significant accuracy drops. As shown in Figure 7a, although the model can achieve convergence, the training process shows more instability than with stochastic rounding.
**Multi-GPU training.** Figure 9 studies Tango's impact on multi-GPU training. We directly adopt DGL's mini-batch multi-GPU training. That is, each GPU trains the model on a batch of sampled subgraphs per epoch. Then, the gradients of all GPUs are updated by an all-reduce operation. _We compare the training speed between full precision baseline and Tango using the same number of GPUs_.
Tango achieves speedup over the full precision baseline via transferring the quantized node features and gradients. Since we perform stochastic rounding-based quantization, this process will introduce nontrivial turnaround time. In Tango, we overlap the feature quantization with the subgraph sampling. The overall trend is that more GPUs would enjoy higher speedup as the Peripheral Component Interconnect Express (PCI-E) congestion is better alleviated by our quantization. Particularly, the speedup increases from \(1.1\times\) to \(1.5\times\), and \(1.2\times\) to \(1.7\times\) from two to six GPUs on GCN and GAT, respectively.
### Turnaround time analysis
**Caching the quantized tensors.** Figure 10 shows the performance of caching the quantized tensor in forward for backward reuse. We test the GEMM primitive on different datasets. We test with two hidden sizes, \(D=128\) and \(D=256\). The result shows \(1.7\times\) and \(1.6\times\)
Figure 8. The speedup of training the GNN models with Tango and EXACT compared with DGL.
Figure 7. The convergence analysis of GCN and GAT with Tango. Test1 denotes Tango with quantized layer before Softmax. Test2 denotes Tango using nearest rounding instead of stochastic rounding.
Figure 9. Tango’s impact on multi-GPU training. The X-axis is the number of GPUs.
on average when \(D=128\) and \(D=256\), respectively. The saving is related to the data size; smaller graphs, such as Pubmed, enjoy more time savings.
**GEMM.** Figure 11a shows the speedup of our quantized GEMM over cuBLAS GEMM with hidden sizes \(D=256\) and \(D=512\). Of note, we include the quantization cost in the Tango GEMM time. Our quantized GEMM primitive achieves \(2.2\times\) and \(2.5\times\) speedup on average for \(D=256\) and \(D=512\), respectively. The trend also suggests that quantization offers more speedup on the GEMM operator as the hidden size increases. In addition, we also compare our quantized INT8 GEMM with FP16 GEMM on A100 Tensor Core GPUs. Figure 11b shows that our quantized GEMM primitives achieve \(1.9\times\) for \(D=256\) and \(1.8\times\) for \(D=512\). Since both the baseline and Tango use Tensor Cores, the speedup over the baseline is smaller than when using CUDA cores because the performance difference between computing in INT8 and FP16 is \(2\times\) on A100 tensor cores. Further, we observe a speedup drop for a bigger \(D\) because our quantization needs to scan through a bigger tensor to extract the scaling factor \(s\).
Figure 12a shows the profiling results of quantized GEMM. We profile the ratios of achieved computation throughput (operations/s), memory throughput (GB/s), Instructions Per Cycle (IPC), and the number of instructions compared with cuBLAS FP32 GEMM [64]. The average computation and memory throughput ratios are \(2.1\times\) and \(2.2\times\), respectively. The memory throughput is higher because our quantized GEMM writes the quantized matrix out. However, since GEMM is computation-intensive, our increased computation throughput dominates the performance impact. Our further investigation into the IPC and # of instructions in Figure 12b explains how Tango doubles the computation throughput: our average IPC is \(\sim\)70% of the baseline, with the instruction count reduced to \(\sim\)31%. Together, we can roughly double the throughput of the baseline.
**SPMM.** Figure 13a shows the performance of using incidence matrix-based SPMM for edge aggregation compared with DGL SPMM kernels. We set the edge feature size ranging from 4 to 20. All datasets see an average \(2.1\times\) speedup. The ogbn-arxiv dataset enjoys the best speedup of \(5.5\times\) on average because of the poor performance of its baseline kernel; that is, the access randomness of the incidence matrix is much lower than that of the adjacency matrix used by the baseline. Table 2 shows the achieved memory throughput of our incidence-based SPMM and the baseline when the feature size is 16. The irregular accesses of the baseline on the ogbn-arxiv dataset lead to low memory throughput. Using the incidence matrix alleviates the irregularity because the edges incident to a node are stored adjacently in memory.
Figure 13b shows the performance of using multiple SPMMs with a small edge feature dimension in a multi-head graph attention operation compared with DGL SPMM kernels. We set the node feature dimension as (\(H\times D\)) and the edge feature dimension as (\(H\times 1\)), where \(H\) represents the number of heads and \(D\) represents the hidden size of each head. Ours achieves \(2.1\times\), \(1.9\times\), \(2.0\times\), and \(1.8\times\) speedup over DGL's primitive on average for (\(2\times 128\)), (\(4\times 128\)), (\(2\times 256\)), and (\(4\times 256\)), respectively. Increasing the number of heads leads to a smaller speedup when the hidden size is fixed because of the increased overhead of more kernel launches. The hidden size has little impact on the speedup for the same head count.
Figure 14 shows the performance of using multiple cuSPARSE SpMV with a large edge feature dimension on ogbn-arxiv graph.
Table 2. The achieved memory throughput using incidence-based SPMM and the DGL baseline.

| Dataset | ogbn-arxiv | ogbn-products | Pubmed | DBLP | Amazon |
| --- | --- | --- | --- | --- | --- |
| Ours (GB/s) | 344.06 | 491.72 | 353.38 | 331.57 | 342.93 |
| Baseline (GB/s) | 41.26 | 244.22 | 131.89 | 297.67 | 105.88 |
Figure 11. Tango GEMM vs cuBLAS GEMM.
Figure 12. The hardware profiling of quantized GEMM.
Figure 10. The speedup of caching the quantized tensors.
We test the feature size ranging from 2 to 12. When the size is smaller than 6, ours achieves a \(1.6\times\) speedup on average over DGL's implementation. The results also show that the turnaround time increases as the number of kernels grows.
**SDDMM.** Figure 15 shows the performance of quantized SDDMM compared with DGL SDDMM kernels. We evaluate two SDDMM variants, including the row-wise dot-product (step 7 in Figure 1b) and the element-wise addition (step 3 in Figure 1a), denoted as _SDDMM dot_ and _SDDMM add_. The node features are matrices of size (4, 64). Our SDDMM add and SDDMM dot achieve \(1.9\times\) and \(1.6\times\) speedups over DGL, respectively.
### Speed impact for # of quantization bits
Because neither cuSPARSE nor DGL provides SPMM kernels for INT4, this section only studies INT4 GEMM and SDDMM, which are implemented by Tango. Figure 16a shows the SDDMM performance using INT4 compared with full-precision DGL primitives. The addition and dot-product kernels achieve, on average, \(3.3\times\) and \(1.8\times\), respectively. Dense graphs like _ogbn-arxiv_ and _ogbn-products_ enjoy more benefits from the reduced memory traffic because the node embeddings are more likely to be reused via cache hits.
Figure 16b shows the GEMM performance using INT8, and INT4 compared with cuBLAS. Note that we run the tests on an A100 GPU with INT4 hardware support. Using INT8 and INT4 leads to 5.4\(\times\) and 6.2\(\times\) average speedup when hidden size \(D=256\). For \(D=512\), the average speedup is 8.1\(\times\) and 10.1\(\times\), respectively. Using fewer bits shows marginal improvement because the sub-byte access under-utilizes the shared memory bandwidth.
## 5. Related Work
Recent years have seen a surge of efforts on GNNs (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2019). For a comprehensive study of the history and advancements in quantization for DNNs and GNNs, we refer the readers to two surveys (Wang et al., 2018; Wang et al., 2019). In addition to the related work in Section 1, this section further discusses GNN primitives and inference.
**GNN operator optimization** projects often focus on improving the SPMM and SDDMM kernels in GNN. GE-SpMM (Wang et al., 2019) and DA-SpMM (Wang et al., 2019) propose optimizations for implementing SPMM on GPU for GNN workloads. QGTC (Wang et al., 2019) accelerates quantized GNN operations using Tensor Cores on GPU by representing the adjacency matrix as a 1-bit sparse matrix and quantizing node features in any bits, which can be computed using the 1-bit computation function on Ampere Tensor Cores. GE-SpMM, QGTC, and DA-SpMM do not support models with multi-edge features like GAT, while Tango revamps SPMM to support such models. In addition, Tango also supports quantized GEMM and SDDMM. FeatGraph (Wang et al., 2019) uses tensor compilers, providing a flexible programming interface to generate SPMM and SDDMM for various GNN operations. FeatGraph aims to exploit parallelism for customized operations between features. However, FeatGraph does not support quantization.
**GNN inference quantization** has received significant attention recently (Wang et al., 2019). Unfortunately, none of these approaches achieves a shorter turnaround time for _training_ than non-quantized GNN models. Particularly, (Wang et al., 2019) quantizes the GNN into binary with knowledge distillation to reduce the accuracy loss; since knowledge distillation needs to train a teacher model and a student model, the training time increases. SGQuant (Wang et al., 2019) is a quantization scheme aiming to reduce memory consumption. It assigns different bit widths to embedding and attention tensors at different levels, but the mismatch of data types incurs extra conversion overhead. In contrast, Tango introduces a variety of framework- and primitive-level system optimizations, leading to a shorter turnaround time during quantized GNN training.
## 6. Conclusion
Tango identifies both the challenges and opportunities brought by quantization to GNN training. Particularly, Tango makes the following three major contributions. First, Tango introduces various lightweight rules to maintain the accuracy for quantized GNN training. Second, we design and implement quantization-aware primitives and inter-primitive optimizations to reduce the turnaround time for quantized GNN training. Third, we integrate Tango into DGL and evaluate it across a variety of GNN models and datasets to demonstrate the superior performance of Tango.
## Acknowledgement
We would like to thank the anonymous reviewers for their helpful suggestions. This work was in part supported by NSF CRI Award No. 2331536, CAREER Award No. 2326141, NSF Awards 2212370, 2319880, 2328948, 2319975, and 2331301, and the Semiconductor Research Corporation (SRC) Artificial Intelligence Hardware program. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.
Figure 16. The turnaround time impacts by varying # of bits.
Figure 14. The performance of using multiple cuSPARSE SPMV with high edge feature dimension.
Figure 15. The performance of SDDMM operators. |
2306.08960 | Neural Network Compression using Binarization and Few Full-Precision
Weights | Quantization and pruning are two effective Deep Neural Networks model
compression methods. In this paper, we propose Automatic Prune Binarization
(APB), a novel compression technique combining quantization with pruning. APB
enhances the representational capability of binary networks using a few
full-precision weights. Our technique jointly maximizes the accuracy of the
network while minimizing its memory impact by deciding whether each weight
should be binarized or kept in full precision. We show how to efficiently
perform a forward pass through layers compressed using APB by decomposing it
into a binary and a sparse-dense matrix multiplication. Moreover, we design two
novel efficient algorithms for extremely quantized matrix multiplication on
CPU, leveraging highly efficient bitwise operations. The proposed algorithms
are 6.9x and 1.5x faster than available state-of-the-art solutions. We
extensively evaluate APB on two widely adopted model compression datasets,
namely CIFAR10 and ImageNet. APB delivers better accuracy/memory trade-off
compared to state-of-the-art methods based on i) quantization, ii) pruning, and
iii) combination of pruning and quantization. APB outperforms quantization in
the accuracy/efficiency trade-off, being up to 2x faster than the 2-bit
quantized model with no loss in accuracy. | Franco Maria Nardini, Cosimo Rulli, Salvatore Trani, Rossano Venturini | 2023-06-15T08:52:00Z | http://arxiv.org/abs/2306.08960v2 | # Neural Network Compression using Binarization and Few Full-Precision Weights
###### Abstract
Quantization and pruning are known to be two effective model compression methods for Deep Neural Networks. In this paper, we propose _Automatic Prune Binarization_ (APB), a novel compression technique combining quantization with pruning. APB enhances the representational capability of binary networks using a few full-precision weights. Our technique jointly maximizes the accuracy of the network while minimizing its memory impact by deciding whether each weight should be binarized or kept in full precision. We show how to efficiently perform a forward pass through layers compressed using APB by decomposing it into a binary and a sparse-dense matrix multiplication. Moreover, we design two novel efficient algorithms for extremely quantized matrix multiplication on CPU, leveraging highly efficient bitwise operations. The proposed algorithms are \(6.9\times\) and \(1.5\times\) faster than available state-of-the-art solutions. We perform an extensive evaluation of APB on two widely adopted model compression datasets, namely CIFAR-10 and ImageNet. APB delivers a better accuracy/memory trade-off compared to state-of-the-art methods based on i) quantization, ii) pruning, and iii) the combination of pruning and quantization. APB also outperforms quantization in the accuracy/efficiency trade-off, being up to \(2\times\) faster than the \(2\)-bit quantized model with no loss in accuracy.
Deep Neural Networks, Model Compression, Matrix Multiplication, Image Classification.
## 1 Introduction
Deep Neural Networks (DNNs) have had an unprecedented impact on computer vision, achieving state-of-the-art performance in many different tasks, such as image classification [40], semantic segmentation [15], and object detection [35]. However, the huge computational requirements of DNNs pose severe challenges to their pervasive application. Model compression is the field of Deep Learning devoted to decreasing the computational requirements of DNNs by leveraging their largely proven over-parametrization [2].
Quantization techniques reduce the number of bits required to represent the network parameters, thus offering compelling properties in terms of space saving and inference speedup. In this line, binarization, i.e., 1-bit quantization, reduces the memory burden by \(32\times\) with respect to the full-precision counterpart and converts floating-point multiplications into cheap bitwise operations [34]. Despite the work done in this field [33, 46, 48], binary networks still struggle to match the performance of the corresponding full-precision models. At the same time, \(2\)- or \(3\)-bit quantization approaches are closing the gap with full-precision models [24, 47], but do not ensure the compelling properties of binary networks, especially in terms of inference speedup on CPU.
Pruning techniques remove network parameters to produce sparse models as accurate as their original dense versions [11, 13, 38]. Pruning allows for consistent memory savings as only the non-zero parameters and their positions need to be saved. Despite the reduction of Floating Point OPerations (FLOPs), pruning a neural network does not imply a remarkable inference speedup until extreme sparsity levels are achieved, i.e., \(>95\%\), as CPUs and GPUs are optimized for dense computation [49]. To summarize, binarization techniques allow for fast neural inference but do not match the performance of the equivalent full-precision models. On the other hand, pruning techniques can produce highly effective sparse networks but do not deliver consistent speedup until extreme sparsity ratios are reached. Observe that quantization and pruning adopt complementary approaches to compress a neural network. Quantization maintains the network parameters' cardinality while shrinking the parameters' representation, i.e., the number of distinct values a parameter can assume. Instead, pruning reduces the cardinality of non-trivial parameters but allows the surviving ones to span the original representation, i.e., real numbers represented using \(32\) bits. Our aim is to blend these approaches together to leverage the properties of quantization, especially binarization, while empowering its representational capability with a few full-precision entries.

Fig. 1: Graphical illustration of Automatic Prune Binarization (APB). The weights in the binarization interval are converted to \(\{-\alpha,+\alpha\}\) according to their sign, while the remaining values are kept in full precision.
In this paper, we propose Automatic Prune Binarization (APB), a novel compression method that combines low-bit quantization (binarization) with pruning. APB allows each parameter to assume either a binary or a full-precision representation (Figure 1). APB jointly maximizes the accuracy achieved by the network while minimizing its memory impact by identifying an optimal partition of the network parameters among these two sets. In detail, it works by identifying a binarization interval (centered in \(0\)): the parameters falling in this interval are represented using one bit. As done by other state-of-the-art binarization approaches, binarized parameters are converted to \(\{-\alpha,+\alpha\}\) according to their sign, where \(\alpha\) is a learned layer-wise scalar value (Figure 1). On the other hand, parameters outside the binarization interval are kept in full precision. By doing so, APB produces two different overlapping networks, i.e., a binary network and an extremely sparse full-precision network. Their combined expressiveness allows for improving the performance of binary networks without doubling the bits required to represent the weights, as done in \(2\)-bit quantization. This goal can be achieved when the number of full-precision weights is sufficiently low. In this case, APB also offers compelling inference properties. In fact, we experimentally show that the overhead given by the sparse network is negligible at our sparsity ratios. Moreover, we develop and present two novel matrix multiplication algorithms for CPU in extreme quantization scenarios. Our approach works by remapping a \(q\)-bits matrix to a set of \(q+1\) binary matrices. Then, these matrices can be efficiently multiplied by leveraging cheap bitwise operations. We provide a high-performance implementation of these novel algorithms on CPU and show that our implementation can be much faster than currently available general-purpose solutions [21] for quantized matrix multiplication. To the best of our knowledge, these are the first techniques tailored for extremely quantized matrices on CPU, except for the binary case. This allows the evaluation of the efficiency of highly quantized networks on CPU for the first time. Overall, we experimentally show that APB outperforms state-of-the-art quantization approaches in terms of memory/accuracy and efficiency/accuracy trade-offs.
In detail, the novel contributions of this work are:
* we introduce Automatic Prune Binarization (APB). This novel compression framework jointly maximizes the accuracy achieved by the network while minimizing its memory impact by deciding whether each weight should be binarized or kept in full precision. We show how the problem of partitioning the set of weights can be addressed using Stochastic Gradient Descent.
* we address the problem of the efficiency of quantized networks on CPU. We first show that the forward pass through a layer compressed using APB can be decomposed into a binary multiplication and a sparse binary multiplication. We discuss the efficiency of matrix multiplication as a function of the operand's bit width. We then design two novel matrix multiplication routines based on efficient bitwise operations available on CPU for extreme quantization scenarios. The source code of APB and our novel bitwise matrix multiplication routines will be released upon publication of the manuscript.
* we provide an extensive experimental evaluation on two public datasets used in model compression, i.e., CIFAR-10 [22] and ImageNet [5]. Experiments show that APB offers better accuracy/memory trade-off compared to any state-of-the-art compression method based on i) quantization, ii) pruning, iii) the combination of quantization and pruning.
* we evaluate the performance of our novel low-bits matrix multiplication routines when employed for the forward pass of neural networks on ImageNet. Our methods are \(6.85\times\) and \(1.5\times\) faster than available state-of-the-art solutions. Moreover, APB shows superior performance even in the accuracy/efficiency trade-off, being \(2\times\) faster than the \(2\)-bit quantized model with no loss in accuracy.
The rest of the paper is organized as follows: in Section 2 we present the related work in binarization, low-bits quantization, pruning, and efficient inference on CPU for neural networks. In Section 3 we present our novel APB compression framework mixing full-precision and binary weights. In Section 4 we discuss the efficiency of matrix multiplication at different bit widths and we introduce our novel low-bits matrix multiplication routines. Section 5 presents an experimental evaluation of APB. First, we discuss the memory compression performance of APB (Section 5.2). Then, we evaluate the performance of our matrix multiplication algorithms and compare the execution time of the quantized networks against APB (Section 5.3). Finally, Section 6 concludes the work and draws some future lines of research.
## 2 Related Work
Our method lies at the intersection of two families of compressors, i.e., _binarization/low-bit quantization_[4] and _pruning_[23, 14]. In the following, we review the main contributions in these lines along with the ones investigating _efficient inference_ algorithms on quantized networks [31].
**Binarization**. Pioneering works on binarization are BinaryConnect [4] and XNOR-Net [34]. These methods rely on the _sign_ function to constrain the weights in \(\{-1,+1\}\) and scale them by the mean of their absolute value. More recent work leverages advanced techniques to train highly accurate binary models. In this line, some works show that maximizing the entropy of the binarized weights is an effective approach [25, 33]. Xu _et al._ show the importance of _latent weights_, i.e., full-precision weights used during backpropagation and weight update. The authors focus on _dead weights_, i.e., weights that are rarely updated due to their distance from the origin. They show that these weights are responsible for hampering the training process, and they propose a tailored Rectified Clamp Unit to revive those weights. Liu _et al._ tackle the problem of frequent weight flipping, i.e., weights changing their sign, by employing two learnable scaling gradient factors for the activations, one for each of the binary states (\(\{-1,+1\}\)) [27]. Another family of approaches proposes architectural changes to the network to quantize. Bi-Real Net adds a double skip-connection on the ResNet architecture to sum the real-valued input with the features obtained after a binary convolution [29]. A well-known solution in this field is ReActNet [28]. Here, the authors duplicate the input channels of convolutional layers, introduce tailored activation functions for binary networks, and employ knowledge distillation to enhance the training phase. Hu _et al._ introduce _Squeeze_ and _Expand_ layers aiming at combining input and output activations [18].
**Low-bits Quantization**. Quantization techniques often rely on the Straight-Through Estimator (STE) [1] to propagate the gradients through non-differentiable quantization functions. Lee _et al._ propose to overcome the limits of STE by exploiting an element-wise gradient correction method [24]. Their approach, named Element-Wise Gradient Scaling (EWGS), scales the gradient of each full-precision weight according to three factors: i) its sign, ii) the gap between the full-precision and quantized value, iii) a scaling factor learned through an approximation of the Hessian. SLB employs a continuous relaxation strategy to overcome the gradient mismatch problem [48]. Each weight of the network is mapped to a probability distribution that represents the values it can assume with a \(q\)-bits representation. The values associated with the highest probabilities are selected as quantization values. Yamamoto proposes a non-uniform quantization method for pre-activation ResNet models [44]. The approach is based on learnable functions that wrap the quantization process to allow effectively tuning the quantization levels [47].
**Pruning**. Pruning techniques effectively sparsify neural networks with small/no accuracy degradation.1 Recently, a plethora of different pruning methods has been developed. For a comprehensive discussion, we recommend the reading of dedicated surveys [26]. We highlight that the importance of high absolute-value weights in neural networks was first discovered in pruning techniques. In fact, magnitude-based heuristics save high absolute-value weights while zeroing out the others. Han _et al._ are the first to apply magnitude-based pruning in conjunction with re-training of the network to mitigate the possible performance degradation [12]. Lately, magnitude pruning has been improved by introducing re-winding, namely reassigning the surviving parameters to their initialization values, showing that it outperforms fine-tuning on several network architectures and datasets [36]. A lot of effort has been spent in training sparse networks (or pruning them at initialization), rather than pruning after training. This interest is motivated by the so-called "Lottery Ticket Hypothesis", namely the existence of highly effective sparse networks in randomly initialized dense networks [8].
Footnote 1: with pruning, we always refer to element-wise pruning. See Liang _et al._[26] for a complete analysis of the difference between element-wise and structured pruning.
**Combination of Pruning and Quantization**. Several works explore the combination of pruning and quantization to fruitfully exploit the advantages provided by both techniques [6, 12, 25, 41, 42]. Han _et al._ [12] apply a clustering algorithm to the weights surviving the pruning phase, then optimize the value of the centroids using the average of the weight gradients. Bayesian Bits [42] is an approach where mixed-precision quantization is combined with pruning. In particular, network weights are either quantized to a power-of-two bit width or zeroed out in a data-driven fashion. In "Multi-Prize Lottery Ticket" (MPT) [6], Diffenderfer _et al._ mix pruning and binarization by i) discovering highly effective sub-networks in neural networks, and ii) binarizing the surviving values.
**Efficient Inference**. Nurvitadhi _et al._ study the efficiency of binary multiplication on different hardware platforms. The authors estimate a speedup of \(2\times\) of binary over single-precision multiplication on a CPU equipped with \(64\)-bit bitwise instructions [31]. Regarding efficient inference of binary networks on CPU, a major contribution is BitFlow, a binary convolution algorithm (_Pressed-Conv_) based on bit-packing on the channel dimension [19]. BitFlow provides \(1.8\times\) speedup compared to naive binary convolution. In this line, DaBnn is an efficient inference framework for binary neural networks on mobile platforms powered by ARM processors [50]. Our work is the first one studying the efficiency of low-bits quantized neural networks on CPU.
**Our Contribution**. The APB approach is orthogonal to other techniques mixing pruning and quantization. As an example, in works combining binarization and pruning [6], parameters are either zero or binary (\(\{-1,+1\}\)). APB, instead, enriches the expressiveness of binary networks with full-precision parameters in the same framework that effectively mixes binarization and pruning. This means that weights in APB-compressed networks are either binary or _full-precision_. To the best of our knowledge, we are the first to jointly optimize binary and full-precision parameters in a novel end-to-end compression framework.
## 3 Automatic Prune Binarization
We now describe APB, our novel compression framework that adds a few full-precision values to binary networks. First, we introduce the role of binarization. Second, we show how large absolute-value weights play different roles in binarization and pruning. Third, we formally introduce APB and show how its parameters can be optimized by leveraging Stochastic Gradient Descent (SGD). Finally, we show how to decompose the matrix multiplication into dense-dense and sparse-dense matrix multiplications.
**Binarization**. Let us consider \(W\in\mathbb{R}^{n}\) as the set of weights of a neural network. The scope of binarization is to employ a single bit to store each weight \(w\in W\), forcing \(w\) to be in \(\{-1,+1\}\). Previous studies show that it is possible to enhance the expressiveness of the model by scaling the weights with a scalar \(\alpha\) [34] so as to remap them to \(\{-\alpha,+\alpha\}\). We rely on re-scaled binarization, which requires a Bin operator defined as follows:
\[\texttt{Bin}(w)=\alpha\cdot\texttt{sign}(w)=\left\{\begin{array}{ll}+\alpha &\text{if }w\geq 0\\ -\alpha&\text{if }w<0.\end{array}\right. \tag{1}\]
The Bin operator is defined as a function of \(sign(w)\), whose derivative is zero almost everywhere. This hinders the usage of gradient-based approaches for training the model. For this reason, gradients are approximated with the Straight Through Estimator (STE) [1]. STE works by using a surrogate differentiable function \(g(w)\) that approximates \(sign(w)\). Hence, we can use the derivative of \(g()\) in place of the derivative of \(sign()\). In practice, STE imposes that:
\[\frac{\partial\texttt{Bin}(w)}{\partial w}=\frac{\partial g(w)}{\partial w}. \tag{2}\]
This derivative is used to update the full-precision weights \(W_{l}\), which are referred as _latent weights_[46] in the context of binarization. The final binary matrix \(W\) is then obtained by simply applying the Bin operator over \(W_{l}\).
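For concreteness, the following is a minimal PyTorch sketch of re-scaled binarization with an identity-STE backward; the class name and the choice of \(\alpha\) as the mean absolute latent weight (as in XNOR-Net [34]) are illustrative, not code from this paper.

```python
import torch

class BinSTE(torch.autograd.Function):
    """Re-scaled binarization Bin(w) = alpha * sign(w) (Eq. 1) with a
    straight-through estimator: backward copies the incoming gradient
    to the latent weights, i.e., g(w) = w (Eq. 2)."""

    @staticmethod
    def forward(ctx, w_latent, alpha):
        # torch.where enforces the w >= 0 convention of Eq. 1
        # (torch.sign would map 0 to 0 instead of +alpha).
        return torch.where(w_latent >= 0, alpha, -alpha)

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # identity STE for the latent weights

# Usage: alpha as the mean absolute latent weight
w_latent = torch.randn(64, 128, requires_grad=True)
alpha = w_latent.detach().abs().mean()
w_bin = BinSTE.apply(w_latent, alpha)
```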
**Large Absolute-value Weights**. Given the STE training strategy, the binary weight \(w\) is updated only when there is a flip of sign in the corresponding latent weight \(w_{l}\in W_{l}\), as a result of the gradient update. In a recent work [46], authors identified the problem of large absolute-value parameters that diverge from the zero-centered Laplacian distribution characterizing latent weights, namely _dead weights_. These weights interfere with the optimization phase giving a reduced chance to change their sign. In practice, they freeze part of the network, hindering the training process.
Even in pruning, large absolute-value weights play a central role. Here, network layers are sparsified by zeroing out less important weights, and the importance of each parameter is chosen according to a heuristic. Interestingly, a simple yet very effective heuristic to determine the importance of a parameter is its magnitude [13, 11]. Several works show that a small portion (\(<10\%\)) of large absolute-value weights is enough to match the performance of the dense model.
Large absolute-value weights thus play contrasting roles in binarization and pruning techniques. In binarization, they hamper and slow down the training process due to the low likelihood of changing their sign. In pruning, they synthesize the expressiveness of the overall model. We build our approach on this discrepancy: we leverage the advantages that large absolute-value weights provide in pruning while mitigating the drawbacks associated with binarization. This is the key intuition underlying APB, our novel method for effective compression of neural networks. APB keeps high absolute-value weights in full precision and binarizes the remaining ones. For this purpose, APB defines a symmetric binarization interval around zero (Figure 1): weights falling outside this interval are kept in full precision, while the others are mapped to \(\{-\alpha,+\alpha\}\) according to their sign. In this regard, APB can be interpreted as a pruning technique where small absolute-value weights are binarized instead of being zeroed out [13]. The amplitude of the binarization interval and the value of the scalar \(\alpha\) are optimized during training. As shown in Figure 2, APB compresses the network by applying pruning and binarization in parallel, i.e., each weight is either full-precision or binary. Conversely, in classical approaches mixing pruning and quantization/binarization, each parameter is either zeroed out or quantized/binarized [12, 41, 42]. Within APB, two different networks coexist during training: a binary and a sparse network.
**APB**. Given the considerations on the role of large absolute-value weights, APB employs weight magnitude to determine whether a weight should be binarized or kept in full precision. Our approach is partially inspired by [45], which introduces a trainable threshold to determine whether a weight should be set to 0 or 1. The APB operator on a weight \(w\) is defined as:
\[\texttt{APB}(w)=\left\{\begin{array}{ll}\texttt{sign}(w)\,\alpha&\text{ if }|w|\leq\alpha+\delta\\ w&\text{ otherwise,}\end{array}\right. \tag{3}\]
with \(\delta\) being the amplitude of the binarization interval exceeding \(\alpha\). Figure 3 graphically depicts how APB works. The weights whose absolute value ranges in \([0,\alpha+\delta]\) are binarized (orange area), while the parameters falling outside this interval are kept in full precision (green area).
If \(w\) is within the binarization interval, APB is not differentiable. In this case, we apply STE by employing the identity function, i.e., \(id(x)=x\), as \(g(w)\)[1]. Thus, the derivative becomes:
\[\frac{\partial\texttt{APB}}{\partial w}=\left\{\begin{array}{ll}g^{\prime}(w )&\text{ if }|w|\leq\alpha+\delta\\ 1&\text{ otherwise.}\end{array}\right. \tag{4}\]
Assuming \(\delta\geq 0\), we can re-write
\[|w|\leq\alpha+\delta\quad\Rightarrow\quad\frac{|w|-\alpha}{\delta}\leq 1. \tag{5}\]
To ease the notation, we define \(\hat{w}=\frac{|w|-\alpha}{\delta}\). This entails:
\[\texttt{APB}(w)=\left\{\begin{array}{ll}\texttt{sign}(w)\,\alpha&\text{ if }\hat{w}\leq 1\\ w&\text{ otherwise.}\end{array}\right. \tag{6}\]
We define the indicator function of the set of binarized weights as \(\chi_{B}:=\mathds{1}(\hat{w}\leq 1)\). We can now define the gradients of \(\alpha\) and \(\delta\) by unrolling the derivatives of the loss function \(\mathcal{L}\). We introduce the indicator function to constrain \(\alpha\) and \(\delta\) to depend exclusively on the binarized weights, leaving them independent of the weights that APB keeps in full precision.
\[\frac{\partial\mathcal{L}}{\partial\delta}=\frac{1}{n}\sum\frac{\partial\mathcal{L}}{\partial\hat{w}}\frac{\partial\hat{w}}{\partial\delta}\chi_{B}=\frac{1}{\delta^{2}n}\sum\frac{\partial\mathcal{L}}{\partial\hat{w}}(\alpha-|w|)\chi_{B}. \tag{7}\]
Figure 2: Network weights representation in binarization (left), pruning (center), APB (right). In APB, binary and full-precision weights coexist in the same matrix. For full-precision entries, we require to store also the index inside the weight matrix.
\[\frac{\partial\mathcal{L}}{\partial\alpha}=\frac{1}{n}\sum\frac{\partial\mathcal{L} }{\partial\bar{w}}\frac{\partial\hat{w}}{\partial\alpha}=-\frac{1}{\delta n} \sum\frac{\partial\mathcal{L}}{\partial\bar{w}}\chi_{B}. \tag{8}\]
We also observe that
\[\frac{\partial\mathcal{L}}{\partial w}=\frac{1}{\delta}\frac{\partial\mathcal{L}}{\partial\hat{w}}\,\text{sign}(w). \tag{9}\]
We can compute \(\frac{\partial\mathcal{L}}{\partial\alpha}\) and \(\frac{\partial\mathcal{L}}{\partial\delta}\) using \(\frac{\partial\mathcal{L}}{\partial w}\), the standard derivative of the cost function w.r.t. the weights, obtained using the backpropagation algorithm [37].
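As a sketch of how Equations 3-9 fit together, the following PyTorch `autograd.Function` implements the APB forward pass and wires the gradients of \(\alpha\) and \(\delta\); recovering \(\partial\mathcal{L}/\partial\hat{w}\) from the STE weight gradient through Equation 9 is our own bookkeeping choice, and the snippet is illustrative rather than the authors' implementation.

```python
import torch

class APBOp(torch.autograd.Function):
    """APB operator (Eq. 3): binarize weights with |w| <= alpha + delta,
    keep the rest in full precision. Backward follows Eqs. 4, 7, 8."""

    @staticmethod
    def forward(ctx, w, alpha, delta):
        chi_b = w.abs() <= alpha + delta              # indicator chi_B
        ctx.save_for_backward(w, alpha, delta, chi_b)
        return torch.where(chi_b, torch.sign(w) * alpha, w)

    @staticmethod
    def backward(ctx, grad_out):
        w, alpha, delta, chi_b = ctx.saved_tensors
        n = w.numel()
        grad_w = grad_out                             # Eq. 4 with g = identity
        grad_what = delta * torch.sign(w) * grad_out  # Eq. 9, inverted
        grad_delta = (grad_what * (alpha - w.abs()) * chi_b).sum() / (delta**2 * n)  # Eq. 7
        grad_alpha = -(grad_what * chi_b).sum() / (delta * n)                        # Eq. 8
        return grad_w, grad_alpha, grad_delta

# Per-layer learnable scalars, initialized as described in Section 5.1
w = torch.randn(256, 256, requires_grad=True)
alpha = w.detach().abs().mean().clone().requires_grad_(True)
delta = (3 * w.detach().std()).clone().requires_grad_(True)
out = APBOp.apply(w, alpha, delta)
```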
**Memory Impact**. We denote with \(\texttt{Mem}(W)\) the memory impact of the matrix \(W\in\mathbb{R}^{n}\) compressed using APB, expressed in bits. Assuming \(s\) surviving full-precision entries, we need to store \(n-s\) binary weights and \(s\) full-precision values with their positions. However, since \(s\ll n\), and to ease the matrix multiplication, the binary matrix is fully represented. Hence:
\[\texttt{Mem}(W)=(n-s)+s(b_{v}+b_{p})\simeq n+s(b_{v}+b_{p}) \tag{10}\]
where \(b_{v}\) is the bit width of the full precision values (\(32\) for floating point) and \(b_{p}\) represents the bits needed to store their positions in the matrix. For each neural architecture under evaluation, we compute \(b_{p}\) as \(\max_{i}\log_{2}(k_{i}-1)+1\), where \(k_{i}\) is the dimension of layer \(i\).
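A small sketch of Equation 10, assuming the index width \(b_{p}\) uses the floor of the logarithm, as the expression above suggests:

```python
import math

def apb_memory_bits(n, s, layer_sizes, b_v=32):
    """Mem(W) ~ n + s * (b_v + b_p) (Eq. 10), with b_p the index width
    max_i floor(log2(k_i - 1)) + 1 over the layer dimensions k_i."""
    b_p = max(int(math.log2(k - 1)) + 1 for k in layer_sizes)
    return n + s * (b_v + b_p)

# Example: a 512x512 layer with 0.5% surviving full-precision entries
n = 512 * 512
print(apb_memory_bits(n, s=n // 200, layer_sizes=[n]) / n)  # ~1.25 bits/weight
```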
**Inference Considerations**. We now discuss how to efficiently multiply a weight matrix \(A\), compressed using APB, against an input activation matrix \(B\).2 For every \(A_{i}\in A\), we define the mask associated with \(A\):
Footnote 2: Convolution can be converted to matrix multiplication using the _im2col_ technique [3].
\[\texttt{Mask}(A_{i})=\left\{\begin{array}{ll}1&\text{ if }A_{i}\in\{ \neg\alpha,+\alpha\}\\ 0&\text{ otherwise.}\end{array}\right. \tag{11}\]
We decompose \(A=A^{\text{bin}}+A^{\text{full}}\), where \(A^{\text{bin}}\in\{-\alpha,+\alpha\}^{n}\) is a binary matrix and \(A^{\text{full}}\in\mathbb{R}^{n}\) is a sparse full-precision matrix. \(A^{\text{bin}}\) is defined as \(A^{\text{bin}}_{i}=\alpha\cdot\text{sign}(A_{i})\), while \(A^{\text{full}}\) is given by
\[A^{\text{full}}_{i}=\left\{\begin{array}{ll}0&\text{ if }\texttt{Mask}(A_{i})=1\\ A_{i}-A^{\text{bin}}_{i}&\text{ otherwise.}\end{array}\right. \tag{12}\]
Given an input matrix \(B\) and the distributive property of matrix multiplication, we can write
\[C=A\cdot B=(A^{\text{bin}}+A^{\text{full}})\cdot B=A^{\text{bin}}\cdot B+A^{ \text{full}}\cdot B. \tag{13}\]
Since \(A^{\text{bin}}\) is a binary matrix, the only overhead introduced by APB is a sparse-dense matrix multiplication. Due to the extreme sparsity ratios of \(A^{\text{full}}\), the sparse-dense multiplication can be efficiently performed with tailored implementations such as LIBXSMM [17].
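To make the decomposition concrete, the following is a quick NumPy sketch; the shapes, the value of \(\alpha\), and the \(0.5\%\) density of full-precision entries are illustrative assumptions, not values from the paper.

```python
import numpy as np

# APB inference decomposition (Eqs. 11-13): split an APB-compressed
# weight matrix A into a binary part and a sparse full-precision part.
rng = np.random.default_rng(0)
alpha = 0.05
A = alpha * np.sign(rng.standard_normal((256, 256)))   # binary weights
full_idx = rng.random(A.shape) < 0.005                  # few full-precision entries
A[full_idx] = rng.standard_normal(full_idx.sum())

A_bin = alpha * np.sign(A)                              # {-alpha, +alpha}
A_full = np.where(full_idx, A - A_bin, 0.0)             # sparse correction (Eq. 12)

B = rng.standard_normal((256, 64))
C = A_bin @ B + A_full @ B                              # Eq. 13
assert np.allclose(C, A @ B)
```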
## 4 Bitwise Matrix Multiplication
The efficiency of neural inference heavily relies on the efficiency of matrix multiplication (MM). MM has been widely studied [10, 20, 43] due to its paramount role in many scientific applications. Here we discuss the efficiency of MM at different quantization levels and we introduce our novel routines for optimized matrix multiplication in low-bit configurations. In the following, we employ the notation w/a to specify the bit widths of the weight and activation matrices, respectively. In particular, we show how to implement efficient \(1/2\) and \(2/2\) MM using logical and bitwise operators. To the best of our knowledge, we are the first to investigate the efficiency of such low-bits configurations on CPU.
**Efficiency of Matrix Multiplication**. Matrix multiplication is known to be a memory-bounded problem. Indeed, several techniques have been developed to mitigate this aspect, and, nowadays, its efficiency mostly depends on the available computational power.3 The core operation of MM is the _update_ function \(c\gets c+ab\), which is recursively applied on portions of the input matrices \(a,b\) to incrementally compute the output \(c\). In the context of neural inference, \(c\), \(a\), and \(b\) represent the output, the weights, and the input of each layer, respectively. Modern CPUs allow performing the operation above with the _fused-multiply add_ (fma) instruction, which computes the update with the same latency and throughput of an add instruction [7]. Furthermore, modern CPUs can rely on instruction-level parallelism (SIMD) which allows the processing of multiple inputs at the same time. As an example, the _mm512_fmadd_ps instruction computes the operation \(c\gets c+ab\) on three vectors of \(16\) full-precision values each. The theoretical peak performance (\(tpp\)) measures how many update functions (up) can be carried out per second (sec) in an ideal scenario assuming that the memory cost is negligible, i.e.,
Footnote 3: [https://en.algorithmica.org/hpc/algorithms/matmul/](https://en.algorithmica.org/hpc/algorithms/matmul/)
\[tpp=cf\cdot tp\cdot v\ \frac{\texttt{up}}{\text{sec}}, \tag{14}\]
where \(cf\) is the clock frequency, \(tp\) is the throughput of the fma instruction and \(v\) is the number of operands that can be stored on a CPU register. \(v\) is computed by dividing the width of the SIMD register, e.g., \(512\) for avx-512, by the number of bits of the operand.
Network quantization exploits the performance gain obtained by reducing the bit width of weights and activations, hinging on the extreme flexibility of neural networks to model compression. Observe that every time that the operand bit width halves, twice the data fit into the same CPU register (\(v\)). Consequently, \(tpp\) doubles each time the
Fig. 3: Automatic Prune Binarization (APB) applied on the network parameters \(W\). The width of the binarization interval (orange area) is defined by \(\alpha+\delta\). A weight \(w_{i}\in W\) is binarized if \(|w_{i}|\leq\alpha+\delta\), otherwise it is kept in full precision (green area).
operand bit width halves. However, on modern CPUs the fma instruction exists only for double/single/half-precision float and \(8\)-bits integer values. For smaller bit widths, e.g., \(2\) or \(3\) bits, the use of the fma instruction requires to upcast the operands to the closest supported data type, i.e., \(8\) bits. The situation is different for a binary network as it allows to implement MM by leveraging addition/subtraction or bitwise operations. This motivates us to investigate fast bitwise matrix multiplication techniques for efficient neural inference on CPU. We now discuss the \(1/32\) and the \(1/1\) cases, then we present our novel bitwise matrix multiplication routines for the \(1/2\) and the \(2/2\) scenarios.
**1/32**. In this quantization schema, weights \(w\) are constrained to assume values in \(\{\text{-}1,1\}\). Conversely, activations \(a\) are kept in full precision, i.e., they are represented by using 32-bit floating point. This configuration is explored in several state-of-the-art quantization works [9, 33, 51], as it features a \(32\times\) memory saving compared to the single-precision representation. In principle, binary weights convert multiplication into additions and subtractions [34]. We will now show that this conversion does not deliver remarkable speedup over classical multiplication. Given \(w\in\{\text{-}1,1\}\), in Equation 15 we show how to rewrite the dot product between \(w\) and \(a\) into a series of additions and subtractions. We can write,
\[w\cdot a=\sum_{i=1}^{n}w_{i}a_{i}=\sum_{j\in\mathcal{I}_{+}}a_{j}-\sum_{j\in \mathcal{I}_{-}}a_{j}. \tag{15}\]
where \(\mathcal{I}_{+}=\{i\mid w_{i}=1\}\) and \(\mathcal{I}_{-}=\{i\mid w_{i}=\text{-}1\}\). In detail, \(\mathcal{I}_{+}\) is the set of indexes of positive weights (\(w_{i}=1\)), while \(\mathcal{I}_{-}\) keeps track of negative weights (\(w_{i}=\text{-}1\)). In this formulation, we simply accumulate the activations in correspondence with positive weights and then subtract the sum of the activations in correspondence with negative weights. Indeed, the fma operation achieves the same latency and throughput as the add/sub operations on modern CPUs.4 This means that the update function \(c\gets c+ab\) (Equation 15) costs \(2\times\) the fma-based full-precision matrix multiplication. We can think of a more efficient approach that reduces the update function to a single addition operation. Let us define
Footnote 4: [https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html](https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html)
\[T=\sum_{i=1}^{n}a_{i},\quad I_{+}=\sum_{j\in\mathcal{I}_{+}}a_{j},\quad I_{-}= \sum_{j\in\mathcal{I}_{-}}a_{j}. \tag{16}\]
with \(T=I_{+}+I_{-}\) by construction. \(T\) is the sum of the activations along the columns and it does not depend on the weights \(w\). This means that \(T\) can be computed at the beginning of the multiplication and then re-used for all the columns of \(a\). Moreover, it can be computed in \(\Theta(n^{2})\) time as it requires the sum of \(n\) values along \(n\) columns. Its impact is negligible compared to \(\Theta(n^{3})\), the time needed to run the matrix multiplication. \(T\) can be exploited to avoid the computation of one between \(I_{+}\) and \(I_{-}\). For example, we can obtain \(I_{-}\) as \(I_{-}=T-I_{+}\). Thus, we can compute \(w\cdot a\) as
\[w\cdot a=I_{+}-I_{-}=2\,I_{+}-T.\]
The cost of multiplying by \(2\) and subtracting \(T\) is negligible, as it does not depend on the size of the input. The cost of multiplying \(w\in\{\text{-}1,1\}\) and \(a\) is dominated by the computation of \(I_{+}\). It can be efficiently implemented by leveraging a masked floating-point addition, where the mask identifies the \(i\in\mathcal{I}_{+}\). On Intel's recent AVX512 instruction set, the masked floating-point addition can be implemented by using the _mm512_mask_add_ps instruction. However, this instruction has the same latency and throughput of fma. As a consequence, it does not offer any computational advantage compared to 32-bit floating-point MM. Moreover, the binary representation of the weights is useless in this case, as they must be converted into 32-bit float numbers when performing the vectorized masked addition on CPU. This can be done in two ways: i) the representation of the binary weights is expanded in memory, with the consequence of achieving the same memory footprint as full-precision computation, or ii) they are directly expanded to \(32\) bits when moved to the CPU registers, with the consequence of having an overhead due to bit extraction. We conclude that the \(1/32\) scenario does not offer strong computational advantages compared to a \(32\)-bit floating-point implementation. The observations we draw also hold if activations are quantized to \(8\) or \(16\) bits.
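As a minimal sketch of the identity above (sizes and values are illustrative), note that only the masked sum \(I_{+}\) depends on the weights, while \(T\) is precomputed once per input column:

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.where(rng.random(1024) < 0.5, 1.0, -1.0)   # binary weights in {-1, +1}
a = rng.standard_normal(1024)                      # full-precision activations

T = a.sum()                # weight-independent total, precomputed once
I_plus = a[w > 0].sum()    # masked accumulation over positive weights
assert np.isclose(2 * I_plus - T, w @ a)           # w . a = 2*I_plus - T
```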
**1/1**. Several works investigate how to effectively train fully binary neural networks [18, 25, 33, 34, 46, 27], where both weights \(w\) and activations \(a\) can assume two values, i.e., \(\{\text{-}1,+1\}\). Besides the memory savings achieved, \(1/1\) also allows fast inference. In fact, given two binary vectors \(u,v\in\{\text{-}1,1\}^{n}\), their dot product can be computed as
\[u\cdot v=n-2\cdot\text{popcount}(\text{xor}(u,v)). \tag{17}\]
We observe that Equation 17 holds, up to scaling, for every \(u^{\gamma}\in\{-\gamma,\gamma\}^{n}\), \(v^{\beta}\in\{-\beta,\beta\}^{n}\), as \(u^{\gamma}\cdot v^{\beta}=\gamma\beta\,(u\cdot v)\). It is, in fact, a common practice to add scalar multipliers to enrich the expressiveness of binary networks.
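The kernel of Equation 17 can be checked on plain Python integers, taking bit \(1\) to encode \(+1\) and bit \(0\) to encode \(-1\); `int.bit_count` plays the role of popcount (Python 3.10+):

```python
def binary_dot(u_bits: int, v_bits: int, n: int) -> int:
    """u . v = n - 2 * popcount(xor(u, v)) (Eq. 17), with bit 1
    encoding +1 and bit 0 encoding -1."""
    return n - 2 * (u_bits ^ v_bits).bit_count()

# u = (+1, -1, +1, +1), v = (+1, +1, -1, +1): dot = 1 - 1 - 1 + 1 = 0
assert binary_dot(0b1011, 0b1101, 4) == 0
```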
We compute the \(tpp\) for binary multiplication to compare it with full-precision computation. This requires estimating the optimal execution flow for the three avx-512 operators (xor, popcount, add) required to perform the update function. We assume to work on a modern CPU such as the Intel Skylake architecture, equipped with \(512\)-bit registers and a popcount instruction that enables the computation of the popcount operation on \(512\)-bit registers. Recall that modern architectures have different execution units, each associated with a different port.5 Instructions employing different ports can be executed during the same clock cycle. Moreover, if an instruction uses \(k\) different ports, it is executed \(k\) times in the same clock cycle. In this architecture, xor and add work on ports \(\{0,5\}\) (throughput \(2\)), while popcount works exclusively on port \(5\) (throughput \(1\)) [7]. Hence, even in a perfectly pipelined execution flow, the computation of the update function in a single clock cycle, as done in the dense case using the fma operation, is unfeasible. This would be possible only if all the operations (xor, popcount, and add) were assigned to different ports. In the best-case scenario, we can perform \(2\) complete updates (\(6\) instructions, \(1024\) values) every \(3\) clock cycles (\(2\) instructions per clock cycle). The number of values processed per clock cycle, on average, is \(\frac{1024}{3}\simeq 341\). The same processor, equipped with avx-512 registers and the
fma instruction with throughput \(2\), can process \(\frac{512}{32}=16\) 32-bit floating-point values per port, namely \(32\) floating-point values per clock cycle. This means that the theoretical speedup offered by binary multiplication is \(10.5\times\). We show that our implementation of binary multiplication reaches the theoretical speedup compared to a state-of-the-art library for matrix multiplication, oneDNN.6 In practice, the empirical speedup over the equivalent single-precision MM can be up to \(16\times\) on rectangular matrices. In fact, it has been shown that dense multiplication does not reach the \(tpp\) on non-squared matrices [30], while our bitwise routine can. We will detail this aspect in our experimental evaluation, Section 5.3.
Footnote 6: [https://github.com/oneapi-src/oneDNN](https://github.com/oneapi-src/oneDNN)
**1/2**. This quantization schema achieves more accurate models than \(1/1\). State-of-the-art quantization approaches such as SLB [48] and EWGS [24] have shown that \(1/2\) delivers up to \(9\) points of Top1 accuracy gain on the Image Classification task on ImageNet compared to the \(1/1\) quantization. Despite the effectiveness improvements achieved, no efficient MM routines have been proposed for this configuration on CPU. In this regard, we design a novel matrix multiplication routine for \(1/2\) quantization that exploits bitwise operations as in the \(1/1\) case.
Let us consider a binary vector \(w\in\{-\gamma,\gamma\}^{n}\) and a 2-bits uniformly quantized vector \(a\in\{a_{0},a_{1},a_{2},a_{3}\}^{n}\), with \(a_{p}=ps\), \(p\in\{0,1,2,3\}\), where \(s\) represents the distance between quantization levels and \(p\) is the index of the quantized value. We want to efficiently compute \(w\cdot a\) using logical and bitwise operators, as in Equation 17. \(a\) stores the activations of a neural model, so it is generally quantized to positive values (asymmetric quantization7). This is due to the choice of ReLU as an activation function, which zeroes out negative inputs. For the purpose of our technique, it is useful to re-map \(a\) to a zero-centered symmetric distribution. We center \(a\) by subtracting its mean. We obtain \(\bar{a}\) as
Footnote 7: [https://pytorch.org/blog/quantization-in-practice/](https://pytorch.org/blog/quantization-in-practice/)
\[\bar{a}_{i}=a_{i}-\mu=a_{i}-\frac{a_{0}+a_{3}}{2}=a_{i}-\frac{3}{2}s\]
\(\forall i=0,\ldots,n\). This converts the computation of \(w\cdot a\) into
\[w\cdot a=\sum_{i=1}^{n}w_{i}(\bar{a}_{i}+\mu)=\sum_{i=1}^{n}w_{i}\bar{a}_{i}+ \sum_{i=1}^{n}w_{i}\mu.\]
The last term is the sum along the rows of the weight matrix, scaled by \(\mu\). Thus, it can be computed _offline_, before the matrix multiplication starts, as both terms are known a priori. Let us consider the update function \(c\gets c\ +ab\). We can initialize the matrix \(c\) with these pre-computed row-wise terms, rather than with zeros, to avoid any impact of this computation on the performance. Observe that \(\bar{a}_{i}\in\{-\frac{3}{2}s,-\frac{1}{2}s,\frac{1}{2}s,\frac{3}{2}s\}\). Given that the multiplications by \(\gamma\) and \(s\) do not impact the performance, we need to multiply a \(\{-1,1\}\) binary vector against a 2-bits vector whose values are in \(\mathcal{D}_{a}=\{-\frac{3}{2},-\frac{1}{2},\frac{1}{2},\frac{3}{2}\}\). \(\mathcal{D}_{a}\) is composed of two pairs of zero-symmetric values, namely \(\{-\frac{3}{2},\frac{3}{2}\}\) and \(\{-\frac{1}{2},\frac{1}{2}\}\). This suggests the possibility to employ a tailored representation made of multiple binary vectors. Hence, we represent \(a\) using three binary vectors, computed using the transformation
\[T:\mathcal{D}_{a}^{n}\rightarrow\{0,1\}^{n\times 3},\qquad a\mapsto\{t,h,m\} \tag{18}\]
The definition of \(T\) is reported in Table I. Observe that, from now on, we will represent binary vectors using \(\{0,1\}\) instead of \(\{-1,1\}\). This eases the explanation of our approach and it is also coherent with how binary vectors are actually represented during computation. The transformation \(T\) generates two binary vectors \(t,h\), and a binary mask \(m\). The names of the vectors suggest that \(t\) (_three halves_) refers to \(\{-\frac{3}{2},\frac{3}{2}\}\), while \(h\) (_one half_) refers to \(\{-\frac{1}{2},\frac{1}{2}\}\).
Two binary vectors, i.e., \(t\) and \(h\), are built according to the following rules.
* a bit set to \(1\) uniquely identifies the entry of the original vector \(a\). This means that if \(t_{i}=1\Rightarrow a_{i}=\frac{3}{2}\) and if \(h_{i}=1\Rightarrow a_{i}=\frac{1}{2}\).
* a bit set to \(0\) is intentionally ambiguous: \(t_{i}=0\Rightarrow a_{i}\in\{-\frac{3}{2},-\frac{1}{2},\frac{1}{2}\}\), \(h_{i}=0\Rightarrow a_{i}\in\{-\frac{3}{2},-\frac{1}{2},\frac{3}{2}\}\).
The mask \(m\) is introduced to determine the uncertain entries of \(t\) and \(h\). We have that
* \(m_{i}=1\wedge t_{i}=0\Rightarrow a_{i}=-\frac{3}{2}\);
* \(m_{i}=0\wedge h_{i}=0\Rightarrow a_{i}=-\frac{1}{2}\).
To ease the notation, we define the set of _active entries_ for \(t\) as \(E_{t}=\{i\mid m_{i}=1\}\). The set of active entries for \(h\) is \(E_{h}=\{i\mid\bar{m}_{i}=1\}\). We highlight that \(T\) is a bijective function that permits uniquely matching \(a\) with its ternary representation \(\{t,h,m\}\). By using the mask \(m\), we introduce a memory overhead of one bit per each value of \(a\). On the other hand, \(m\) is necessary to split the \(1/2\) multiplication into two \(1/1\) multiplications. The idea is to multiply the binary vector \(w\) with \(t\) and \(h\) respectively by involving in the computation only the active entries identified by the mask \(m\). The considerations above inherently define a ternary operator \(\texttt{mbm}(x,y,z)\), which we define on two generic binary vectors \(x,y\), and a binary mask \(z\). In this case, \(x\) plays the role of the weight vector, \(y\) represents one between \(t\) and \(h\), while \(z\) is the mask corresponding to \(y\), i.e., \(m\) and \(\bar{m}\) when masking \(t\) and \(h\), respectively. The set of active entries of \(y\) is \(E_{y}=\{i\mid z_{i}=1\}\).
We now discuss how to implement the mbm operator. First, let us focus on Equation 17. It works by subtracting from the total number of bits involved in the computation, i.e., \(n\), the number of times the bits of \(u\) and \(v\) have different values, multiplied by \(2\). Our approach to compute \(\texttt{mbm}(x,y,z)\) is to multiply \(x\) and \(y\) using Equation 17, and then exploit the mask \(z\) to fix the result. There are two corrections to be applied. The first one concerns the number of bits actually involved in the computation, which is the cardinality of the active entries: \(|E_{y}|=\sum_{i}z_{i}\), which can be computed with a \(\texttt{popcount}(z)\). Now we need to count how many times \(x\) and \(y\) have different values in correspondence with active entries. First, we compute \(d=\texttt{xor}(x,y)\):
\(d_{i}=1\) implies \(x_{i}\neq y_{i}\). These entries should be taken into account only if \(z_{i}=1\), i.e., only in correspondence with active entries of the vector \(y\). This can be computed with \(\mathtt{popcount}(\mathtt{and}(\mathtt{xor}(x,y),z))\).
Given these considerations, \(\mathtt{mbm}\) is defined as
\[\mathtt{mbm}(x,y,z)=\mathtt{popcount}(z)-\\ 2\ \mathtt{popcount}(\mathtt{and}(\mathtt{xor}(x,y),z)). \tag{19}\]
The overall \(1/2\) routine based on bitwise operations consists in applying the \(\mathtt{mbm}\) routine twice, first on the active entries of \(t\) identified by \(m\), and second on the active entries of \(h\) identified by \(\bar{m}\).
\[w\cdot a=\mathtt{mbm}(w,t,m)+\mathtt{mbm}(w,h,\bar{m}). \tag{20}\]
As mentioned in Section 4, mbm can be efficiently implemented using the _mm512_ternarylogic_epi64 instruction. This instruction allows computing any ternary logic function, i.e., any logic function with three inputs, and has the same latency and throughput of the xor instruction. That said, the cost of executing the mbm operator is the same as the one of computing Equation 17. \(\texttt{popcount}(z)\) is the sum along the columns of a bit matrix. It runs in \(\Theta(n^{2})\) and its cost is negligible with respect to the cost of matrix multiplication, i.e., \(\Theta(n^{3})\). Since one \(1/2\) update requires two mbm operations instead of one, we estimate that \(tpp_{1/2}\simeq\frac{1}{2}\,tpp_{1/1}\).
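A minimal sketch of the \(1/2\) routine of Equations 19-20, with NumPy boolean arrays standing in for machine words; the per-term scalings by \(\frac{3}{2}\) and \(\frac{1}{2}\) (which, like \(\gamma\) and \(s\), can be applied after the popcounts at negligible cost) are reinstated here so that the assertion checks the exact dot product:

```python
import numpy as np

def mbm(x, y, z):
    """mbm(x, y, z) = popcount(z) - 2 * popcount(and(xor(x, y), z))
    (Eq. 19), on boolean arrays standing in for machine words."""
    return int(z.sum()) - 2 * int((np.logical_xor(x, y) & z).sum())

rng = np.random.default_rng(2)
w_sign = np.where(rng.random(64) < 0.5, 1.0, -1.0)   # binary weights
a = rng.choice([-1.5, -0.5, 0.5, 1.5], size=64)      # zero-centered 2-bit activations

x = w_sign > 0                     # weight bits (1 encodes +1)
t, h = a == 1.5, a == 0.5          # transformation T (Table I)
m = np.abs(a) == 1.5               # mask: active entries of t

dot = 1.5 * mbm(x, t, m) + 0.5 * mbm(x, h, ~m)
assert np.isclose(dot, w_sign @ a)
```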
**2/2.** This quantization schema almost closes the effectiveness gap with full-precision networks [24, 48, 51, 9]. In this quantization schema, both the weight vector and the activation vector are \(2\)-bit quantized vectors. The naive \(2/2\) MM can be obtained via an up-cast to \(8/8\), which allows the use of the fma instruction as discussed before.
We design an efficient alternative solution, where both the weights and the activations are decomposed into three binary vectors. The first step is to zero-center the two vectors by following the procedure described for the \(1/2\) case. By doing that, we obtain the following \(w\) and \(a\).
\[w\in s_{w}\,\{-\frac{3}{2},-\frac{1}{2},\frac{1}{2},\frac{3}{2}\}^{n},\quad a\in s_{a}\,\{-\frac{3}{2},-\frac{1}{2},\frac{1}{2},\frac{3}{2}\}^{n}\]
where \(s_{w}\) and \(s_{a}\) represent the quantization step for \(w\) and \(a\), respectively. The idea behind the \(2/2\) routine is to apply the \(1/2\) routine twice. Thus, the transformation \(T\) is applied to both \(w\) and \(a\). We obtain \(T(w)=\{t_{w},h_{w},m_{w}\}\) and \(T(a)=\{t_{a},h_{a},m_{a}\}\). The multiplication is decomposed as follows
* the active entries of \(t_{w}\) identified by \(m_{w}\), are multiplied by the active entries of \(t_{a}\), identified by \(m_{a}\);
* the active entries of \(t_{w}\), identified by \(m_{w}\), are multiplied by the active entries of \(h_{a}\), identified by \(\bar{m}_{a}\);
* the active entries of \(h_{w}\), identified by \(\bar{m}_{w}\), are multiplied by the active entries of \(t_{a}\), identified by \(m_{a}\);
* the active entries of \(h_{w}\), identified by \(\bar{m}_{w}\), are multiplied by the active entries of \(h_{a}\), identified by \(\bar{m}_{a}\).
Finally, the results are summed together. We observe that for each one of the four bullets above, we have two different sets of active entries. The actual set of active entries is given by their intersection. This is implemented by a logical and between the two masks involved. To summarize, our novel routine for \(2/2\) MM allows computing \(w\cdot a\) as
\[w\cdot a= \mathtt{mbm}(t_{w},h_{w},m_{w}\wedge m_{a})+\] \[\mathtt{mbm}(t_{w},h_{a},m_{w}\wedge\bar{m}_{a})+\] \[\mathtt{mbm}(h_{w},t_{a},\bar{m}_{w}\wedge m_{a})+\] \[\mathtt{mbm}(h_{w},h_{a},\bar{m}_{w}\wedge\bar{m}_{a}).\]
Since a \(2/2\) update composes four masked binary multiplications together with the mask intersections, we estimate \(tpp_{2/2}\simeq\frac{1}{4}\,tpp_{1/2}\simeq\frac{1}{8}\,tpp_{1/1}\).
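Continuing the \(1/2\) sketch above (reusing `mbm`, `rng`, and the activation encoding `a`, `t`, `h`, `m`), the \(2/2\) kernel composes four masked binary multiplications, again with the per-term scalings reinstated so the check is exact:

```python
# Continuation of the 1/2 sketch: 2-bit weights get the same {t, h, m}
# encoding, and the 2/2 product is four masked binary multiplications.
w2 = rng.choice([-1.5, -0.5, 0.5, 1.5], size=64)     # zero-centered 2-bit weights
t_w, h_w, m_w = w2 == 1.5, w2 == 0.5, np.abs(w2) == 1.5

dot = (1.5 * 1.5 * mbm(t_w, t, m_w & m) +
       1.5 * 0.5 * mbm(t_w, h, m_w & ~m) +
       0.5 * 1.5 * mbm(h_w, t, ~m_w & m) +
       0.5 * 0.5 * mbm(h_w, h, ~m_w & ~m))
assert np.isclose(dot, w2 @ a)
```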
## 5 Experimental Evaluation
In this section, we present our experimental evaluation of \(\mathtt{APB}\). Our empirical analysis consists of two different steps. In the former one (Section 5.2), we assess the memory compression capabilities of \(\mathtt{APB}\) by comparing it against i) pruning, ii) quantization, and iii) pruning combined with quantization/binarization. In the latter (Section 5.3), we perform an efficiency analysis by evaluating the performance of our novel low-bits matrix multiplication algorithms against existing solutions on CPU. Moreover, we leverage our routines to compare the execution time of neural networks at different bit-width and compare them to \(\mathtt{APB}\)-compressed models.
### _Experimental Setup_
**Datasets.** We comprehensively evaluate \(\mathtt{APB}\) against several state-of-the-art competitors on two widely adopted datasets for Image Classification, namely CIFAR-10 [22] and ImageNet [5]. The CIFAR-10 dataset consists of a set of \(60\)K \(32\times 32\) images split into \(50\)K and \(10\)K training/test samples respectively, labeled with \(10\) classes. The ImageNet dataset consists of \(1.2\)M training images and about \(50\)K test images labeled with \(1\),\(000\) classes.
**Network Architectures**. We evaluate the performance of APB in compressing ResNet-18/20/56 [16] and VGG-Small [39] on the CIFAR-10 dataset, and ResNet-18/34/50 and WideResNet-50 on ImageNet, which are the benchmark architectures for quantization methods. We apply APB on all the layers of the networks, except the first, the last, and the downsampling layers, unless otherwise specified. The effectiveness of the models is measured in terms of Top1 classification accuracy.
**Training Details**. We apply APB after initializing the network weights with pre-trained models. For each layer \(i\), we set \(\alpha_{i}=\mu_{i}\) and \(\delta_{i}=3\sigma_{i}\), where \(\mu_{i}\) is the mean of \(|w_{i}|\) and \(\sigma_{i}\) is the standard deviation of \(w_{i}\). Under the assumption that \(w_{i}\sim\mathcal{N}(\mu,\,\sigma^{2})\) [13], this ensures a high compression rate at initialization. We employ an SGD optimizer with a cosine annealing learning rate scheduler. On CIFAR-10, we train for \(500\) epochs, with a batch size of \(128\) and a learning rate of \(1\)e-\(2\). On ImageNet, we train for \(100\) epochs when using real-valued activations and \(150\) for \(2\)-bit activations, with batch size \(256\) and learning rate \(1\)e-\(3\). We freeze the values of \(\alpha\) and \(\delta\) when reaching half of the epochs of each training run to allow fine-tuning the surviving values. APB is implemented in PyTorch [32].
**Weight Decay**. The compression aggressiveness of \(\mathtt{APB}\) can be controlled with the weight decay \(\lambda\), namely the \(L_{2}\) penalty applied on the weights of a neural network at
training time. The larger the weight decay, the lower (on average) the absolute values of the trained parameters. As we do not apply weight decay to \(\alpha\) and \(\delta\), the amplitude of the binarization interval is not directly affected by its value. As a consequence, increasing \(\lambda\) forces more weights to fall into the binarization interval, thus increasing the percentage of binary values w.r.t. full-precision ones and delivering higher compression.
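A sketch of how this exclusion might be wired in PyTorch, mirroring the training details above; `model_weights` and `alpha_delta_params` are hypothetical placeholders for the actual parameter lists:

```python
import torch

# Hypothetical parameter lists: network weights vs. the per-layer
# alpha/delta scalars, which are excluded from weight decay so that
# lambda only controls how many weights fall into the binarization interval.
model_weights = [torch.nn.Parameter(torch.randn(256, 256))]
alpha_delta_params = [torch.nn.Parameter(torch.tensor(0.05)),
                      torch.nn.Parameter(torch.tensor(0.15))]

optimizer = torch.optim.SGD(
    [{"params": model_weights, "weight_decay": 1e-4},
     {"params": alpha_delta_params, "weight_decay": 0.0}],
    lr=1e-2)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=500)
```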
### _Compression Performance of APB with 32-bit Activations_
In this section, we evaluate the performance of APB as a memory compression framework. For this purpose, we consider models whose activations are kept in \(32\)-bits. Recall that, even when weights are binarized, maintaining activations in full precision prevents any inference advantage, as discussed in Section 4, paragraph "\(1/32\)".
We compare APB against several different techniques, namely i) quantization, ii) pruning, iii) combination of pruning and binarization. Compression methods adopt two main approaches. The first one, which is adopted for example by pruning techniques, compresses all the network layers. We call these methods _All-Layers Compressors_ (AL). The second one, which is adopted by binarization, low-bits quantization, and pruning + binarization, works by leaving in full precision the first, the last, and the downsample layers if any. We name these solutions _Convolutional Layers Compressors_ (CL).
As already mentioned in Section 5.1, APB belongs to CL methods. Despite that, in our experiments, we compare it against methods belonging to both families. To ease the comparison, we report the results in two different tables: Table 2 reports the results for CL methods, while Table 3 reports the results for AL methods. In Table 2, we report both the CL weights bit width and the AL weights bit width to allow a direct comparison between the two tables. Both for AL and CL, we measure the memory impact as the average _weights bit width_, namely the ratio between the number of bits required to store a model and its number of parameters. When comparing against CL methods, the first, the last, and the downsample layers of the network are excluded from the computation, while they are included when comparing against AL methods.
#### 5.2.1 Compression of Convolutional Layers (CL)

**Comparison with quantization.** We compare APB against state-of-the-art low-bits quantization approaches on CIFAR-10 and ImageNet and report the results in Table 2. All the low-bits quantization techniques belong to the CL category described above. For each quantization scheme, we report the best-performing competitor. The state-of-the-art on CIFAR-10 is SLB [48] for all quantization schemes except the \(1/32\) case on ResNet-20, where the best performance is achieved by IR-Net [33]. On ImageNet, the state-of-the-art solution is EWGS [24] for both ResNet-18 and ResNet-34.
Results show that APB outperforms all other state-of-the-art quantization approaches by providing a superior solution in terms of space and accuracy. Indeed, APB achieves higher accuracy than the \(2\)-bits quantization while saving up to \(2\times\) memory on CIFAR-10 and \(1.4\times\) on ImageNet. In comparison to \(1\)-bit quantization, APB presents a negligible memory overhead on CIFAR-10. In fact, the surviving full-precision values for these models account for about \(0.1\%\) of the total weights. On ImageNet, the number of surviving full-precision weights required by APB is larger, as demonstrated by the \(0.4\) more bits per weight. However, this memory overhead is strongly counterbalanced by \(1.9\) and \(1.0\) Top1 accuracy points of gain for ResNet-18 and ResNet-34 respectively, compared to \(1\)-bit quantization. On ResNet-18, our models even outperform \(2\)-bits quantization by \(0.4\) Top1 accuracy points, despite its reduced memory footprint.
**Comparison with combinations of pruning and binarization.** Recently, some works explore the combination of pruning & binarization [25, 6]. These methods adopt an orthogonal approach with respect to APB. While with APB weights are either binary or full-precision, with existing approaches combining pruning and binarization, the weights are either binary or _zero_. We compare the performance of APB with respect to Multi Prize Ticket (MPT) [6] in terms of memory compression. MPT works by discovering highly effective sparse sub-networks in sufficiently over-parameterized neural networks, without the need for further training. Moreover, they apply the _sign_ function to binarize the surviving weights, thus providing a sparse and binary network.
We argue that sparsification does not provide memory advantages if the non-zero weights are binary. Consider a tensor of size \(n\) with \(nnz\) non-zero entries. The cost of storing a non-zero entry is given by the number of bits for its value, \(b_{v}\), plus the number of bits to save its position, \(b_{p}\). In this case, \(b_{v}=1\), as surviving weights are binarized. In MPT [6], the authors propose a sparsification ratio of \(80\%\) for this architecture. This means that one weight out of five is stored. Storing \(5\) weights in pure binary format only costs \(5\) bits, so if \(b_{p}>4\), the sparse-binary format does not provide memory footprint advantages compared to the pure binary one. Due to the large size of network layers, \(4\) bits are not enough to store the indexes of non-zero values. A more memory-efficient approach would be to store one bit for every weight to indicate whether the corresponding entry is pruned. In this case, the total memory impact would be \(n+nnz\), which we approximate with \(n\) for simplicity.
Table II reports the evaluation results of APB against MPT. APB outperforms MPT both on ResNet-18 on CIFAR-10 and on WideResNet-50 on ImageNet. On ResNet-18 on CIFAR-10, APB matches the memory compression of MPT while delivering a slight (\(0.2\)-point) effectiveness improvement. On WideResNet-50 on ImageNet, the model learned with APB achieves more than \(3\) points of Top1 accuracy improvement w.r.t. MPT, with a memory overhead of only \(0.1\) bits per weight.
#### 5.2.2 Compression of All Layers (AL)
We now compare APB to methods compressing the whole network. These approaches also compress the first, the last, and the downsample convolutional layers. Batch Normalization layers are left in full precision, as their impact is negligible: they account for less than \(0.5\%\) of the network parameters in all the evaluated architectures.
Table II shows that the impact of the layers left out by CL approaches can be considerable. On ImageNet, we observe that the non-compressed parameters account for an overhead of \(2\) bits per weight for ResNet-18 and \(6.2\) for ResNet-50. These numbers are obtained as the difference between the CL and the AL value in the "Weights Bit width CL (AL)" column of Table II. On ResNet-18, \(70\%\) of this overhead is due to the last fully-connected layer. We experimentally verify that these last fully-connected layers can be quantized to 8-bit integers with a simple post-training quantization (PTQ) solution, which converts the weights from a 32-bit floating point representation to 8-bit integers (see footnote 8), without harming the Top1 accuracy of the model. This halves the overhead of the non-compressed layers, reducing it to \(0.9\) bits per weight. For ResNet-50, quantizing the last fully-connected layer to 8 bits reduces the overhead to \(4.3\) bits per weight.
Footnote 8: [https://pytorch.org/docs/stable/quantization.html](https://pytorch.org/docs/stable/quantization.html)
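For reference, a minimal per-tensor PTQ sketch in PyTorch illustrating the kind of float32-to-int8 conversion we refer to (an illustrative symmetric quantizer, not the exact recipe of the PyTorch API cited in footnote 8):

```python
import torch

def ptq_int8(weight: torch.Tensor):
    """Symmetric per-tensor post-training quantization to 8-bit integers.
    Returns the int8 tensor and the scale needed to dequantize it."""
    scale = weight.abs().max() / 127.0
    q = torch.clamp(torch.round(weight / scale), -127, 127).to(torch.int8)
    return q, scale

# Example: quantize the last fully-connected layer of a ResNet-18 (512 -> 1000).
fc = torch.nn.Linear(512, 1000)
q_weight, scale = ptq_int8(fc.weight.data)
reconstruction = q_weight.to(torch.float32) * scale
print((fc.weight.data - reconstruction).abs().max())  # small round-off error
```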
**Comparison with pruning.** We compare the memory compression capabilities of APB against a state-of-the-art pruning technique, namely magnitude-based pruning with learning-rate rewinding [36]. We report the results of this comparison in Table III. In the original article [36], the authors express the compression rate as the inverse of the fraction of surviving weights, i.e., \(5\%\) of surviving weights corresponds to a \(20\times\) compression rate. However, this does not account for the space required to store the indexes of the non-zero entries, so we recompute the weights bit width taking it into account. We compare APB against pruning on ResNet-20 and ResNet-56 on CIFAR-10, and on ResNet-50 on ImageNet. On CIFAR-10, APB vastly outperforms pruning in terms of memory/accuracy trade-off. When compressing ResNet-20, APB delivers a model that is \(3.4\times\) smaller and \(1.2\) points more accurate. On ResNet-56, APB can deliver the same accuracy as the sparsified model with a \(3.3\times\) memory saving, or the same memory compression with a \(1.7\)-point Top1 accuracy improvement. On ImageNet, APB combined with post-training quantization (PTQ) on the last layer matches the performance of pruning with ResNet-50 as the backbone. Furthermore, we perform an experiment where we also apply APB to the downsample convolutional layers of this architecture, marked with \(\dagger\) in Table III; these layers account for \(10\%\) of the total memory impact. The resulting model suffers only \(0.5\) points of accuracy degradation compared to pruning while reducing the memory footprint by up to \(2.5\times\).
**Comparison with combinations of pruning and quantization.** We compare with Bayesian Bits (BB) [42], a state-of-the-art compression approach mixing quantization at different bit widths with pruning. In detail, we compare the weights bit width of their models against our models learned with APB. As in the other experiments, we use ResNet-18 on ImageNet for the comparison and report the results in Table III. As mentioned, we also apply PTQ to the final fully-connected layer of ResNet-18. Compared to BB, APB delivers the same accuracy while saving up to \(2.1\times\) in memory. For the sake of fairness, we point out that activations in BB are quantized to mixed precision (\(2/4\) bits), which can decrease the accuracy of the model. We therefore also report a model compressed with APB whose activations are quantized to \(2\) bits, marked with the \(\ddagger\) symbol in Table III. Even with this reduced activation bit width, the model retains a \(1.5\times\) memory saving with limited performance degradation.
### _Efficiency Evaluation_
In this Section, we provide an extensive efficiency evaluation of our low-bits matrix multiplication routines and of APB-compressed networks. First, we assess the performance of our low-bits matrix multiplication routines, comparing them against state-of-the-art dense matrix multiplication frameworks. Then, we show that our APB compressed networks outperform existing quantization methods in terms of efficiency-accuracy trade-off.
**Low-bits Matrix Multiplication.** We now compare the efficiency of our matrix multiplication (MM) routines, i.e., \(1/1\), \(1/2\), and \(2/2\), against state-of-the-art highly optimized MM CPU libraries. In detail, we measure the achieved GFLOPs at different matrix sizes and compare them to the corresponding theoretical peak performance (\(tpp\)). \(tpp\) is computed according to Equation 14 and is reported in Figure 4 as dotted lines. Experiments are conducted on an Intel Xeon Gold 5318Y CPU, clocked at \(3.4\) GHz and equipped with the AVX-512 instruction set. Our novel inference routines are written in C++ and compiled with the -O3 option using GCC 11.2.0, with single-threaded execution. Figure 4 reports an experimental comparison on square matrices, whose size is reported on the \(x\)-axis. We include in the comparison the performance of dense matrix multiplication as implemented in the oneDNN library, the state-of-the-art solution for dense multiplication, which employs industrial-level optimizations such as Just-In-Time (JIT) code compilation. Figure 4 shows that large-enough matrices allow all the tested algorithms, i.e., our bitwise MM routines and the oneDNN library for dense MM, to achieve their best performance and get close to their \(tpp\). The experimental results show that the \(1/1\), \(1/2\), and \(2/2\) routines reach up to \(93\%\), \(88\%\), and \(85\%\) of their theoretical peak performance, respectively. For large-enough matrices, oneDNN GFLOPs range between \(86\%\) and \(90\%\) of the \(tpp\) of dense MM. The results reported in Figure 4 confirm that our bitwise multiplication routines are properly implemented, given that the gap with \(tpp\) is within \(15\%\). Further optimizations may be employed, such as blocking strategies or tuning the micro-kernel parameters to the CPU architecture; these strategies are commonly applied in the dense case [10]. Although interesting, they go beyond the scope of this work and we leave their investigation as future work.
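To give an intuition of why bitwise MM can be so fast, the sketch below shows the standard inner product underlying \(1/1\) routines: \(\pm 1\) vectors are packed into machine words and multiplied with XOR and popcount (a generic illustration of the technique, not our AVX-512 kernels):

```python
def pack_bits(v):
    """Pack a +/-1 vector into an integer bitmask (bit 1 encodes +1, 0 encodes -1)."""
    word = 0
    for i, x in enumerate(v):
        if x == 1:
            word |= 1 << i
    return word

def binary_dot(a_bits, b_bits, n):
    """Inner product of two +/-1 vectors of length n from their bitmasks:
    matching bits contribute +1, differing bits -1, hence n - 2*popcount(xor)."""
    return n - 2 * bin(a_bits ^ b_bits).count("1")

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
assert binary_dot(pack_bits(a), pack_bits(b), len(a)) == sum(x * y for x, y in zip(a, b))
```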
Square matrices allow MM routines to get close to their \(tpp\) at any bit width. However, in DNN inference, rectangular matrices are much more common than square ones. To evaluate the efficiency of our novel matrix multiplication algorithms on a real use case, we test them on the shapes obtained by applying the _im2col_ transformation to the layers of ResNet-like architectures. We include the \(8/8\) MM in our analysis by employing the implementation provided by the FBGEMM [21] library.
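For reference, the MM shapes induced by _im2col_ can be computed as follows (a standard derivation; the layer parameters in the example are illustrative):

```python
def im2col_mm_shape(c_out, c_in, k, h, w, stride=1, pad=1):
    """A k x k convolution becomes an (M x K) @ (K x N) multiplication:
    M = output channels, K = c_in*k*k, N = number of output positions."""
    h_out = (h + 2 * pad - k) // stride + 1
    w_out = (w + 2 * pad - k) // stride + 1
    return c_out, c_in * k * k, h_out * w_out

# e.g., a 3x3 layer of ResNet-18 on a 56x56 feature map
M, K, N = im2col_mm_shape(c_out=64, c_in=64, k=3, h=56, w=56)
print(M, K, N)  # 64 576 3136 -- strongly rectangular, far from square
```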
Results are reported in Figure 5, where the \(x\)-axis indicates the shapes of the matrices under evaluation and the \(y\)-axis reports the speedup over dense MM. We adopt two different implementations for dense MM. The first is oneDNN, as in Figure 4. The second, BLIS [20], is an open-source GEMM library with assembly-level, architecture-tailored optimizations, whose optimization level is closer to our C++ implementation. Observe that, on the shapes under evaluation, oneDNN is 60% faster than BLIS, which is a remarkable speedup considering that they employ the same algorithm for matrix multiplication.
Figure 5 shows that our novel MM routines are significantly faster than dense multiplication. For example, the \(1/1\) routine delivers up to \(15\times\) speedup compared to oneDNN and up to \(25\times\) compared to BLIS. In this case, we observe a speedup larger than the theoretical estimate of \(10.5\times\). This is caused by the poor performance of dense MM on rectangular-shaped matrices [30]: rectangular matrices do not allow fully masking the memory-bound nature of the MM operation (see footnote 9). This is evident in Figure 5, where, on the shapes under evaluation, oneDNN reaches between \(50\%\) and \(70\%\) of \(tpp\). Conversely, \(1/1\), \(1/2\), and \(2/2\) deliver at least \(85\%\), \(80\%\), and \(75\%\) of their theoretical peak performance, respectively.
Footnote 9: [https://en.algorithmica.org/hpc/algorithms/matmul/](https://en.algorithmica.org/hpc/algorithms/matmul/)
Overall, the forward pass on ResNet-18 achieves \(6.85\times\) and \(1.5\times\) speedups compared to FBGEMM, the state-of-the-art MM library for quantized models on CPU, when employing the \(1/2\) and \(2/2\) MM routines, respectively. To the best of our knowledge, FBGEMM is the best available solution for MM at low bit widths, excluding our novel routines. Our solution makes it possible, for the first time, to efficiently
Fig. 4: GFLOPs performance analysis of our novel bitwise MM routines against the oneDNN library used for dense MM. The analysis is performed on square matrices, i.e., \(M\) = \(K\) = \(N\), where \(M\) is the number of rows of the first operand, \(K\) is the shared dimension, and \(N\) is the number of columns of the second operand.
exploit the plethora of different quantization methods for \(1/2\) and \(2/2\) directly on CPU. Moreover, the underlying algorithms are novel and can be implemented on all available computing platforms, such as GPUs or FPGAs. We also observe that the performance ratios between the different low-bits implementations precisely match the predictions made in Section 4: the \(1/2\) quantization scheme is \(2\times\) slower than \(1/1\), while \(2/2\) is \(8\times\) slower.
**Efficiency/Accuracy Trade-offs.** We now compare the efficiency/accuracy trade-off delivered by different low-bits quantization methods against APB. We perform the analysis using the ResNet-18 architecture on the ImageNet dataset. We compute the inference time as the total MM time. The time spent on the first and the last layer, as well as on the batch-normalization and downsample layers, is ignored, as these layers are not compressed in any of the approaches (as in Table II). The comparison involves networks quantized with the \(1/1\), \(1/2\), and \(2/2\) configurations. The \(1/1\) accuracy is achieved by SA-BNN [27], while \(1/2\) and \(2/2\) are achieved by EWGS [24]; both methods are state-of-the-art low-bits quantization approaches. We do not include methods such as ReActNet [28] or ElasticLink [18], which leverage custom architectures; APB could easily be employed in conjunction with these approaches, which is also part of our future work. In this analysis, APB is used to compress the weights of the network, while activations are quantized to \(2\) bits using the EWGS quantizer. We also experimented with APB with \(1\)-bit quantized activations and achieved worse performance than \(1/2\) quantization; this aspect is detailed in the subsequent paragraph.
The inference time reported for APB is the sum of the time for \(1/2\) MM and the time for sparse-dense MM, as shown in Equation 13. LIBXSMM [17] is used for sparse-dense matrix multiplication.
Figure 6 reports the results of the comparison in terms of speedup w.r.t. both BLIS and oneDNN (\(x\)-axis) and Top1 accuracy (\(y\)-axis). APB largely dominates the \(2/2\) quantization provided by EWGS. Our models deliver \(1.9\times\) speedup with a \(0.7\)-point Top1 accuracy improvement, or \(2.0\times\) speedup at the same accuracy level as the \(2/2\) quantized model. The per-layer sparsity of surviving full-precision values spans the \(96\)-\(99\%\) range. In this range, we experimentally verify that sparse MM is always faster than the \(2/2\) routine; it matches the performance of \(1/2\) at about \(97\%\) sparsity and becomes faster than \(1/1\) at \(99\%\). Moreover, thanks to our novel MM routines, the \(1/2\) models offer more than \(11\times\) speedup with a performance degradation of \(8\%\) w.r.t. the full-precision model. Regarding \(1/1\), its extreme speedup comes at the cost of a significant accuracy degradation, i.e., \(7\) Top1 accuracy points compared to the original full-precision model.
**Comparison with pure binarization.** We experiment with the combination of APB and \(1\)-bit activations. Our models outperform pure binary networks by a clear margin, reaching \(63.0\) Top1 accuracy. However, this effectiveness improvement comes at the price of an elevated number of surviving full-precision weights, reaching \(8\%\) in some cases. The cost of these full-precision weights at inference time matches, and sometimes surpasses, the cost of the binary multiplication. In practice, this means that the \(1/2\) scenario offers a superior efficiency/accuracy trade-off. This is coherent with the network quantization literature, where several works show that reducing the precision of the activations harms the effectiveness of the models more than reducing the precision of the weights.
## 6 Conclusions and Future Work
We proposed APB, a novel compression technique that merges binarization and pruning to exploit the benefits of these two orthogonal techniques. We showed that APB jointly maximizes the accuracy achieved by the network while minimizing its memory impact by identifying an optimal partition of the network parameters between these two sets. Furthermore, we presented two novel matrix multiplication algorithms for extreme low-bits configurations, namely \(1/2\) and \(2/2\), where \(1/2\) refers to binary weights and \(2\)-bit activations, while in the \(2/2\) configuration both weights and activations are quantized to \(2\) bits. We performed a comprehensive experimental evaluation on two widely adopted benchmark datasets, i.e., CIFAR-10 and ImageNet. Experiments show that APB
Fig. 5: Comparison between matrix multiplication routines at different quantization levels. \(M\) is the number of rows of the first operand, \(K\) is the shared dimension, and \(N\) is the number of columns of the second operand.
Fig. 6: Efficiency/Accuracy trade-offs of ResNet-18 on ImageNet using different quantization schemes.
achieves a better accuracy/memory trade-off w.r.t. state-of-the-art compression methods based on i) quantization, ii) pruning, and iii) the combination of pruning and quantization. Our novel matrix multiplication routines deliver a major speedup compared to existing solutions for low-bits matrix multiplication on CPU, ranging from \(6.9\times\) for the \(1/2\) configuration to \(1.5\times\) for the \(2/2\) configuration. Moreover, the experimental results show that APB is \(2\times\) faster than the 2-bit quantized model with no loss in accuracy. On the one hand, our novel matrix multiplication algorithms open up the exploitation of quantized networks on CPU; they may also boost the investigation of the \(1/2\) quantization scenario, given that a very fast inference engine is now available. On the other hand, we show that APB-compressed networks, where binary and full-precision weights are mixed in the same weight tensor, allow for better performance than fixed quantization, e.g., \(2\) bits. The importance of a small portion of full-precision weights, evidenced by pruning techniques in previous work, is stressed again in this new hybrid format.
As future work, we plan to extend APB to automatically identify the optimal quantization scheme for each layer's activations, so as to further improve the efficiency/accuracy trade-off. This is made possible by the availability of efficient matrix multiplication routines covering several quantization schemes. Moreover, we are interested in applying APB in conjunction with highly effective custom architectures, such as ElasticLink [18].
|
2310.02573 | Robust Collision Detection for Robots with Variable Stiffness Actuation
by Using MAD-CNN: Modularized-Attention-Dilated Convolutional Neural Network | Ensuring safety is paramount in the field of collaborative robotics to
mitigate the risks of human injury and environmental damage. Apart from
collision avoidance, it is crucial for robots to rapidly detect and respond to
unexpected collisions. While several learning-based collision detection methods
have been introduced as alternatives to purely model-based detection
techniques, there is currently a lack of such methods designed for
collaborative robots equipped with variable stiffness actuators. Moreover,
there is potential for further enhancing the network's robustness and improving
the efficiency of data training. In this paper, we propose a new network, the
Modularized Attention-Dilated Convolutional Neural Network (MAD-CNN), for
collision detection in robots equipped with variable stiffness actuators. Our
model incorporates a dual inductive bias mechanism and an attention module to
enhance data efficiency and improve robustness. In particular, MAD-CNN is
trained using only a four-minute collision dataset focusing on the highest
level of joint stiffness. Despite limited training data, MAD-CNN robustly
detects all collisions with minimal detection delay across various stiffness
conditions. Moreover, it exhibits a higher level of collision sensitivity,
which is beneficial for effectively handling false positives, which is a common
issue in learning-based methods. Experimental results demonstrate that the
proposed MAD-CNN model outperforms existing state-of-the-art models in terms of
collision sensitivity and robustness. | Zhenwei Niu, Lyes Saad Saoud, Irfan Hussain | 2023-10-04T04:18:23Z | http://arxiv.org/abs/2310.02573v3 | # Robust Collision Detection for Robots with Variable Stiffness Actuation by Using MAD-CNN: Modularized-Attention-Dilated Convolutional Neural Network
###### Abstract
Ensuring safety is paramount in the field of collaborative robotics to mitigate the risks of human injury and environmental damage. Apart from collision avoidance, it is crucial for robots to rapidly detect and respond to unexpected collisions. While several learning-based collision detection methods have been introduced as alternatives to purely model-based detection techniques, there is currently a lack of such methods designed for collaborative robots equipped with variable stiffness actuators. Moreover, there is potential for further enhancing the network's robustness and improving the efficiency of data training. In this paper, we propose a new network, the Modularized Attention-Dilated Convolutional Neural Network (MAD-CNN), for collision detection in robots equipped with variable stiffness actuators. Our model incorporates a dual inductive bias mechanism and an attention module to enhance data efficiency and improve robustness. In particular, MAD-CNN is trained using only a four-minute collision dataset focusing on the highest level of joint stiffness. Despite limited training data, MAD-CNN robustly detects all collisions with minimal detection delay across various stiffness conditions. Moreover, it exhibits a higher level of collision sensitivity, which is beneficial for effectively handling false positives, which is a common issue in learning-based methods. Experimental results demonstrate that the proposed MAD-CNN model outperforms existing state-of-the-art models in terms of collision sensitivity and robustness.
Under consideration at Pattern Recognition Letters
\({}^{a*}\)Zhenwei Niu \({}^{b*}\)Lyes Saad Saoud, \({}^{c}\)Irfan Hussain
\({}^{*}\) The authors contributed equally to this work.
Khalifa University Center for Robotics and Autonomous Systems (KUCARS)
Department of Mechanical Engineering, Khalifa University, PO Box 127788, Abu Dhabi, United Arab Emirates.
\({}^{a}\)[email protected], \({}^{b}\)[email protected], \({}^{c}\)[email protected]
_Keywords:_ Human-robot interaction; Robot collision detection; Deep learning for robots
## 1 Introduction
To enhance industrial productivity and facilitate daily human activities, physical collaboration between humans and robots has grown considerably. Ensuring safety in collaborative tasks is crucial to realize widespread and effective human-robot collaboration [1]. Effectively addressing unexpected collisions during physical human-robot interaction (pHRI) tasks holds great significance.
In collaborative tasks, external sensors are utilized for collision avoidance [2, 3, 4]. However, relying solely on collision avoidance may not ensure safety during pHRI due to rapid and unpredictable relative motions between humans and robots. Thus, rapid and accurate collision detection and response are essential to maintain safety. Haddadin _et al._[5] introduced the collision event pipeline to minimize injury risks during physical contact. The pipeline's crucial first step is rapid and accurate collision detection. Currently, collision detection techniques fall into three main categories: artificial skin-based, model-based, and data-driven methods.
Artificial skins, equipped with deformable materials and tactile sensors, offer accurate collision detection and localization [6; 7; 8; 9]. However, their application to cover the entire robot can be complex and costly. In contrast, model-based approaches utilize identified dynamic models to estimate external torque on robots. The generalized momentum observer (MOB) method has gained significant attention [10; 11; 5; 12]. MOB relies solely on proprioceptive sensors, eliminating the need for acceleration computation. Yet, accurate identification of the dynamic robot model remains a challenge due to nonlinearity and uncertainties like friction, backlash, and elasticity [13; 14]. Moreover, adjusting user-defined thresholds requires considerable effort.
Learning-based collision detection techniques offer an alternative to overcome challenges in purely model-based methods. By training neural network models with collision and/or collision-free data, uncertainties and unmodeled effects within the dynamic model can be accounted for without requiring a user-defined threshold [13]. These techniques involve residual and uncertainty regression or end-to-end approaches, providing binary outcomes indicating collision occurrence [15; 16]. Some studies also utilize joint position and velocity data for uncertainty regression, leading to more precise estimates of external torque by subtracting these uncertainties from the residual [17; 18].
Learning-based collision detection methods are widely used in robots with rigid joints but their suitability for collaborative robots (cobots) with variable stiffness actuation is not fully validated. Cobots with variable stiffness enable close human collaboration, emphasizing the need for accurate and rapid collision detection. However, data collection across all conditions is challenging due to the dynamic changes in stiffness during tasks. The ideal network should be robust to stiffness variations and achieve excellent performance when trained using data from a single stiffness condition, providing reliable collision detection across various scenarios involving cobots.
Collecting sufficient collision data for training the network is challenging and hazardous for both human users and robots. Thus, a data-efficient solution is crucial for robot collision detection, especially given the highly imbalanced nature of the collected dataset: the number of collision cases is substantially lower than that of collision-free cases, underscoring the importance of optimizing data efficiency. Efficient classification methods, like the modularized neural network (MNN) introduced in [14], accurately distinguish between collision and collision-free cases even with limited collision information. However, it is worth noting that training MNN still requires a minimum of 1 hour of collision data and 2 hours of free-motion data.
In this paper, we introduce MAD-CNN, a novel network for collision detection in robots with variable stiffness actuators. MAD-CNN incorporates joint modularization and dilated convolutional neural network (CNN), along with an attention module, as shown in Figure 1. Joint modularization separates individual joints by using joint-specific variables, reducing the parameter search space and enabling efficient training with limited data. Dilated convolution allows the network to capture long-range dependencies in the input signal, enhancing collision detection sensitivity. The attention module
Figure 1: The proposed MAD-CNN architecture for robot collision detection encompasses a comprehensive structure aimed at improving collision sensitivity, robustness, and data efficiency, particularly for robots equipped with variable stiffness actuation. MAD-CNN integrates a dual inductive bias mechanism that incorporates joint modularization and a dilated convolutional neural network, along with an attention module. It is well suited for robots equipped with variable stiffness actuation, as well as for scenarios involving hard-to-collect and imbalanced collision data.
prioritizes relevant components within the joint collision features, further improving the network's sensitivity and robustness to collisions.
### Main Contributions and Organization of the Paper
The main contributions of this paper are as follows:
* Our proposed MAD-CNN integrates dual inductive bias and an attention module to enhance collision sensitivity and robustness in robots with variable stiffness actuators. Remarkably, the model achieves 100% detection accuracy with minimal delay across all stiffness conditions, trained only on data from a single stiffness setting.
* MAD-CNN demonstrates remarkable data efficiency, requiring only 4 minutes of collision data from the highest stiffness setting for training. Despite the limited data, the pre-trained model successfully detects all 516 collisions occurring at random links while the robot executes 30 minutes of motions at different stiffness levels.
* Experimental results validate MAD-CNN's superior performance compared to state-of-the-art methods for collision detection, even with limited training information. The network exhibits enhanced collision sensitivity and robustness, effectively handling false positives that often challenge learning-based approaches. The conducted ablation study provides empirical evidence supporting the efficacy of MAD-CNN's network structure.
The rest of this paper is organized as follows: Section 2 provides an introduction to the platform used in this study. The proposed MAD-CNN model is elaborated in detail in Section 3. Data collection and input processing are described in Section 4. In Section 5, we present the comprehensive evaluation of MAD-CNN, including the ablation study and a comparison with state-of-the-art models. Additionally, the experimental results showcasing the integration of MAD-CNN with the continuous filter (CF) validate the collision detection sensitivity and robustness of our proposed model. Finally, Section 6 concludes this study.
## 2 Platform Introduction
This study utilizes a robot platform equipped with discrete variable stiffness actuators (DVSAs), as depicted in Fig. 2. The working principle of the DVSA is illustrated in Fig. 3, with each joint offering four different levels of physical stiffness. The stiffness of each joint can be swiftly adjusted to predefined levels by engaging or disengaging the springs. Table 1 provides a comprehensive overview of the specific stiffness level values.
In our previous work [19], we introduced a novel collision handling pipeline incorporating DVSA to enhance safety during human-robot interaction. One reaction strategy involves rapidly switching the joint's physical stiffness to the lowest level (stiffness level 1 in Table 1) to mitigate the impact of unexpected collisions. Experimental validation was conducted to evaluate the effectiveness of this strategy.
Figure 2: Experiments platform: a robotic manipulator with discrete variable stiffness actuators (DVSAs). It has four levels of intrinsic physical stiffness, and the stiffness can be rapidly adjusted online by controlling the engagement of the springs.
This study specifically focuses on collision detection for collaborative robots equipped with variable stiffness actuation, and we use this platform to examine the effectiveness of our proposed model in this context.
## 3 The Proposed MAD-CNN Model
The proposed network structure, MAD-CNN, depicted in Fig. 1, is specifically designed to enhance data training efficiency and collision detection robustness for collaborative robots equipped with variable stiffness actuation. Efficient utilization of collision data is crucial due to the costly and laborious nature of its collection process. Additionally, the highly imbalanced class distribution in the dataset, where collision cases are significantly fewer than collision-free cases, highlights the importance of optimizing data efficiency. Therefore, employing efficient classification methods to accurately distinguish between collision and collision-free cases, even with limited collision information [14], is essential.
Collecting collision data for all stiffness conditions is particularly challenging and costly when dealing with robots equipped with variable stiffness actuators. Thus, there is a need to develop a network architecture that remains robust to changes in the physical stiffness of the joint. MAD-CNN addresses these challenges and aims to enhance collision detection performance while effectively utilizing limited data.
### Network Structure
MAD-CNN uses a dual inductive bias in the network structure. The first inductive bias, the modularized joint network structure, enables the extraction of collision information for each joint through the utilization of local joint variables. This bias achieves the decoupling of individual joints by independently feeding joint-specific variables into their corresponding modular networks, which decreases the search space for network parameters [14].
The second inductive bias is the utilization of dilated convolutional neural networks. Dilated convolution, as described in [20], is a variant of the convolutional layer that offers an expanded receptive field while maintaining the same number of parameters. The concept is illustrated in Fig. 4, where the gaps within the convolutional kernel introduce an increased spacing between sampled input points. As a result, the network becomes capable of capturing long-range dependencies within the input signal without sacrificing resolution or computational efficiency. The resulting features better discriminate between collision and collision-free cases, which enhances the network's sensitivity to detecting collisions,
| Spring with high stiffness (\(K_{h}\)) | Spring with medium stiffness (\(K_{m}\)) | Stiffness level | Equivalent stiffness (\(K_{eq}\)) [N.m/rad] |
| --- | --- | --- | --- |
| 0 | 0 | 1 | 6 |
| 0 | 1 | 2 | 50 |
| 1 | 0 | 3 | 160 |
| 1 | 1 | 4 | 246 |

Table 1: Joint Stiffness Levels of the Experiment Setup
Figure 3: The working principle of the DVSA involves adjusting the stiffness by manipulating the number of active springs through the engagement or disengagement of the clutches.
thereby further improving its collision detection capabilities. The effects of both inductive biases are validated in Section 5.1.
Following the application of the modularized dilated joint networks, the extracted features are concatenated and fed into a multi-head self-attention module. The self-attention module is a critical architectural component of the Transformer framework [21]. In the context of collision detection, it allows the network to prioritize the most pertinent components within the concatenated joint collision features, aiding the classification process by assigning higher weights to the relevant parts. For a given input \(I\in\mathbb{R}^{d\times n}\), where \(d\) represents the input dimension and \(n\) denotes the input length, the self-attention mechanism first performs a linear projection to derive the query (\(Q\)), key (\(K\)), and value (\(V\)) components, as outlined below:
\[Q=W_{q}I,\quad K=W_{k}I,\quad\text{and}\quad V=W_{V}I, \tag{1}\]
where \(W_{q}\) and \(W_{k}\in\mathbb{R}^{s1\times d}\), \(W_{V}\in\mathbb{R}^{s\times d}\) are learnable weight matrices. Then the scaled Dot-product is processed to get the attention score:
\[Z=\text{softmax}(\frac{QK^{T}}{\sqrt{d_{k}}})V, \tag{2}\]
where \(d_{k}\) is the dimension of the queries and keys; the scaling serves to mitigate the vanishing-gradient issue that may arise when applying the softmax function. Afterwards, the multi-head outputs are concatenated and linearly projected, so that several levels of correlation information can be extracted and used for the final classification.
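A minimal PyTorch sketch of the single-head attention of Equations (1) and (2) follows (the 64-dimensional input matches the concatenated joint features described below; bias-free projections are an assumption):

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Scaled dot-product self-attention over the concatenated joint features."""
    def __init__(self, d: int, d_k: int):
        super().__init__()
        self.W_q = nn.Linear(d, d_k, bias=False)  # query projection (Equation 1)
        self.W_k = nn.Linear(d, d_k, bias=False)  # key projection
        self.W_v = nn.Linear(d, d_k, bias=False)  # value projection
        self.d_k = d_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, d) -- n tokens of dimension d
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        scores = q @ k.transpose(-2, -1) / self.d_k ** 0.5  # (batch, n, n)
        return torch.softmax(scores, dim=-1) @ v            # Equation (2)

att = SelfAttention(d=64, d_k=64)
z = att(torch.randn(8, 1, 64))  # e.g., the 64-dim concatenated joint features
```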
### Network Structure Details
The modularized joint network consists of two 1D dilated convolutional layers. The first layer has 16 filters, and the second layer has 32 filters. Both layers use a kernel size of 3 and a stride of 1. The dilation factor is set to 4 for the first layer and 8 for the second layer. A max-pooling layer follows each convolutional layer. The outputs from the max-pooling layers are flattened and connected to a fully-connected layer with 32 hidden neurons, using the Gaussian error linear unit (GELU) activation function for improved accuracy. The resulting outputs of this layer, one per joint, are concatenated to form a combined representation with 64 hidden neurons. This concatenated representation then goes through a self-attention module with 1 head. The output of the self-attention module is connected to another fully-connected layer with 64 hidden neurons, employing the GELU activation function. The output layer uses the softmax function for classification, providing prediction scores between 0 and 1 for the "no collision" and "collision" classes. A forward pass of the proposed network takes only 0.046 \(\mu s\), enabling real-time collision detection on hardware such as an Intel i9-19700H with an Nvidia GeForce RTX 3070 Ti.
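The per-joint branch described above can be sketched in PyTorch as follows (the padding values and the pooling window of 2 are assumptions, as they are not fully specified in the text):

```python
import torch
import torch.nn as nn

class JointModule(nn.Module):
    """One per-joint branch: two 1D dilated convolutions + max pooling + FC."""
    def __init__(self, in_channels: int = 2, window: int = 11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=3, stride=1, dilation=4, padding=4),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, stride=1, dilation=8, padding=8),
            nn.MaxPool1d(2),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened size from a dummy window
            flat = self.features(torch.zeros(1, in_channels, window)).shape[1]
        self.fc = nn.Sequential(nn.Linear(flat, 32), nn.GELU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, 11) -- torque and velocity over the time window
        return self.fc(self.features(x))

# In the real model each joint receives its own torque/velocity window.
joints = [JointModule(), JointModule()]
x = torch.randn(8, 2, 11)
h = torch.cat([m(x) for m in joints], dim=1)  # 64-dim concatenated features
```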
Figure 4: An example of the 1D dilated convolution layers with dilation 1, 2, and 4, respectively.
## 4 Data Collection and Network Training
### Data Collection
The data collection process utilized the robot platform described in Section 2 for conducting experiments. During this process, the robot executed random point-to-point acceleration-deceleration movements within its joint space without implementing any collision reaction strategy. A human operator intentionally collided with the robot at various random positions and instances using a collision tool (Fig. 5), specifically a limit switch, to record the ground truth data. A value of 1 represents a collision, and 0 indicates no collision. The joint space dataset (joint torque \(\tau\), joint velocity \(\dot{\theta}\)) was recorded at a sampling rate of 1000 Hz. The data collection considered two scenarios: collision and collision-free, each having three cases representing different levels of physical stiffness (stiffness levels 2, 3, and 4). The lowest stiffness level (stiffness level 1) is exclusively assigned to the safety configuration, following our previous work [19], and was not involved in the data collection process.
Table 2 presents the durations of the training and testing data scenarios. For training, only collision data from the highest stiffness level (level 4) lasting four minutes was used. The remaining data collected from different stiffness levels were used to evaluate the network's robustness to stiffness changes. The testing phase involved a total of 30 minutes of collision motions (516 collisions) and 45 minutes of collision-free motions.
### Input Processing
All the collected data are normalized to fit within the range [0, 1]. The time-series data are segmented using a shifting time window that includes the current data point and the ten past points, sampled with an interval of \(t_{I}=0.01\) s. This results in a time window duration of \(t_{w}=(11-1)\times 0.01=0.1\) s. The utilization of segmented time-series data with a time window helps mitigate the influence of sensor noise, as suggested by Kim _et al._[14]. The choice of eleven time steps was determined through an evaluation process in which the number of steps was progressively increased until a relatively superior performance was achieved.
The input to MAD-CNN is shaped as \(X=[X^{1},X^{2}]\), where \(X^{i}\) represents the input data corresponding to the \(i\)th joint. These \(X^{i}\) data are constructed by combining the segmented data, as depicted in Figure 6, where \(x^{i}(t)\) includes the normalized observed signals, consisting of the joint torque \(\tau\) and joint velocity \(v\):
\[X^{i}=[x^{i}(t-100),x^{i}(t-90),...,x^{i}(t-10),x^{i}(t)], \tag{3}\]
\[x^{i}(t)=[\tau,v]. \tag{4}\]
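A sketch of the input windowing of Equation (3) for a 1 kHz signal (an illustrative NumPy version; variable names are ours):

```python
import numpy as np

def make_window(signal: np.ndarray, t: int, steps: int = 11, stride: int = 10):
    """Return [x(t-100), x(t-90), ..., x(t-10), x(t)] for a (T, 2) signal
    holding normalized joint torque and velocity sampled at 1000 Hz."""
    idx = [t - stride * k for k in range(steps - 1, -1, -1)]
    return signal[idx]  # shape: (11, 2)

signal = np.random.rand(5000, 2)      # 5 s of (torque, velocity) samples
window = make_window(signal, t=4999)  # 0.1 s window ending at the current sample
print(window.shape)                   # (11, 2)
```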
| Data type | State | Joint stiffness | Time |
| --- | --- | --- | --- |
| Training | Collision | Highest level | 4 min |
| Testing | Collision | Highest level | 10 min |
| Testing | Collision | 3rd level | 10 min |
| Testing | Collision | 2nd level | 10 min |
| Testing | No collision | Highest level | 15 min |
| Testing | No collision | 3rd level | 15 min |
| Testing | No collision | 2nd level | 15 min |

Table 2: Training and testing dataset and the corresponding duration
Figure 5: The limit switch used to collide with the manipulator and record the ground truth collision labels. It outputs binary results: 1 - collision, 0 - no collision.
### Network Training Details
Training data in Table 2 are randomly shuffled prior to training. The network is implemented in Python with PyTorch 1.13.1. The mini-batch size is 1000, and the number of epochs is set to 30. The optimizer is Adam with default parameters (\(\beta_{1}\) = 0.9, \(\beta_{2}\) = 0.999, \(\epsilon\) = 1e-07) and a learning rate of \(lr\) = 1e-3. We use the binary cross-entropy loss (BCELoss) as the loss function:
\[\text{BCELoss}=-(y*log(\hat{y})+(1-y)*log(1-\hat{y})), \tag{5}\]
where \(y\) represents the ground truth collision label, while \(\hat{y}\) denotes the network's prediction.
## 5 Results
To evaluate the proposed approach and conduct a fair comparison with state-of-the-art methods, we use three criteria: detection failure number (DFn), detection delay (DD) in milliseconds (ms), and false positive number (FPn), consistent with the evaluation framework in [14]. The collision tool generates a "1" signal when a collision occurs, and the network predicts collisions using an output threshold of 0.5. DFn counts the collisions that the network fails to classify correctly, while FPn counts the instances in which the network wrongly classifies non-collision events as collisions. Minimizing FPn is crucial for operational efficiency, as false positives may trigger unnecessary stop actions. Successive FPs within one time step are counted as a single FP.
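One simple way to compute the three criteria from binary prediction and ground-truth streams is sketched below (our own illustrative formalization; the exact event-matching rules of [14] may differ in details):

```python
import numpy as np

def detection_metrics(pred: np.ndarray, truth: np.ndarray, dt_ms: float = 1.0):
    """Compute DFn, mean DD (ms), and FPn from binary prediction and
    ground-truth streams sampled at 1 kHz (dt_ms = 1.0)."""
    def runs(x):
        d = np.diff(np.pad(x, 1))
        return zip(np.flatnonzero(d == 1), np.flatnonzero(d == -1))

    dfn, delays = 0, []
    for s, e in runs(truth):                  # each true collision event
        hits = np.flatnonzero(pred[s:e])
        if hits.size == 0:
            dfn += 1                          # detection failure
        else:
            delays.append(hits[0] * dt_ms)    # delay w.r.t. collision onset
    # false positives: predicted runs that never overlap a true collision
    fpn = sum(1 for s, e in runs(pred) if truth[s:e].sum() == 0)
    dd = float(np.mean(delays)) if delays else float("nan")
    return dfn, dd, fpn

truth = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 0])
pred  = np.array([0, 0, 0, 1, 1, 0, 1, 0, 0, 0])
print(detection_metrics(pred, truth))  # (1, 1.0, 1)
```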
To address FP, a continuous filter (CF) is commonly used, as highlighted in [14; 13]. While CF can reduce FP, it may increase DFn and DD, creating a trade-off between collision detection sensitivity and false positive occurrence. Section 5.2 investigates the impact of integrating CF into MAD-CNN to assess its efficacy in enhancing collision detection robustness. An ablation study in Section 5.1 evaluates the contributions of various components within MAD-CNN, verifying the superiority of the proposed network architecture. Additionally, in Section 5.3, we perform a comparative analysis of collision detection performance between MAD-CNN and state-of-the-art models.
Figure 6: The time-series signals \(\mathbf{x}(t)\) are segmented by dividing them into consecutive non-overlapping windows of size 101, with a sampling interval of 10. These segmented windows of \(\mathbf{x}(t)\) are then sampled and arranged as the input data fed into the network.
Figure 7: Explanation of Detection Delay (DD), Detection Failure (DF) and False Positive (FP). The dots represent the samples.
### Ablation Study
The ablation study investigates the effects of modularization, dilated convolution, and the attention module on collision detection performance. Five cases are examined:
* MAD-CNN: Incorporates all components, including modularization, dilated convolution, and the attention module.
* M-CNN: Utilizes joint modularization without dilation or the attention module.
* MD-CNN: Excludes the attention module while retaining modularization and dilated convolution.
* MA-CNN: Excludes dilated convolution but retains both modularization and the attention module.
* AD-CNN: Eliminates joints modularization and uses all joint signals together as input to the network.
Table 3 presents the corresponding results, highlighting the individual contributions of these components to the overall effectiveness of the model.
* MAD-CNN outperforms other models with its combination of modularization, dilated convolution, and attention module, resulting in superior collision sensitivity. It achieves zero detection failure and minimal detection delay. Although DD increases with decreasing stiffness, MAD-CNN shows a smaller increase
| Joint stiffness | Metric | MAD-CNN | M-CNN | MD-CNN | MA-CNN | AD-CNN |
| --- | --- | --- | --- | --- | --- | --- |
| Components | Modularization | ✓ | ✓ | ✓ | ✓ | |
| Components | Dilation | ✓ | | ✓ | | ✓ |
| Components | Attention | ✓ | | | ✓ | ✓ |
| Highest level | DFn | **0/172** | 0/172 | 0/172 | 4/172 | 0/172 |
| Highest level | DD (ms) | **11.3095** | 15.3491 | 15.1065 | 18.1818 | 13.6488 |
| Highest level | FPn | 183 | **42** | 74 | 543 | 158 |
| 3rd level | DFn | **0/172** | 2/172 | 1/172 | 10/172 | 0/172 |
| 3rd level | DD (ms) | **12.4093** | 17.6 | 16.3508 | 19.7592 | 14.4011 |
| 3rd level | FPn | 261 | **102** | 116 | 135 | 167 |
| 2nd level | DFn | **0/172** | 24/172 | 1/172 | 30/172 | 6/172 |
| 2nd level | DD (ms) | **12.4319** | 18.6283 | 16.7894 | 21.4647 | 16.0304 |
| 2nd level | FPn | 137 | **78** | 94 | 120 | 123 |
| Total | DFn | **0/516** | 26/516 | 2/516 | 44/516 | 6/516 |
| Total | DD (ms) | **12.0502** | 17.1924 | 16.0822 | 19.8019 | 14.6934 |
| Total | FPn | 581 | **222** | 284 | 798 | 448 |

Table 3: Results for the ablation study of MAD-CNN. The component rows follow the case definitions given above.
Figure 8: The study of continuous filter (CF) integrated with MAD-CNN. We explore various CF durations ranging from 0ms to 27ms to assess the robustness of MAD-CNN for collision detection.
compared to other models. However, it does have a higher false positive rate, which can be effectively handled with continuous filters (CF), as discussed in Section 5.2.
* Modularization reduces the parameter search space, making training more data-efficient. It also helps minimize detection failures and detection delay, as shown by comparing MAD-CNN with AD-CNN. However, using modularization alone (M-CNN) does not achieve satisfactory collision sensitivity.
* Dilation significantly improves collision detection in MAD-CNN compared to MA-CNN, enhancing sensitivity to collisions. This is due to the capability of dilated convolution to extract discriminative features, aiding in robust collision detection. However, dilated convolution may increase false positives at lower stiffness levels, possibly due to reflecting system noise more prominently. Similar observations are seen in comparisons between M-CNN and MD-CNN.
* The attention module improves collision detection by reducing DFn and DD, enhancing the network's sensitivity to collisions by prioritizing relevant features. However, it also increases false positives, likely due to sensitivity to system noise. Combining modularization and attention without dilation results in unfavorable performance compared to other network structures examined.
### False Positive Handling with Continuous Filter
In the previous section, MAD-CNN outperforms other network structures in terms of detection failure number and detection delay but results in a higher false positive number. To address this issue, the current section focuses on investigating the impact of employing a continuous filter (CF) in effectively handling false positives. The CF can be understood as a temporal filtering mechanism in which the collision label is assigned only when the network prediction extends beyond a specific duration. The utilization of a CF has demonstrated its efficacy in reducing FPs while typically resulting in increased DFn and DD. For instance, in the study conducted by Kim _et al_. [14], the implementation of a 3-millisecond CF in their MNN model led to a significant 20.5% increase in DD. Our proposed model, MAD-CNN, exhibits superior performance in terms of DFn and DD compared to other models but does have a higher FP rate. In this study, we explore various CF durations ranging from 0ms to 27ms to assess the robustness of MAD-CNN for collision detection and determine an optimal CF duration for our model. Figure 8 illustrates the combined results across all stiffness levels.
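A sketch of the continuous filter as described above: a collision label is emitted only after the raw prediction has persisted for the full filter duration (parameter names are ours):

```python
import numpy as np

def continuous_filter(pred: np.ndarray, duration_ms: float, dt_ms: float = 1.0):
    """Suppress collision labels until the raw prediction has stayed
    at 1 for `duration_ms` consecutive milliseconds."""
    need = int(round(duration_ms / dt_ms))
    filtered = np.zeros_like(pred)
    run = 0
    for i, p in enumerate(pred):
        run = run + 1 if p else 0  # length of the current run of 1s
        if run >= need:
            filtered[i] = 1
    return filtered

raw = np.array([0, 1, 1, 0, 1, 1, 1, 1, 1])
print(continuous_filter(raw, duration_ms=3))  # [0 0 0 0 0 0 1 1 1]
```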
As the duration of the continuous filter (CF) increases, the number of false positives (FP) decreases significantly. Notably, with a large CF duration of 27 ms, only 2 false positives are observed; both occur when the manipulator is switched on, which can easily be avoided through pre-programming. One interesting finding is that increasing the CF duration does not lead to a dramatic rise in detection delay (DD) or detection failure number (DFn). The maximum increase in DD is only 3.9%, observed when using a 1 ms CF duration. Additionally, even with a considerably longer CF duration of 27 ms, DD remains relatively stable at approximately 12.7 ms.
On the other hand, in terms of DFn, MAD-CNN achieves perfect detection accuracy for CF durations ranging from 0 ms to 15 ms. As the CF duration increases further, DFn increases as well. Nevertheless, the model still maintains a decent level of detection accuracy, with only 6 detection failures out of 516 collisions (98.8% detection accuracy), even when employing a significantly longer CF duration of 27 ms. The findings indicate that MAD-CNN exhibits excellent collision sensitivity and remains robust when used in conjunction with continuous filters. In the subsequent section, we adopt a 15 ms CF duration, which achieves favorable performance across all three evaluation criteria.
### Performance Comparison to Existing Methods
In this section, we compare MAD-CNN with two existing learning-based approaches: MNN and the 1D CNN method. MNN has shown excellent performance in collision detection for the Doosan robot M0609, and it can potentially be fine-tuned for other robots of the same type. The 1D CNN method proposed by Park _et al_. represents an end-to-end learning-based technique for collision detection. We evaluate the three models using a continuous filter (CF) value of 15 ms to assess their sensitivity to collisions and examine the robustness of CF in the evaluation process. By conducting this comparative analysis, we aim to demonstrate the superiority of MAD-CNN in collision detection for collaborative robots with variable stiffness actuators. The evaluation focuses on key metrics such as detection failure number (DFn), detection delay (DD), and false positive number (FPn) to assess each model's sensitivity and robustness. The results will provide insights into MAD-CNN's effectiveness, showcasing its advantages over existing methods and highlighting its potential for real-world applications in collaborative robot environments.
Table 4 presents the results of the comparative analysis among MAD-CNN, MNN, and the 1D CNN method. MAD-CNN outperforms the other methods by successfully detecting all collisions with a minimal detection delay of 12.0503 ms. The 1D CNN method also shows reasonably good collision sensitivity, missing only 26 out of 516 collisions with a detection delay of 23.5966 ms. However, MNN demonstrates the poorest performance across all evaluation criteria, likely due to insufficient training data: while MNN requires at least 1 hour of collision data and 2 hours of non-collision data for training, MAD-CNN was trained with only 4 minutes of collision data. The utilization of CF reduces false positives (FPs) for all three methods. However, it also increases the detection failure number (DFn) for MNN and notably increases both DFn and DD for the 1D CNN method. In contrast, MAD-CNN shows greater robustness when incorporating CF: even with a large CF duration, MAD-CNN successfully detects all collisions and experiences only a 6.4% increase in DD. Additionally, MAD-CNN exhibits a substantial decrease in FPn with the use of CF.
One remarkable finding is that despite being trained with a limited 4-minute collision dataset focused solely on the highest stiffness condition, MAD-CNN demonstrates robustness to stiffness changes. It achieves 100% detection accuracy and only a slight increase in detection delay when tested under the 3rd and 2nd stiffness conditions. Overall, these results demonstrate the superior performance and robustness of MAD-CNN compared to existing methods for collision detection in robots equipped with variable stiffness actuators. MAD-CNN's ability to achieve accurate collision detection with limited training data and its adaptability to different stiffness conditions make it a promising approach for real-world applications in collaborative robot environments.
Despite the excellent collision detection performance of MAD-CNN in robots with variable stiffness actuators, there are opportunities for future research in the following areas. Firstly, evaluating the efficacy of MAD-CNN in commercial robots equipped with multiple rigid joints would enhance its generalizability and applicability across a broader range of robotic systems. Additionally, exploring unsupervised learning methods such as autoencoders (AE) [22; 23] or generative adversarial networks (GAN) [24] could alleviate the reliance on labeled collision data during training. These directions hold promise for advancing collision detection methods for various robotic platforms.
| Joint stiffness | Method | DFn | DD (ms) | FPn |
| --- | --- | --- | --- | --- |
| Highest level | MAD-CNN | 0/172 | 11.3095 | 183 |
| Highest level | MAD-CNN w/ CF | 0/172 | 12.4404 | 14 |
| Highest level | MNN | 18/172 | 54.5498 | 6 |
| Highest level | MNN w/ CF | 144/172 | 42.2592 | 0 |
| Highest level | 1D CNN | 4/172 | 22.3113 | 58 |
| Highest level | 1D CNN w/ CF | 11/172 | 22.98125 | 3 |
| 3rd level | MAD-CNN | 0/172 | 12.4093 | 261 |
| 3rd level | MAD-CNN w/ CF | 0/172 | 13.1111 | 17 |
| 3rd level | MNN | 90/172 | 35.4512 | 737 |
| 3rd level | MNN w/ CF | 149/172 | 21.3478 | 29 |
| 3rd level | 1D CNN | 6/172 | 24.6325 | 92 |
| 3rd level | 1D CNN w/ CF | 12/172 | 24.8062 | 0 |
| 2nd level | MAD-CNN | 0/172 | 12.4319 | 137 |
| 2nd level | MAD-CNN w/ CF | 0/172 | 12.9053 | 1 |
| 2nd level | MNN | 167/172 | 68.7500 | 53 |
| 2nd level | MNN w/ CF | 172/172 | n/a | 0 |
| 2nd level | 1D CNN | 16/172 | 23.8461 | 72 |
| 2nd level | 1D CNN w/ CF | 28/172 | 24.2708 | 0 |
| Total | MAD-CNN | 0/516 | 12.0503 | 581 |
| Total | MAD-CNN w/ CF | 0/516 | 12.8189 | 32 |
| Total | MNN | 275/516 | 52.917 | 796 |
| Total | MNN w/ CF | 465/516 | 31.8035 | 29 |
| Total | 1D CNN | 26/516 | 23.5966 | 222 |
| Total | 1D CNN w/ CF | 51/516 | 24.0194 | 3 |

Table 4: Performance of Existing Methods and MAD-CNN. DFn and DD are measured on the collision sequences (10 min per stiffness level); FPn is measured on the collision-free sequences (15 min per level).
## 6 Conclusions
This paper introduces MAD-CNN, a new network structure designed for collision detection in robots with variable stiffness actuators. The proposed network incorporates a dual inductive bias mechanism and an attention module to enhance the sensitivity and robustness of collision detection. Remarkably, the network achieves high performance with minimal training data requirements. Only 4 minutes of collision data from the highest stiffness condition are needed for training. Experimental results demonstrate the network's robustness to stiffness changes, as it successfully detects collisions with minimal delay across various stiffness conditions, even when trained solely on data from a single stiffness condition. The superior performance of MAD-CNN makes it well suited for robots equipped with variable stiffness actuation, as well as for scenarios involving hard-to-collect and imbalanced collision data. Moreover, comparative analysis reveals the clear superiority of MAD-CNN over state-of-the-art learning-based models in terms of collision detection sensitivity, robustness, and data efficiency.
## Acknowledgments
This work was supported by Khalifa University of Science and Technology under Award No. RC1-2018-KUCARS and FSU-2021-019.
|
2310.01683 | Commutative Width and Depth Scaling in Deep Neural Networks | This paper is the second in the series Commutative Scaling of Width and Depth
(WD) about commutativity of infinite width and depth limits in deep neural
networks. Our aim is to understand the behaviour of neural functions (functions
that depend on a neural network model) as width and depth go to infinity (in
some sense), and eventually identify settings under which commutativity holds,
i.e. the neural function tends to the same limit no matter how width and depth
limits are taken. In this paper, we formally introduce and define the
commutativity framework, and discuss its implications on neural network design
and scaling. We study commutativity for the neural covariance kernel which
reflects how network layers separate data. Our findings extend previous results
established in [55] by showing that taking the width and depth to infinity in a
deep neural network with skip connections, when branches are suitably scaled to
avoid exploding behaviour, result in the same covariance structure no matter
how that limit is taken. This has a number of theoretical and practical
implications that we discuss in the paper. The proof techniques in this paper
are novel and rely on tools that are more accessible to readers who are not
familiar with stochastic calculus (used in the proofs of WD(I))). | Soufiane Hayou | 2023-10-02T22:39:09Z | http://arxiv.org/abs/2310.01683v1 | # Commutative Width and Depth Scaling
###### Abstract
This paper is the second in the series _Commutative Scaling of Width and Depth_ (\(\overleftrightarrow{\mathbf{WD}}\)) about the commutativity of infinite width and depth limits in deep neural networks. Our aim is to understand the behaviour of neural functions (functions that depend on a neural network model) as width and depth go to infinity (in some sense), and eventually identify settings under which commutativity holds, i.e. the neural function tends to the same limit no matter how the width and depth limits are taken. In this paper, we formally introduce and define the commutativity framework, and discuss its implications on neural network design and scaling. We study commutativity for the neural covariance kernel, which reflects how network layers separate data. Our findings extend previous results established in [55] by showing that taking the width and depth to infinity in a deep neural network with skip connections, when branches are suitably scaled to avoid exploding behaviour, results in the same covariance structure no matter how that limit is taken. This has a number of theoretical and practical implications that we discuss in the paper. The proof techniques in this paper are novel and rely on tools that are more accessible to readers who are not familiar with stochastic calculus (which was used in the proofs of \(\overleftrightarrow{\mathbf{WD}}\)(I)).
## 1 Introduction
The success of large language and vision models has recently amplified an existing trend of research on large-size neural networks. There are generally two ways to increase the size of a neural network model: increasing the width, for instance the number of neurons in hidden layers in a fully-connected network, the number of channels in a convolutional network, or the number of attention heads in a transformer architecture; and increasing the depth of the network, i.e. the number of layers. A suitable approach to understand the behaviour of large neural networks is by analyzing some pre-defined quantity as the width and/or depth tend to infinity. While the width limit by itself is now relatively well understood in different contexts [2, 6, 9, 15, 37], the depth limit and the interaction between the two have not been studied as much. In particular, given some pre-defined quantity of interest that depends on the network model, a basic question is: _do these two limits commute?_ (in the sense that the behaviour of the quantity of interest as width and depth go to infinity does not change depending on the order in which these limits are taken). One statistical quantity of interest is the _neural covariance_ kernel which reflects how layers in a neural network model separate input data. In this context, recent literature suggests that, at initialization,
in certain kinds of multi-layer perceptrons (MLPs) or residual neural networks (ResNets) with scaled main branch, the depth and width limits generally do _not_ commute [45, 57]; this would imply that in practice, such networks would behave quite differently depending on whether width is much larger than depth or the other way around. However, in the case of ResNets with suitably scaled residual blocks, recent work [55] showed that, to the contrary, at initialization, for a ResNet with blocks scaled the natural way so as to avoid blowing up the output, the width and depth limits _do commute_. An interesting practical implication of this result is that it justifies prior calculations that take the width limit first, then depth, to understand the behavior of deep residual networks, such as prior works in the signal propagation literature [6, 7, 33].
In this work, we introduce and formalize the framework of commutativity of the width and depth limits and generalize (and improve) existing results on the covariance from [55] for arbitrary sequences of scaling factors; these sequences are used to scale the residual blocks so as to avoid exploding behaviour as depth grows. Table 1 shows the difference between this work and the previous work in the \(\overleftrightarrow{\mathbf{WD}}\) series. We discuss the theoretical and practical implications of commutativity by addressing the natural question: _why should we care about commutativity at all?_ (see Section 3).
In addition to the significance of the results and the new framework, the mathematical novelty of this paper lies in the proof techniques: in contrast to [55] where the depth limit is taken first (fixing the width), followed by the width limit, we first take the width to infinity this time, which is a more conventional approach in the theory of signal propagation in deep networks. As such, the proof techniques in this paper can be seen as 'orthogonal' to the machinery developed in [55], and are more accessible to readers who are not familiar with stochastic calculus. Our results provide new insights into the behavior of deep neural networks with general depth scaling factors with implications on the design and analysis of these networks.
All the proofs are deferred to the appendix and referenced after each result. Empirical evaluations are provided to illustrate the theoretical results.
## 2 Related Work
The theoretical analysis of randomly initialized neural networks with an infinite number of parameters has yielded a stream of interesting results, both theoretical and practical. A majority of this research has concentrated on examining the scenario in which the width of the network is taken to infinity while the depth is considered fixed. However, in recent years, there has been a growing interest in exploring the large depth limit of these networks. In this overview, we present a summary of existing results on this topic, though it is not exhaustive. A more comprehensive literature review is provided in Appendix A.

| Paper | Block scaling | Neural functions | Proof techniques |
| --- | --- | --- | --- |
| \(\overleftrightarrow{\mathbf{WD}}\)(I) ([55]) | \(1/\sqrt{depth}\) | Neural covariance, neural distribution | Tools from stochastic calculus |
| \(\overleftrightarrow{\mathbf{WD}}\)(II) (this work) | General block scaling | Neural covariance | Standard concentration results |

Table 1: Commutative Width and Depth Scaling series. Block scaling refers to a scaling factor in front of the residual block. Neural functions are formally defined in Section 3.
\(\bullet\)_Infinite-Width Limit:_ The study of the infinite-width limit of neural network architectures has been a topic of significant research interest, yielding various theoretical and algorithmic innovations. These include initialization methods, such as the Edge of Chaos [5, 6, 7, 15], and the selection of activation functions [15, 35, 50, 52], which have been shown to have practical benefits. In the context of Bayesian analysis, the infinite-width limit presents an intriguing framework for Bayesian deep learning, as it is characterized by a Gaussian process prior. Several studies (e.g. [2, 9, 10, 25, 29]) have investigated the weak limit of neural networks as the width goes to infinity, and have demonstrated that the network's output converges to a distribution modeled by a Gaussian process. Bayesian inference utilizing this "neural" Gaussian process has been explored in [9, 33]. 1
Footnote 1: It is worth mentioning that kernel methods such as NNGP and NTK significantly underperform properly tuned finite-width network trained using SGD, see [51].
\(\bullet\)_Infinite-Depth Limit:_ The infinite-depth limit of randomly initialized neural networks is a less explored topic compared to the infinite-width limit. Existing results can be categorized depending on how the two limits are taken. For instance, in the case of sequential limits, the width of the neural network is taken to infinity first, followed by the depth. This limit has been extensively utilized to explore various aspects of neural networks, such as examining the neural covariance, deriving the Edge of Chaos initialization scheme ([5, 6, 7, 15]), evaluating the impact of the activation function [15, 35, 52], studying the behavior of the Neural Tangent Kernel (NTK) [23, 28], and deriving the distribution of the limiting networks at initialization [33, 54]. Another interesting limit is the proportional limit where the ratio of depth to width is fixed, and both are jointly taken to infinity. In [34], the authors showed that for a particular type of residual neural networks (ResNets), the network output exhibits a (scaled) log-normal behavior in this limit, which differs from the sequential limit in which the width is first taken to infinity followed by depth, in which case the distribution of the network output is asymptotically normal ([6, 15]). Additionally, in [45], the authors examined the neural covariance of a multi-layer perceptron (MLP) in the joint limit and proved that it weakly converges to the solution of a Stochastic Differential Equation (SDE). Other works have investigated this limit and found similar results [14, 36, 38, 40, 57]. A third interesting approach is the general limit \(\min\{n,L\}\rightarrow\infty\), where width and depth can go to infinity in any order. To the best of our knowledge, this limit was only studied in [55] (\(\overleftrightarrow{\mathbf{WD}}(\mathbf{I})\)) where convergence of the neural covariance in this limit was established for suitably scaled ResNets, implying that the infinite width and depth limits _commute_.
\(\bullet\)_Commutativity of the limits:_ given some neural function (a function that depends on the network parameters, to be defined later), we can think about whether taking the width and depth limits results in different behaviour depending on how this limit is taken. In this context, we distinguish between two notions of commutativity: _weak commutativity_ which implies that the sequential limits "width \(\rightarrow\infty\), then depth \(\rightarrow\infty\)" and "depth \(\rightarrow\infty\), then width \(\rightarrow\infty\)" yield the same limit, and _strong commutativity_ which implies that the limit "\(\min\{\text{width, depth}\}\rightarrow\infty\)" exists and is unique. Moreover, limits are always defined in some
sense (e.g. \(L_{2}\), weak limit etc.). In this context, strong commutativity was shown in [55] for neural distribution (distribution of a neuron in the network) with Wasserstein distance and for neural covariance kernel with \(L_{2}\) distance. In a recent work by [54], the authors showed weak commutativity of the neural distribution for _controlled_ ResNets, a form of ResNets with scaling factors given by the increments from some reference function.2 In this work, we will focus on the neural covariance instead of neural distribution and show strong commutativity under general depth scaling.
Footnote 2: In [54], while the results are with weak commutativity, the proofs can be in-principle extended to show strong commutativity for the weak convergence of the neural distribution.
## 3 Setup and Definitions: Commutativity and Neural Functions
When analyzing the asymptotic behavior of randomly initialized neural networks, various notions of probabilistic convergence can be employed, depending on the context. In this work, we particularly focus on strong convergence, defined to be the \(L_{2}\) convergence as described in the following definition.
**Definition 1** (Strong Convergence).: _Let \(d\geq 1\). We say that a sequence of \(\mathbb{R}^{d}\)-valued random variables \((X_{k})_{k\geq 1}\) converges in \(L_{2}\) (or strongly) to a continuous random variable \(Z\) if \(\lim_{k\to\infty}\|X_{k}-Z\|_{L_{2}}=0\), where the \(L_{2}\) is defined by \(\|X\|_{L_{2}}=\left(\mathbb{E}[\|X\|^{2}]\right)^{1/2}\)._
With this notion of strong convergence, we are now ready to introduce the commutativity framework for general neural network models.
Notation. Throughout the paper, the width and depth of a neural network model are denoted by \(n\) and \(L\), respectively, and the input dimension is denoted by \(d\). We write \([N]:=\{1,2,\ldots,N\}\) for any \(N\geq 1\).
Let us now consider a general neural network model of width \(n\geq 1\) and depth \(L\geq 1\), given by
\[\begin{cases}Y_{0}(a)=W_{in}a,\quad a\in\mathbb{R}^{d}\\ Y_{l}(a)=\mathcal{F}_{l}(W_{l},Y_{l-1}(a)),\;l\in[L],Y_{l}(a)\in\mathbb{R}^{n}, \end{cases} \tag{1}\]
where \(\mathcal{F}_{l}\) is a mapping that defines the nature of the \(l^{th}\) layer and \(W_{in}\in\mathbb{R}^{n\times d},W_{l}\in\mathbb{R}^{n\times n}\) are model weights. For the sake of simplification, we omit the dependence of \(Y_{l}\) on \(n\) and \(L\) in the notation. We refer to the vectors \(\{Y_{l},l=0,\ldots,L\}\) as _pre-activations_. Let \(\theta_{n,L}=(W_{in},W_{1},\ldots,W_{L})\) be the model weights and assume that \(\theta_{n,L}^{0}\sim\mu_{n,L}^{0}\), where \(\theta_{n,L}^{0}\) are the weights at initialization and \(\mu_{n,L}^{0}\) is a distribution that (naturally) depends on network width \(n\) and depth \(L\). Let us now define the notion of neural functions.
**Definition 2** (Neural Function).: _Given a general neural network model (Eq. (1)) of width \(n\) and depth \(L\), a set of network inputs \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{k})\in(\mathbb{R}^{d})^{k}\), a neural function \(T\) is any function of the form \(T(n,L,\mathbf{a})=\mathcal{G}(\theta_{n,L}^{0},\mathbf{a})\), where \(\mathcal{G}\) is a general mapping with output in \(\mathbb{R}\).3_
Footnote 3: This definition of neural functions can be extended to general mappings \(\mathcal{G}\) with outputs in \(\mathbb{R}^{p}\) for some \(p\geq 1\). This is not required in this paper since we will be focusing on neural covariance kernel which has output in \(\mathbb{R}\).
Note that (almost) any quantity of interest in the training process of neural networks can be represented as a neural function. This observation was first made in the series of Tensor Programs [37], where the result of any neural computation can be seen as a random quantity whose randomness is inherited from the initialization weights. The training dataset is considered deterministic in this case and consists of a sequence of inputs \((a_{1},a_{2},\ldots,a_{k})\). In this paper, we particularly think of neural functions as _proxy_ functions that track some behaviour of the network as we scale width and depth, with the goal of providing insights on scaling strategies (see below for a specific choice of the neural function). With this definition of neural functions, we now formalize the notion of commutativity of the width and depth limits.
**Definition 3** (Commutativity).: _Given a neural function \(T\),4 we say that \(T\) satisfies commutativity for the width and depth limits if for any set of inputs \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{k})\), \(T(n,L,\mathbf{a})\) converges in \(L_{2}\) in the limit \(\min\left\{n,L\right\}\to\infty\)._
Footnote 4: Note that by definition, a neural function is associated with a network model. When we consider a neural function \(T\), the underlying model is assumed to be fixed.
We can define a weak notion of commutativity where only sequential limits are considered, i.e. \(n\) or \(L\) limits are taken in a sequential order.
**Definition 4** (Weak Commutativity).: _Given a sequence of neural functions \(T=(T_{n,L})_{n,L\geq 1}\), we say that \(T_{n,L}\) satisfies weak commutativity for the width and depth limits if for any set of inputs \((a_{1},a_{2},\ldots,a_{k})\), both \(\lim\limits_{L\to\infty}\lim\limits_{n\to\infty}T_{n,L}(a_{1},a_{2},\ldots,a_{k})\) and \(\lim\limits_{n\to\infty}\lim\limits_{L\to\infty}T_{n,L}(a_{1},a_{2},\ldots,a_{k})\) exist in \(L_{2}\) and are equal._
Weak commutativity is trivially implied by commutativity. Intuitively, weak commutativity only deals with the 'extreme' scenarios \(L\gg n\gg 1\) and \(n\gg L\gg 1\) and does not consider the cases where for instance \(L\approx n\gg 1\).
Implications of Commutativity. Naturally, one might ask why we should care about commutativity in the first place. Commutativity of width and depth limits in neural networks holds significant importance for several compelling reasons:
1. _Unification of Width and Depth Scaling:_ when we aim to scale a neural network for improved performance, we often encounter scenarios where we must decide whether to increase the network's width or depth. Each of these choices generally leads to different design considerations, including variations in initialization schemes, activation functions, and learning rates. However, commutativity of the width and depth limits for some neural function \(T\) ensures that regardless of how we scale the network--whether by increasing width before depth, growing both width and depth proportionally, or taking width to infinity before depth--the resulting limiting behavior remains consistent. This means that once an effective scaling strategy is identified for a specific scenario with large width and depth, it remains a viable choice as long as both width and depth are large, simplifying the scaling process.
2. _Robust Scaling:_ as a result of commutativity, scaling the width and depth becomes robust to extreme changes in neural functions. This allows some flexibility in the
scaling procedure; in practice, one might want to increase width significantly while fixing depth, or the opposite, while preserving desirable properties captured by the neural function.
3. _Transfer of Insights:_ commutativity facilitates the transfer of insights from simplified theoretical settings to practical applications. When dealing with neural networks of large width and depth, it can be challenging to analyze their behavior directly. However, commutativity allows us to explore different limits, such as taking width to infinity first and then depth or vice versa, to gain a better understanding of the network's behaviour. Because the limit is universal (no matter how width and depth go to infinity), the insights we get in the simplified setting (e.g. sequential limit) transfers to all settings (e.g. when depth is of the same order as width).
4. _Commutativity is Feasible in Practice:_ we show that by introducing a simple scaling factor in front of the residual block in ResNets, commutativity holds for the neural covariance function at initialization (defined below). This neural function is used as a measure of how network layers separate input data, and led to many interesting practical methods (initialization schemes, neural network Gaussian process, choice of the activation function etc.) [6, 9, 15]. An in-depth discussion on this topic is provided below.
Neural Covariance. In this paper, we focus on neural functions given by the covariance/correlation functions _at initialization_. Given two inputs \(a,b\in\mathbb{R}^{d}\backslash\{0\}\),5 the neural covariance and correlation kernels at layer \(l\) are given by
Footnote 5: Here, we assume that the inputs are non-zero, otherwise all the pre-activations \(Y_{l}\) are zero, and the correlation is undefined in this case. All the results in this paper are trivial if \(a=0\) or \(b=0\). We will therefore always assume that \(a,b\neq 0\).
\[\begin{cases}q_{l,n}(a,b)=\frac{\langle Y_{l}(a),Y_{l}(b)\rangle}{n}\\ c_{l,n}(a,b)=\frac{\langle Y_{l}(a),Y_{l}(b)\rangle}{\|Y_{l}(a)\|\|Y_{l}(b)\|}, \end{cases}\]
where the correlation is only defined when \(\|Y_{l}(a)\|,\|Y_{l}(b)\|\neq 0\).
Note that in general, if commutativity holds for the covariance kernel, then it holds for the neural correlation kernel, and vice-versa. This is true as long as the pre-activation norms \(\|Y_{l}(a)\|\) are non-zero with high probability, which is generally satisfied; see Lemma 5 for a rigorous proof of this result. Hereafter, we will interchangeably discuss commutativity for neural covariance and correlation, while stating the theoretical results only for neural covariance. The results on the convergence of neural covariance are stated for two inputs \(a,b\), but they can be readily generalized to the case of multiple inputs \(a_{1},a_{2},\ldots,a_{k}\in\mathbb{R}^{d}\), where we can define the neural covariance matrix at layer \(l\) by
\[\mathbf{q}_{l,n}(a_{1},a_{2},\ldots,a_{k})=\begin{pmatrix}q_{l,n}(a_{1},a_{1}) &\ldots&q_{l,n}(a_{1},a_{k})\\ \vdots&\ddots&\vdots\\ q_{l,n}(a_{k},a_{1})&\ldots&q_{l,n}(a_{k},a_{k})\end{pmatrix}.\]
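These empirical kernels are straightforward to compute from sampled pre-activations. The following minimal numpy sketch (ours, for illustration; it assumes the pre-activation vectors have already been drawn) implements \(q_{l,n}\), \(c_{l,n}\), and the covariance matrix:

```python
import numpy as np

def q_hat(Ya, Yb):
    """Empirical neural covariance q_{l,n}(a,b) = <Y_l(a), Y_l(b)> / n."""
    return float(Ya @ Yb) / Ya.shape[0]

def c_hat(Ya, Yb):
    """Empirical neural correlation c_{l,n}(a,b); defined only for nonzero norms."""
    return float(Ya @ Yb) / (np.linalg.norm(Ya) * np.linalg.norm(Yb))

def q_matrix(Ys):
    """Covariance matrix (q_{l,n}(a_i, a_j))_{ij} from rows Ys[i] = Y_l(a_i)."""
    Y = np.stack(Ys)                     # shape (k, n)
    return Y @ Y.T / Y.shape[1]
```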
Why Neural Covariance/Correlation? In the literature on signal propagation, there is a significant interest in understanding the covariance/correlation between the pre-activation vectors \(Y_{[tL]}(a)\) and \(Y_{[tL]}(b)\) for two different inputs \(a,b\in\mathbb{R}^{d}\). A natural question in this context is: _Why should we care about this covariance function?_
It is well-established that even with properly initialized multi-layer perceptrons (MLPs), the network outputs \(Y_{L}(a)\) and \(Y_{L}(b)\) become perfectly correlated (correlation=1) in the limit of "\(n\to\infty\), _then_\(L\to\infty\)" [5, 6, 15, 21]. This can lead to unstable behavior of the gradients and make the model untrainable as the depth increases, and also results in the inputs being non-separable by the network6. To address this issue, several techniques involving targeted modifications of the activation function have been proposed [35, 52]. In the case of ResNets, the correlation still converges to 1, but at a polynomial rate [7]. A solution to this problem has been proposed by introducing well-chosen scaling factors in the residual branches, preventing the correlation kernel from converging to 1. This analysis was carried out in the limit "\(n\to\infty\), then, \(L\to\infty\)" in [33], and recently extended in [55] to the case where "\(\min(n,L)\to\infty\)", showing that commutativity holds in this case. Some of these works have provided empirical evidence showing an association between favorable characteristics of the neural covariance/correlation and good trainability properties of deep networks.7
Footnote 6: To see this, assume that the inputs are normalized. In this case, the correlation between the pre-activations of the last layer for two different inputs converges to 1. This implies that as the depth grows, the network output becomes similar for all inputs, and the network no longer separates the data. This is problematic for the first step of gradient descent as it implies that the information from the data is (almost) unused in the first gradient update.
Footnote 7: By favorable characteristics of the neural covariance, we refer for instance to non-degeneracy as \(L\to\infty\) as reported in [33].
## 4 Overview of Existing Results
In this section, we present corollaries of existing results showing different scenarios where commutativity is satisfied or not for the neural covariance. The aim of this section is to show that commutativity depends on the neural architecture.
### Non-Commutativity in MLPs
Let \(d,n,L\geq 1\), and consider a simple MLP architecture given by the following:
\[\begin{cases}Y_{0}(a)=W_{in}a,\quad a\in\mathbb{R}^{d}\\ Y_{l}(a)=W_{l}\phi(Y_{l-1}(a)),\;l\in[L],\end{cases} \tag{2}\]
where \(\phi:\mathbb{R}\to\mathbb{R}\) is the ReLU activation function, \(W_{in}\in\mathbb{R}^{n\times d}\), and \(W_{l}\in\mathbb{R}^{n\times n}\) is the weight matrix in the \(l^{th}\) layer. We assume that the weights are randomly initialized with _iid_ Gaussian variables \(W_{l}^{ij}\sim\mathcal{N}(0,\frac{2}{n})\), \(W_{in}^{ij}\sim\mathcal{N}(0,\frac{1}{d})\). While the activation function is only defined for real numbers (1-dimensional), we abuse the notation and write \(\phi(z)=(\phi(z^{1}),\ldots,\phi(z^{k}))\) for any \(k\)-dimensional vector \(z=(z^{1},\ldots,z^{k})\in\mathbb{R}^{k}\) for any \(k\geq 1\). We refer to the vectors \(\{\phi(Y_{l}),l=0,\ldots,L\}\) as _post-activations_.
In the case of the joint limit \(n,L\to\infty\) with \(n/L\) fixed, it has been shown that the covariance/correlation between \(Y_{[tL]}(a)\) and \(Y_{[tL]}(b)\) becomes similar to that of a Markov chain that incorporates random terms. However, the correlation still converges to 1 in this limit.
**Proposition 1** (Correlation, [15, 45]).: _Consider the MLP architecture given by Eq. (2) and let \(a,b\in\mathbb{R}^{d}\) such that \(a,b\neq 0\). Then, in the limit "\(n\to\infty\), then \(L\to\infty\)" or the joint limit "\(n,L\to\infty\), \(L/n\) fixed", the correlation \(\frac{\langle Y_{L}(a),Y_{L}(b)\rangle}{\|Y_{L}(a)\|\|Y_{L}(b)\|}\) converges9 weakly to 1._
Footnote 9: Note that weak convergence to a constant implies also convergence in probability.
The convergence of the correlation to 1 in the infinite depth limit of a neural network poses a significant issue, as it indicates that the network loses all of the covariance structure from the inputs as the depth increases. This results in degenerate gradients (see e.g. [6]), rendering the network untrainable. To address this problem in MLPs, various studies have proposed the use of depth-dependent shaped ReLU activations, which prevent the correlation from converging to 1 and exhibit stochastic differential equation (SDE) behavior. As a result, the correlation of the last layer does not converge to a deterministic value in this case.
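Proposition 1 can be probed directly by simulation. Below is a small sketch (ours; the widths, depths, and seed are arbitrary illustration choices) that draws the MLP of Eq. (2) at initialization and prints the last-layer correlation, which drifts towards 1 as \(L\) grows; the activation is a parameter so that variants can be substituted.

```python
import numpy as np

def last_layer_correlation(a, b, n, L, act, rng):
    """One draw of c_{L,n}(a,b) for the MLP of Eq. (2) at initialization."""
    d = a.shape[0]
    W_in = rng.normal(0.0, np.sqrt(1.0 / d), size=(n, d))
    Ya, Yb = W_in @ a, W_in @ b
    for _ in range(L):
        W = rng.normal(0.0, np.sqrt(2.0 / n), size=(n, n))  # W_l^{ij} ~ N(0, 2/n)
        Ya, Yb = W @ act(Ya), W @ act(Yb)
    return (Ya @ Yb) / (np.linalg.norm(Ya) * np.linalg.norm(Yb))

rng = np.random.default_rng(0)
a, b = rng.normal(size=16), rng.normal(size=16)
relu = lambda z: np.maximum(z, 0.0)
for L in (2, 16, 128):
    print(L, last_layer_correlation(a, b, n=500, L=L, act=relu, rng=rng))
```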
**Proposition 2** (Correlation SDE, Corollary of Thm 3.2 in [45]).: _Consider the MLP architecture given by Eq. (2) with the following activation function \(\phi_{L}(z)=z+\frac{1}{\sqrt{L}}\phi(z)\) (a modified ReLU). Let \(a,b\in\mathbb{R}^{d}\) such that \(a,b\neq 0\). Then, in the joint limit "\(n,L\to\infty\), \(L/n\) fixed", the correlation \(\frac{\langle Y_{L}(a),Y_{L}(b)\rangle}{\|Y_{L}(a)\|\|Y_{L}(b)\|}\) converges weakly to a nondeterministic random variable.10_
Footnote 10: In [45], the authors show that the correlation \(\frac{\langle\phi_{L}(Y_{L}(a)),\phi_{L}(Y_{L}(b))\rangle}{\|\phi_{L}(Y_{L}(a))\|\,\|\phi_{L}(Y_{L}(b))\|}\) converges to a random variable in the joint limit. Since \(\phi_{L}\) converges to the identity function in this limit, simple calculations show that the correlation between the pre-activations \(\frac{\langle Y_{L}(a),Y_{L}(b)\rangle}{\|Y_{L}(a)\|\|Y_{L}(b)\|}\) is also random in this limit.
The joint limit, therefore, yields non-deterministic behaviour of the covariance structure. It is easy to check that even with shaped ReLU as in Proposition 2, taking the width to infinity first, then depth, the result is a deterministic covariance structure. The main takeaway from this section is the following:
**Corollary 1**.: _With MLPs (Eq. (2)), the width and depth limits do not commute for the neural covariance/correlation._
### Commutativity with Scaled Residual Networks
Using the same notation as in the MLP case, consider the following ResNet architecture of width \(n\) and depth \(L\)
\[\begin{split} Y_{0}(a)&=W_{in}a,\quad a\in\mathbb{ R}^{d}\\ Y_{l}(a)&=Y_{l-1}(a)+\frac{1}{\sqrt{L}}W_{l}\phi(Y_{ l-1}(a)),\,l\in[1:L],\end{split} \tag{3}\]
where \(\phi:\mathbb{R}\to\mathbb{R}\) is the ReLU activation function. Assume that the weights are randomly initialized with _iid_ Gaussian variables \(W_{l}^{ij}\sim\mathcal{N}(0,\frac{1}{n})\), \(W_{in}^{ij}\sim\mathcal{N}(0,\frac{1}{d})\). If we consider the
set of scaling factors of the form \(L^{-\gamma}\) for \(\gamma>0\), then the choice of \(\gamma=1/2\) is the smallest value of \(\gamma\) such that the network output does not explode in the infinite-depth limit (see Lemma 1). Therefore, in some sense, this scaling is 'optimal' amongst uniform scalings (meaning all residual branches are scaled with the same factor) for two reasons: it stabilizes the network as depth increases, and it does not result in trivial behaviour (see discussion after Proposition 3).
With the ResNet architecture Eq. (3), we have the following result for the covariance kernel, which establishes commutativity in this case.
**Proposition 3** (Thm 2 in [55]).: _Let \(a,b\in\mathbb{R}^{d}\) such that \(a,b\neq 0\) and \(a\neq b\). Then, we have the following_
\[\sup_{t\in[0,1]}\left\|q_{[tL],n}(a,b)-q_{t}(a,b)\right\|_{L_{2}}\leq C\left( \frac{1}{\sqrt{n}}+\frac{1}{\sqrt{L}}\right)\]
_where \(C\) is a constant that depends only on \(\|a\|\), \(\|b\|\), and \(d\), and \(q_{t}(a,b)\) is the solution of the following differential flow_
\[\begin{cases}\frac{dq_{t}(a,b)}{dt}&=\frac{1}{2}\frac{f(c_{t}(a,b))}{c_{t}(a, b)}q_{t}(a,b),\\ c_{t}(a,b)&=\frac{q_{t}(a,b)}{\sqrt{q_{t}(a,a)}\sqrt{q_{t}(b,b)}},\\ q_{0}(a,b)&=\frac{\langle a,b\rangle}{d},\end{cases} \tag{4}\]
_where the function \(f:[-1,1]\to[-1,1]\) is given by_
\[f(z)=\frac{1}{\pi}(z\arcsin(z)+\sqrt{1-z^{2}})+\frac{1}{2}z.\]
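The flow in Eq. (4) is easy to integrate numerically. A sketch (ours) follows; it uses the identity \(\frac{f(c)}{c}q_{t}(a,b)=f(c)\sqrt{q_{t}(a,a)q_{t}(b,b)}\) to avoid dividing by a small correlation, together with the fact that \(f(1)=1\) for the diagonal dynamics:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(z):
    z = float(np.clip(z, -1.0, 1.0))    # guard against round-off outside [-1, 1]
    return (z * np.arcsin(z) + np.sqrt(1.0 - z * z)) / np.pi + 0.5 * z

def flow(t, y):
    q_ab, q_aa, q_bb = y
    c = q_ab / np.sqrt(q_aa * q_bb)
    # f(c)/c * q_ab equals f(c) * sqrt(q_aa * q_bb); the diagonal uses f(1) = 1
    return [0.5 * f(c) * np.sqrt(q_aa * q_bb), 0.5 * q_aa, 0.5 * q_bb]

d = 30
rng = np.random.default_rng(0)
a, b = rng.normal(size=d), rng.normal(size=d)
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
y0 = [a @ b / d, 1.0 / d, 1.0 / d]      # q_0(a,b), q_0(a,a), q_0(b,b)
sol = solve_ivp(flow, (0.0, 1.0), y0, method="RK45", rtol=1e-9, atol=1e-12)
print("q_1(a,b) ~", sol.y[0, -1])
```

Since the same flow reappears in Theorem 2 below, up to a deterministic time change, this solver also covers general normalized scalings.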
This result suggests that commutativity for the neural covariance depends on the architecture, and holds in this particular case. More importantly, with this residual architecture, taking the width and depth limits to infinity yields a non-trivial limit of the neural covariance given by the function \(q_{t}\). In [33], it was shown that \(q_{t}\) is a universal kernel, meaning that it is not only non-trivial, but one can approximate any sufficiently smooth function on some compact set with features from the kernel \(q_{t}\). This has a number of implications, especially in the context of neural network Gaussian processes. We invite the reader to check [33] for a more in-depth discussion. Another recent result showed that trivial behaviour can be avoided by scaling the main branch of the ResNet. The neural covariance converges weakly to a random variable in the proportional limit, which implies that such scaling breaks commutativity.
**Proposition 4** (Corollary of Thm in [57]).: _Consider a ResNet where the hidden layers are of the form \(Y_{l}(a)=\beta Y_{l-1}(a)+\sqrt{1-\beta^{2}}W_{l}\phi_{L}(Y_{l-1}(a))\), where \(\beta\in(0,1)\) is a constant, and \(\phi_{L}\) is the shaped ReLU (defined in Proposition 2). Then, the width and depth limits for the covariance kernel do not commute in this case._
Scaling the main branch of the residual network results in a similar behaviour to the MLP case. Intuitively, with the factor \(\beta\), the direct contribution of any layer to the main branch decreases exponentially with depth, hence simulating the 'multiplicative' nature of MLPs. Note that the use of shaped ReLU is essential with this scaling in order to avoid
degeneracy problems; with ReLU, the correlation converges to 1 in the proportional limit. In the same paper, the authors show a similar result for Transformers which is a more modern residual architecture.
With the background information provided above, we are now able to present our findings. In the next section, we demonstrate commutativity of the width and depth limits for a general class of ResNet architectures, extending the results of [55].
## 5 Main Results: Commutativity under General Scaling
In this section, we present our main results regarding commutativity of the width and depth limits under general scaling rules. All the proofs are deferred to the Appendix. We first define the _sequence of scaling factors_, a notion that will be frequently used in the paper.
**Definition 5** (Sequence of Scaling Factors).: _A sequence of scaling factors is an infinite triangular array of non-negative real numbers. It has the form \(\alpha=(\alpha_{l,L})_{l\in\{1,\dots,L\},L\geq 1}\)._
Visually, one can think of \(\alpha\) as an infinite object of the form
\[\alpha=\begin{array}{cccc}\alpha_{1,1}&&&\\ \alpha_{1,2}&\alpha_{2,2}&&\\ \vdots&\vdots&\ddots&\\ \alpha_{1,L}&\dots&\dots&\alpha_{L,L}\\ \vdots&\dots&\dots&\end{array}\]
The use of such notation will come in handy when we scale up the depth of a neural network. Such sequences will be used to define a scaling strategy as network depth grows.
Setup. Recall the previously introduced notation: the width and depth of the network are denoted by \(n\) and \(L\), respectively, and the input dimension is denoted by \(d\). Let \(n,L,d\geq 1\), and consider the following neural network model with skip connections
\[\begin{cases}Y_{0}(a)=W_{in}a,\quad a\in\mathbb{R}^{d},\\ Y_{l}(a)=Y_{l-1}(a)+\alpha_{l,L}W_{l}\,\phi(Y_{l-1}(a)),\quad l\in[L],\end{cases} \tag{5}\]
where \(\phi\) is the ReLU activation function, \(W_{in}\in\mathbb{R}^{n\times d}\) is the input layer weight matrix, and \(W_{l}\in\mathbb{R}^{n\times n}\) is the weight matrix in the \(l^{th}\) layer. We assume that the weights are randomly initialized as \(W_{in}^{ij}\sim\mathcal{N}(0,1/d)\) and \(W_{l}^{ij}\sim\mathcal{N}(0,1/n)\) for \(l\in[L]\); \(a\neq 0\) is an arbitrary input in \(\mathbb{R}^{d}\), and \(\alpha=(\alpha_{l,L})_{L\geq 1,l\in[L]}\) is a sequence of scaling factors. For the sake of simplification, we only consider networks with no bias, and we omit the dependence of \(Y_{l}\) on \(n\) and \(L\) in the notation. For a vector \(Z\in\mathbb{R}^{k}\), we write \(Z=(Z^{1},Z^{2},\dots,Z^{k})\in\mathbb{R}^{k}\) to denote its entries. Hereafter, we consider two inputs \(a,b\in\mathbb{R}^{d}\) satisfying \(a,b\neq 0\) and \(\langle a,b\rangle\neq 0\).
As depth increases, the pre-activations might grow arbitrarily large, depending on the choice of the sequence \(\alpha\). The next result fully characterizes sequences that guarantee stability in terms of the \(L_{2}\) norm.
**Lemma 1**.: _For all \(L\geq 1,l\in[L],i\in[n]\)_
\[\mathbb{E}\left[Y_{l}^{i}(a)^{2}\right]=\frac{\|a\|^{2}}{d}\prod_{k=1}^{l} \left(1+\frac{\alpha_{k,L}^{2}}{2}\right).\]
_As a result, \(\sup_{l\in[L],L\geq 1,i\in[n]}\mathbb{E}\left[Y_{l}^{i}(a)^{2}\right]\) is bounded iff \(\sup_{L\geq 1}\sum_{l=1}^{L}\alpha_{l,L}^{2}<\infty\). 13_
Footnote 13: This stability condition was introduced in [33]. It is worth mentioning that in [54], in the context of “controlled” ResNets, the authors use an \(L_{2}\) condition on the control process to show weak commutativity for the output distribution where convergence is considered in the weak sense. This condition is similar in flavour to our stability condition.
Proof.: Simple calculations yield
\[\mathbb{E}\left[Y_{l}^{i}(a)^{2}\right]=\mathbb{E}\left[Y_{l-1}^{i}(a)^{2}\right]+\alpha_{l,L}^{2}\mathbb{E}\left[\phi(Y_{l-1}^{i}(a))^{2}\right].\]
To conclude, it suffices to see that \(Y_{l-1}^{i}(a)\) is a symmetric random variable, and therefore \(\mathbb{E}\left[\phi(Y_{l-1}^{i}(a))^{2}\right]=\frac{1}{2}\mathbb{E}\left[Y_{l-1}^{i}(a)^{2}\right]\).
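The product formula can be checked by direct simulation of Eq. (5). A sketch (ours; the width, depth, and trial counts are illustrative), using the uniform scaling \(\alpha_{l,L}=L^{-1/2}\):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, L, trials = 10, 200, 20, 200
a = rng.normal(size=d)
alpha = np.full(L, 1.0 / np.sqrt(L))      # uniform scaling alpha_{l,L} = L^{-1/2}

estimates = np.zeros(trials)
for t in range(trials):
    Y = rng.normal(0.0, np.sqrt(1.0 / d), size=(n, d)) @ a
    for l in range(L):
        W = rng.normal(0.0, np.sqrt(1.0 / n), size=(n, n))
        Y = Y + alpha[l] * (W @ np.maximum(Y, 0.0))
    estimates[t] = np.mean(Y ** 2)        # averages E[Y_L^i(a)^2] over i

theory = (a @ a / d) * np.prod(1.0 + alpha ** 2 / 2.0)
print(estimates.mean(), "vs theory", theory)
```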
The result of Lemma 1 is independent of the width \(n\). Hence, a necessary and sufficient condition so that the pre-activations do not blow up with depth (in \(L_{2}\) norm), for any width \(n\), is to have \(\sup_{L\geq 1}\sum_{l=1}^{L}\alpha_{l,L}^{2}<\infty.\) We say that such sequences of scaling factors are stable.
**Definition 6** (Stable Sequence of Scaling Factors).: _Let \(\alpha\) be a sequence of scaling factors. We say that \(\alpha\) is stable if it satisfies \(\sup_{L\geq 1}\sum_{l=1}^{L}\alpha_{l,L}^{2}<\infty\). We denote the space of stable sequences of scaling factors by \(\mathcal{S}\). For \(\alpha\in\mathcal{S}\), we define the \(\mathcal{S}\)-norm of \(\alpha\) by \(\|\alpha\|_{S}=\sqrt{\sup_{L\geq 1}\sum_{l=1}^{L}\alpha_{l,L}^{2}}\).14_
Footnote 14: If we allow negative values for \(\alpha_{l,L}\), then we can show that the space \(\mathcal{S}\), endowed with the inner product \(\langle\alpha,\beta\rangle_{S}=\sup_{L\geq 1}\sum_{l=1}^{L}\alpha_{l,L}\beta_{ l,L}\), is a complete space (Banach space). We omit these technicalities in this paper.
Stable Sequences of Scaling Factors have first appeared in [33]. In that work, the sequential limit 'infinite-width, then infinite-depth' was considered, and such sequences were proven to stabilize the gradients as well, and yield other favorable network properties regarding the neural covariance kernel and the neural tangent kernel.
In the next two (sub)sections, we show that unlike in MLPs or residual networks with scaled main branch where the neural covariance/correlation exhibits different limiting behaviors depending on how the width and depth limits are taken, under general conditions on the sequence \(\alpha\), for the ResNet architecture given by Eq. (5), the neural covariance converges strongly to a deterministic kernel, which depends on the choice of the sequence \(\alpha\), in the limit \(\min(n,L)\rightarrow\infty\) regardless of the relative rate at which \(n\) and \(L\) tend to infinity. We show different examples and recover and strengthen previous results as special cases.
### Sequence of Scaling Factors as Convergent Series
In this section, we consider sequences \(\alpha\) that "converge" to a series in a specific way. We show that in this case, the neural covariance kernel converges to the same limiting kernel with a specific convergence rate in the limit \(\min(n,L)\to\infty\), hence inducing commutativity.
**Theorem 1** (Commutativity with Quasi-Convergent Series).: _Let \(\alpha\in\mathcal{S}\). Assume that there exists a sequence \(\zeta=(\zeta_{i})_{i\geq 1}\in\ell_{2}(\mathbb{N})\) such that \(\sum_{l=1}^{L}|\alpha_{l,L}^{2}-\zeta_{l}^{2}|\to 0\) as \(L\to\infty\). Then, we have that for all \(t\in[0,1]\)_
\[\sup_{t\in(0,1]}\|q_{\lfloor tL\rfloor,n}(a,b)-q_{\infty}^{\zeta}(a,b)\|_{L_{2 }}\leq C\left(n^{-1/2}+\sum_{l=1}^{L}|\alpha_{l,L}^{2}-\zeta_{l}^{2}|+\sum_{l \geq L}\zeta_{l}^{2}\right),\]
_where \(C\) is a constant that depends only on \(\|a\|,\|b\|,d,\|\zeta\|_{S}\), and \(q_{\infty}^{\zeta}(a,b)=\lim_{L\to\infty}q_{L}^{\zeta}(a,b)\) and \(q_{L}^{\zeta}\) is given by the recursive formula_
\[\begin{cases}q_{L}^{\zeta}(a,b)=q_{L-1}^{\zeta}(a,b)+\frac{1}{2}\zeta_{L}^{2}\frac{f(c_{L-1}^{\zeta}(a,b))}{c_{L-1}^{\zeta}(a,b)}q_{L-1}^{\zeta}(a,b),\quad L\geq 1\\ c_{L}^{\zeta}(a,b)=\frac{q_{L}^{\zeta}(a,b)}{\sqrt{q_{L}^{\zeta}(a,a)q_{L}^{\zeta}(b,b)}},\\ q_{0}^{\zeta}(a,b)=\frac{\langle a,b\rangle}{d},\end{cases}\]
_where \(f:[-1,1]\to[-1,1]\) is given by_
\[f(z)=\frac{1}{\pi}(z\arcsin z+\sqrt{1-z^{2}})+\frac{1}{2}z.\]
Theorem 1 shows that the neural covariance kernel converges to the same limiting kernel no matter how the width and depth limits are taken. In the proof, provided in Appendix C, we first show the existence of the limit of \(q_{L}^{\zeta}\), then proceed to bound the difference with the neural covariance kernel. The convergence rate depends on the properties of the series \(\zeta\) that approximates \(\alpha\) as depth grows. Notice that the limiting kernel \(q_{\infty}^{\zeta}\) does not depend on \(t\in(0,1]\). This is because the entries of \(\zeta\) do not depend on depth \(L\).
_Examples._ The conditions of Theorem 1 are satisfied by many sequences \(\alpha\). Examples include:
* "Decreasing" scaling: assume that \(\alpha_{l,L}=\zeta_{l}\) for all \(L\geq 1,l\in[L]\), where \(\zeta\in\ell_{2}(\mathbb{N})\). We call this scaling decreasing because \(\lim_{l\to\infty}\zeta_{l}=0\). This choice of scaling factors trivially satisfies the conditions of Theorem 1 and the convergence rate is given by \(\mathcal{O}(n^{-1}+\sum_{l\geq L}\zeta_{l}^{2})\). An examples of such scaling was studied in [33] and empirical results (performance of trained networks) were reported with \(\zeta=\left((l\log(l+1)^{2})^{-1/2}\right)_{l\geq 1}\).
* "Aggressive" Uniform scaling: assume that \(\alpha_{l,L}=L^{-\gamma}\) for some constant \(\gamma>1/2\). This scaling is called uniform because all the residual branches have the same scaling factor. This sequence of scaling factors satisfies the conditions of Theorem 1 with \(\zeta=0_{\ell_{2}(\mathbb{N})}\). The convergence rate is given by \(\mathcal{O}(n^{-1}+L^{-(2\gamma-1)})\), and the limiting kernel is trivial and given by \(q_{\infty}^{\zeta}=q_{0}^{\zeta}\), hence the nomenclature 'aggressive' since this scaling removes all contributions of the hidden layers in the limiting kernel. Note that this case covers the Neural ODE limit with scaling factors \(\alpha_{l,L}=L^{-1}\). In the next section, we will see that another kind of uniform scaling(non-aggressive) that yield non-trivial limits.
### Normalized Sequences of Scaling Factors
In this section, we discuss another type of sequences of scaling factors. We know from [55] that with \(\alpha_{l,L}=L^{-1/2}\), the limiting kernel is given by the solution of an ODE. In this section, we generalize this result by considering all sequences \(\alpha\) that satisfy the condition \(\sum_{l=1}^{L}\alpha_{l,L}^{2}=1\) for all \(L\geq 1\). Let us first give a formal definition of such sequences.
**Definition 7** (Normalized Sequence of Scaling Factors).: _Let \(\alpha\) be a sequence of scaling factors. We say that \(\alpha\) is normalized if it satisfies \(\sum_{l=1}^{L}\alpha_{l,L}^{2}=1\) for all \(L\geq 1\). The space of normalized sequences of scaling factors is denoted by \(\mathcal{S}_{1}\).15_
Footnote 15: Note that the constant \(1\) in this definition can be replaced by any constant \(M>0\) and all the subsequent results remain valid.
It is trivial that \(\mathcal{S}_{1}\subset\mathcal{S}\), and for all \(\alpha\in\mathcal{S}_{1},\|\alpha\|_{\mathcal{S}}=1\) (hence the subscript in \(\mathcal{S}_{1}\)). The next result establishes commutativity of the infinite width and depth limit for normalized sequences.
**Theorem 2** (Commutativity with Normalized Sequences).: _Consider a sequence of scaling factors \(\alpha\in\mathcal{S}_{1}\). Let \(h_{L}=\max_{1\leq l\leq L}\alpha_{l,L}^{2}\) and assume that \(Lh_{L}^{2}=o_{L}(1).\) Then, we have_
\[\sup_{t\in(0,1]}\|q_{\lfloor tL\rfloor,n}(a,b)-q_{t_{L}}(a,b)\|_{L_{2}}\leq C \left(n^{-1/2}+h_{L}+Lh_{L}^{2}\right),\]
_where \(C\) depends only on \(\|a\|,\|b\|,\) and \(d\), and \(t_{L}\) is given by \(t_{L}=\sum_{k=1}^{\lfloor tL\rfloor}\alpha_{k,L}^{2}\), and \(q_{t}\) is given by the solution of the following differential flow_
\[\begin{cases}\frac{dq_{t}(a,b)}{dt}&=\frac{1}{2}\frac{f(c_{t}(a,b))}{c_{t}(a,b)}q_{t}(a,b),\\ c_{t}(a,b)&=\frac{q_{t}(a,b)}{\sqrt{q_{t}(a,a)}\sqrt{q_{t}(b,b)}},\\ q_{0}(a,b)&=\frac{\langle a,b\rangle}{d},\end{cases} \tag{6}\]
_where the function \(f:[-1,1]\to[-1,1]\) is given by_
\[f(z)=\frac{1}{\pi}(z\arcsin(z)+\sqrt{1-z^{2}})+\frac{1}{2}z.\]
_Moreover, assume that there exists a function \(\lambda:[0,1]\to[0,1]\) such that the sequence \(\alpha\) satisfies \(\sup_{t\in[0,1]}\left|\sum_{k=1}^{\lfloor tL\rfloor}\alpha_{k,L}^{2}-\lambda (t)\right|\leq r_{L}\) and \(\lim_{L\to\infty}r_{L}=0\). Then, we have_
\[\sup_{t\in(0,1]}\|q_{\lfloor tL\rfloor,n}(a,b)-q_{\lambda(t)}(a,b)\|_{L_{2}} \leq C^{\prime}\left(n^{-1/2}+h_{L}+Lh_{L}^{2}+r_{L}\right),\]
_where \(C^{\prime}\) depends only on \(\|a\|,\|b\|,d\)._
The result of Theorem 2 generalizes previous results from [55] to arbitrary normalized sequences. Using this theorem, we recover those results by choosing \(\alpha_{l,L}=L^{-1/2}\) and verifying the conditions in the theorem. In particular, with the new proof techniques developed in this paper, we obtain a stronger convergence rate for depth.
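For a concrete non-uniform instance (our example, not taken from [55]), take \(\alpha_{l,L}^{2}=2l/(L(L+1))\): the sequence is normalized, \(h_{L}=2/(L+1)\) so that \(Lh_{L}^{2}=O(L^{-1})\), and \(t_{L}=\lfloor tL\rfloor(\lfloor tL\rfloor+1)/(L(L+1))\to\lambda(t)=t^{2}\), so Theorem 2 applies with a quadratic time change. A quick numerical check:

```python
import numpy as np

t = 0.5
for L in (10, 100, 1000):
    alpha_sq = 2.0 * np.arange(1, L + 1) / (L * (L + 1))   # alpha_{l,L}^2
    assert abs(alpha_sq.sum() - 1.0) < 1e-12               # alpha is in S_1
    t_L = alpha_sq[: int(t * L)].sum()
    print(L, t_L, "->", t ** 2)                            # t_L -> lambda(t) = t^2
```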
**Corollary 2** (Normalized Uniform Scaling).: _Assume that \(\alpha_{l,L}=L^{-1/2}\) for all \(L\geq 1\) and \(l\in[L]\). Then, the results of Theorem 2 are satisfied with \(\lambda(t)=t\), \(r_{L}=L^{-1}\), and \(h_{L}=L^{-1}\). As a result, we have that_
\[\sup_{t\in(0,1]}\|q_{\lfloor tL\rfloor,n}(a,b)-q_{t}(a,b)\|_{L_{2}}\leq C\left( n^{-1/2}+L^{-1}\right),\]
_where \(C\) depends only on \(\|a\|,\|b\|,d\), and \(q_{t}\) is defined in Theorem 2._
Proof.: With \(\alpha_{l,L}=L^{-1/2}\), we trivially have \(h_{L}=L^{-1}\) and \(Lh_{L}^{2}=L^{-1}\). Moreover, given \(t\in(0,1]\) we have that \(\sum_{k=1}^{\lfloor tL\rfloor}\alpha_{k,L}^{2}=\frac{\lfloor tL\rfloor}{L}\), and therefore \(\left|\sum_{k=1}^{\lfloor tL\rfloor}\alpha_{k,L}^{2}-t\right|\leq L^{-1}\).
Better depth rate. In this paper, we obtain a depth rate of order \(L^{-1}\), in contrast to the \(L^{-1/2}\) convergence rate reported in [55]. The reason lies in the differences of the proof techniques used to derive the results. The proof techniques in both results are essentially 'orthogonal' in the following sense: in [55], the proofs rely on taking the depth to infinity first, while controlling the effect of width at the same time. With this approach, the best depth rate one can obtain is \(L^{-1/2}\), which is induced by the Euler discretization error (note that with \(\alpha_{l,L}=L^{-1/2}\), the ResNet behaves as the solution of a Stochastic Differential Equation (SDE) in the infinite depth limit when the width is fixed). However, in this work, we first take the width to infinity while controlling the depth. By doing this, all the randomness in the covariance is removed as \(n\to\infty\), regardless of the depth \(L\). As a result, by taking depth to infinity, we deal with deterministic dynamical systems instead of stochastic ones (the SDE case), in which case the Euler discretization error is of order \(L^{-1}\). We refer the reader to Section 7 for more details about the proof techniques.
_Remark_.: The normalized uniform scaling is optimal in terms of the depth-related error in Theorem 2. More precisely, the depth-related error is given by the term \(\mathcal{R}_{L}(\alpha)=h_{L}+Lh_{L}^{2}+r_{L}\) up to constant \(C\). Therefore, a natural question one might ask is: what properties should the sequence of scaling factors satisfy in order to minimize this error? Given a fixed depth \(L\), this problem can be formulated as a constrained minimization problem
\[\min_{\alpha\in\mathcal{S}_{1}}\mathcal{R}_{L}(\alpha)=h_{L}+Lh_{L}^{2}+r_{L}, \tag{7}\]
where the constraint is given by the fact that \(\alpha\in\mathcal{S}_{1}\).
**Lemma 2**.: _The normalized uniform scaling given by \(\alpha_{l,L}=L^{-1/2}\) is a solution to problem (7)._
To explain the intuition behind the result of Lemma 2, we first need to understand what each term in \(\mathcal{R}_{L}\) represents. The first term \(h_{L}\) is well-known in numerical methods and represents the Euler discretization (global) error. The second term \(Lh_{L}^{2}\) is a bound on the error between the Euler scheme of the ODE satisfied by \(q_{t}\) and the actual neural covariance kernel from the finite depth network. The last term \(r_{L}\) is induced by the behaviour of the scaling sequence as \(L\) grows. If we consider just the sum of the first two terms, uniform scaling balances the two terms which should intuitively minimize that sum. It also happens that for this choice of scaling \(r_{L}\) is of the same order as \(h_{L}+Lh_{L}^{2}\).
## 6 Experiments and Practical Implications
In this section, we validate our theoretical results with simulations on large width and depth residual neural networks of the form Eq. (5) with different choices of the sequence \(\alpha\).
### Convergence of the neural covariance
Theorem 2 and Theorem 1 predict that the covariance \(q_{l,n}(a,b)\) for two inputs \(a,b\) converges in \(L_{2}\) norm in the limit \(\min(n,L)\to\infty\).
Uniform scaling \(\alpha_{l,L}=L^{-1/2}\). In Fig. 1, we compare the empirical covariance \(q_{l,n}\) with the theoretical prediction \(q_{t}\) from Theorem 2 for \(n\in\{2^{3},2^{8},2^{14}\}\) and \(L\in\{2^{1},2^{3},2^{8}\}\). We chose maximum depth to be much smaller than maximum width to take into account the difference in the width and depth convergence rates: \(n^{-1/2}\) versus \(L^{-1}\) in this case.
The empirical \(L_{2}\) error between \(q_{L,n}\) and \(q_{1}\) (from Theorem 2) is also reported. As the width increases, we observe an excellent match with the theory. The role of the depth is less
Figure 1: The blue curve represents the average covariance \(q_{l,n}(a,b)\) for ResNet Eq. (5) with \(n\in\{2^{3},2^{8},2^{14}\}\), \(L\in\{2^{1},2^{3},2^{8}\}\), \(d=30\), and \(a\) and \(b\) are sampled randomly from \(\mathcal{N}(0,I_{d})\) and normalized to have \(\|a\|=\|b\|=1\). The average is calculated based on \(N=100\) simulations. The shaded blue area represents 1 standard deviation of the observations. The red dashed line represents the theoretical covariance \(q_{t}(a,b)\) predicted in Theorem 2. The empirical \(L_{2}\) error for \(t=1\) is reported.
noticeable, but for instance, with width \(n=2^{14}\), we can see that the \(L_{2}\) error is smaller with depth \(L=256\) as compared to depth \(L=2\). The theoretical prediction \(q_{t}\) is approximated with an ODE solver (RK45 method, [1]) for \(t\in[0,1]\) with a discretization step \(\Delta t=\)1e-6.
Convergent Scaling \(\alpha_{l,L}=l^{-1}\). In Fig. 2, we run the same experiment for decreasing scaling \(\alpha_{l,L}=l^{-1}\). Note that in this case, the limiting neural covariance does not depend on \(t\). The red dashed line represents this limiting value in the figure (estimated with \(q_{L,n}\) with \(L=\)1e5 and \(n=\)1e5). Similar to the results with uniform scaling, we observe a convergence pattern to the red line as \(L\) and \(n\) increase. The role of depth in this case is more pronounced.
### Comparison with other architectures
In Fig. 3, we show the evolution of the distribution of \(q_{L,n}\) for three different architectures with \(L=n\). With our choice of scaling factors \(\alpha_{l,L}\), the distribution concentrates around the deterministic limit given by the solution of the ODE described in Theorem 2. For MLP with shaped ReLU, and the Shaped ResNet (Proposition 4, the main branch is scaled with \(\beta=1/2\)), we observe that the neural covariance remains random as width (and depth) grows. The sequential infinite-width-then-depth is illustrated in blue, and shows that with
Figure 2: Same setup of Fig. 1, with \(\alpha_{l,L}=l^{-1}\). The red dashed line represents the theoretical covariance \(q_{\infty}(a,b)\) predicted in Theorem 1. The empirical \(L_{2}\) error for \(t=1\) is reported.
our choice of scaling factors, the covariance concentrates around this sequential limit even when \(n=L\to\infty\). In contrast, with shaped MLP/ResNet, the two limits (sequential vs proportional) exhibit different behaviours, confirming that commutativity does not hold in these two cases.
### Improved Depth rate
In [55], commutativity was established with the choice of scaling factors \(\alpha_{l,L}=L^{-1/2}\). The reported convergence rate (as \(n\) and \(L\) go to infinity) of the neural covariance is of the form \(\mathcal{O}(n^{-1/2}+L^{-1/2})\). In this paper, we established an improved convergence rate of order \(\mathcal{O}(n^{-1/2}+L^{-1})\), which suggests that convergence is more sensitive to the width than to depth. To validate this result, we conduct the following experiment: we take \(n\) to infinity while fixing \(L\) and obtain the infinite-width neural covariance \(q_{L,\infty}\) (infinite-width covariance for the last layer). We then measure \(\Delta_{L}=|q_{L,\infty}(a,b)-q_{t=1}(a,b)|\) where \(q_{t}\) is given in Theorem 2 and \((a,b)\) are sampled randomly following the procedure in Fig. 1. We observe a perfect match of the \(L^{-1}\) convergence rate (where the intercept was adjusted so that all the lines start from the same initial value).
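This check can be run without Monte Carlo noise: for \(\alpha_{l,L}=L^{-1/2}\), the infinite-width covariance at finite depth obeys the discrete recursion \(q_{l}=q_{l-1}+\frac{1}{2L}\frac{f(c_{l-1})}{c_{l-1}}q_{l-1}\), so \(\Delta_{L}\) can be computed exactly and its decay rate fitted by regressing \(\log\Delta_{L}\) on \(\log L\). A sketch (ours; a very deep recursion serves as a proxy for \(q_{t=1}\)):

```python
import math
import numpy as np

def f(z):
    return (z * math.asin(z) + math.sqrt(1.0 - z * z)) / math.pi + 0.5 * z

def q_inf_width(L, q_ab, q_aa, q_bb):
    """Infinite-width covariance after L layers, uniform scaling alpha_{l,L} = L^{-1/2}."""
    for _ in range(L):
        c = q_ab / math.sqrt(q_aa * q_bb)
        q_ab += 0.5 / L * f(c) * math.sqrt(q_aa * q_bb)   # = (1/2L) f(c)/c q_ab
        q_aa *= 1.0 + 0.5 / L
        q_bb *= 1.0 + 0.5 / L
    return q_ab

q0 = (0.3 / 30.0, 1.0 / 30.0, 1.0 / 30.0)   # q_0(a,b), q_0(a,a), q_0(b,b), d = 30
ref = q_inf_width(2 ** 20, *q0)             # very deep proxy for q_{t=1}(a,b)
Ls = 2 ** np.arange(4, 12)
deltas = [abs(q_inf_width(int(L), *q0) - ref) for L in Ls]
slope = np.polyfit(np.log(Ls), np.log(deltas), 1)[0]
print("fitted depth-rate exponent ~", slope)   # close to -1, matching Corollary 2
```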
## 7 Outline of Proof Techniques
In [55], it was shown that the neural covariance satisfies commutativity with the specific scaling \(\alpha_{l,L}=L^{-1/2}\). The main technical novelty in that work is taking the depth \(L\) to infinity first, while controlling the dependence of the constants on the width \(n\). Given a fixed width \(n\), taking depth \(L\) to infinity results in an SDE behaviour (Stochastic Differential
Figure 3: The distribution of the \(q_{L,n}\) with \(L=n\) for varying \(n\in[5,4000]\). **(Left)** ResNet described in Eq. (5) with \(\alpha_{l,L}=L^{-1/2}\). **(Center)** MLP described in Eq. (2) with shaped ReLU \(\phi_{L}\)[45]. **(Right)** Shaped ResNet with \(\beta=1/2\) (Proposition 4). The vertical blue line represents the sequential limit \(\lim_{L\to\infty}\lim_{n\to\infty}q_{L,n}\). The inputs \(a,b\) are sampled following the same procedure in Fig. 1.
Equation), and the main tools to study such convergence are numerical methods for SDEs (Euler discretization scheme). In this case, it is possible to obtain infinite-depth strong convergence where the constants do not depend on width \(n\). Commutativity then follows by studying the infinite-width limit of these SDEs. This involves the use of tools from mean-field stochastic calculus (namely McKean-Vlasov processes).
In this work, we take an orthogonal approach where the width is taken to infinity first, and the constants are well chosen so that they do not depend on depth, followed by infinite-depth, which concludes the proof. The main innovation in the proofs is related to the introduction of _the auxiliary process_\(\tilde{Y}\): given a residual network of the form Eq. (5), we introduce an auxiliary process \(\tilde{Y}_{l}\) that shares some properties with the original neural process \(Y_{l}\). We bound the difference between \(Y_{l}\) and \(\tilde{Y}_{l}\) using Gronwall-type techniques, and show that the constants in this bound can be chosen to be independent of depth, hence providing a depth-uniform bound for the infinite-width limit. More importantly, the auxiliary process has iid Gaussian entries, which facilitates the study of the covariance kernel related to \(\tilde{Y}_{l}\), and allows us to conclude on commutativity. Some technical results involve the use of concentration inequalities to deal with low probability events such as \(\phi(Y_{l})=0_{\mathbb{R}^{n}}\) when \(n\) is large.
## 8 Conclusion and Limitations
In this paper, we have shown that, at initialization, under general assumptions on the sequence of scaling factors, the large-depth and large-width limits of a residual neural network (ResNet) commute for the neural covariance, using novel proof techniques. Our results generalize and strengthen previous works on commutativity.
However, our results are restricted to the neural covariance function and cannot imply anything about commutativity for other neural functions. More importantly, it is unclear what happens during training, and potentially, different behaviors can occur depending on how the learning rate is chosen as a function of width and depth. While we can heuristically conjecture that commutativity holds during training under suitable scaling strategies, commutativity is a precise mathematical statement that requires rigorous proofs.
One might also ask whether commutativity is needed in the current context of Large Language Models, where most architectures are in the regime \(n\gg L\gg 1\) (e.g. \(n\sim 1000,L\sim 50\)) and that this regime can be fairly described by the sequential limit '\(n\to\infty\), then \(L\to\infty\)'. While this might be true to some extent, note that convergence of neural functions can happen at different width and depth rates (e.g. \(n^{-1/2}\) and \(L^{-1}\) in the case of neural covariance), which implies that small changes in depth (or width) could significantly change the behaviour of the neural function. We leave this question for future work. |
2304.14463 | Moccasin: Efficient Tensor Rematerialization for Neural Networks | The deployment and training of neural networks on edge computing devices pose
many challenges. The low memory nature of edge devices is often one of the
biggest limiting factors encountered in the deployment of large neural network
models. Tensor rematerialization or recompute is a way to address high memory
requirements for neural network training and inference. In this paper we
consider the problem of execution time minimization of compute graphs subject
to a memory budget. In particular, we develop a new constraint programming
formulation called \textsc{Moccasin} with only $O(n)$ integer variables, where
$n$ is the number of nodes in the compute graph. This is a significant
improvement over the works in the recent literature that propose formulations
with $O(n^2)$ Boolean variables. We present numerical studies that show that
our approach is up to an order of magnitude faster than recent work especially
for large-scale graphs. | Burak Bartan, Haoming Li, Harris Teague, Christopher Lott, Bistra Dilkina | 2023-04-27T18:41:37Z | http://arxiv.org/abs/2304.14463v2 | # Moccasin: Efficient Tensor Rematerialization for Neural Networks
###### Abstract
The deployment and training of neural networks on edge computing devices pose many challenges. The low memory nature of edge devices is often one of the biggest limiting factors encountered in the deployment of large neural network models. Tensor rematerialization or recompute is a way to address high memory requirements for neural network training and inference. In this paper we consider the problem of execution time minimization of compute graphs subject to a memory budget. In particular, we develop a new constraint programming formulation called Moccasin with only \(O(n)\) integer variables, where \(n\) is the number of nodes in the compute graph. This is a significant improvement over the works in the recent literature that propose formulations with \(O(n^{2})\) Boolean variables. We present numerical studies that show that our approach is up to an order of magnitude faster than recent work especially for large-scale graphs.
Machine Learning, Neural Networks, Deep Learning
## 1 Introduction
The need for efficient deep neural network (DNN) computations for both training and inference continues to grow. Most compute architectures for DNNs make use of a limited amount of fast, _local memory_ in conjunction with a much larger amount of _global memory_ that is significantly slower to access. For example, accelerators for mobile devices today will dedicate several MB of local memory (per core) for storage of intermediate output tensors before resorting to the devices' main memory; training of large DNNs is done primarily on GPUs' local GDDR memory before offloading tensors to DDR memory on the motherboard. However, there are many DNNs in practice for which the local memory is not enough to store all the required intermediate outputs.
Effective optimization of the local memory footprint during the computation can make a difference in meeting latency targets for computations since global memory accesses can be too slow (Sze et al., 2017, Sec. V). In addition, the trend toward on-device training will further stress the limited resources due to the need to store intermediate data for gradient back-propagation (tinyML Foundation, 2022).
There are several techniques for DNN computation sequencing that can be used to manage the local memory footprint, and a well-developed toolchain for this purpose is likely to employ multiple methods working together. For example, _operation sequencing_ is a technique studied in (Gagrani et al., 2022) and its references. Our paper focuses on another technique called _rematerialization_. When input data is needed for a compute operation, it can be read from local memory, or alternatively be recomputed (rematerialized). In some cases the extra compute can prove to be advantageous relative to occupying valuable local memory with the data until it is ready to be used, with the latter leaving a larger memory footprint. Deciding what operations to rematerialize (or not) in order to optimize latency and memory use is a PSPACE-complete problem (Gilbert et al., 1979). Further, we often need a method that can tackle large computation graphs with acceptable "compile time", and thus need an approach that scales well with graph size.
While there is a history of rematerialization research for managing register use by compilers (Kubota, 1998; Briggs et al., 1992), more recently, these techniques have been further developed and studied for use in DNNs, where the size of computation graphs and the amount of data movement can be extremely large (Jain et al., 2020; Patil et al., 2022; Schuler et al., 2022). These works formulate a mixed integer linear program (MILP) with a Boolean variable for each use of a compute node's output - either read from memory or rematerialized. We have found that this formulation has limitations when attempting to scale to large graphs due to the need for \(O(n^{2})\) Boolean variables (where \(n\) is the number of nodes in the computation graph). While linear relaxation followed by rounding is an approximation that is leveraged in these works, we have found that these rounded solutions can be far from optimal, limiting the applicability of this approach.

Figure 1: Percentage of total duration increase against solve time in seconds for our proposed method (Moccasin) and Checkmate (Jain et al., 2020) on a real-world graph with \(n=442\) nodes and \(m=1247\) edges. The memory budget is set to \(80\%\) of the peak memory for the initial topological order without rematerialization. The time until the first data point in each curve is spent on presolve.
The general rematerialization problem is stated here, and will be made more explicit in Section 2. Given a directed acyclic graph (DAG), \(G=(V,E)\), with \(|V|=n\) and \(|E|=m\), let nodes \(v\in V\) represent compute operations and directed edges \((u,v)\in E\) represent data dependencies such that the output tensors of all the nodes \(\{w:(w,v)\in E\}\) (predecessors of \(v\)) are required to be present in local memory before computing \(v\). Let \(seq(G)\) be a _rematerialization sequence_ of nodes (ordered list) that contains each node at least once. The rematerialization problem statement is then as follows:
**Memory-constrained Computation Graph Sequencing with Rematerialization**

\[\begin{aligned}\underset{seq(G)}{\text{minimize}}\quad&\text{total execution duration}\\ \text{subject to}\quad&seq(G)\text{ meets the data dependencies of }G\\ &\text{peak memory footprint of }seq(G)\leq\text{local memory capacity }M\end{aligned}\]
This definition is intentionally high-level and lacking some details that will be provided later. Since our later formulations do not directly optimize the sequence but rather optimize other variables from which memory footprints and the final sequence are computed, we skip further description of such details in this section.
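To make the memory accounting concrete, the following small sketch (our own illustration; the 5-node graph, schedules and free decisions are hypothetical and not taken from the paper's experiments) checks that a candidate rematerialization sequence respects the data dependencies and reports its peak memory footprint. Recomputing nodes 1 and 2 lowers the peak from 4 to 3 unit-size tensors, at the cost of their extra compute time.

```python
def peak_memory(schedule, preds):
    """schedule: list of (node, tensors_to_free_afterwards); outputs have
    unit size. Returns the peak number of simultaneously resident tensors,
    checking that every node's inputs are in memory when it is (re)computed."""
    live, peak = set(), 0
    for node, frees in schedule:
        assert preds[node] <= live, f"missing inputs for node {node}"
        live.add(node)
        peak = max(peak, len(live))
        live -= frees
    return peak

# Hypothetical chain 1->2->3->4->5 with skip edges 1->4 and 2->5.
preds = {1: set(), 2: {1}, 3: {2}, 4: {1, 3}, 5: {2, 4}}
no_remat = [(1, set()), (2, set()), (3, set()), (4, {1, 3}), (5, set())]
remat = [(1, set()), (2, set()), (3, {2}), (4, {1, 3}),
         (1, set()), (2, {1}), (5, set())]
print(peak_memory(no_remat, preds))  # 4
print(peak_memory(remat, preds))     # 3
```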
Our work introduces a new optimization model formulation that defines the problem variables as _retention intervals_, allowing use of only \(O(n)\) integer variables. We then leverage constraint programming (CP), allowing complex data dependency and cumulative memory threshold constraints to be enforced easily and efficiently during optimization. We further show that our proposed approach demonstrates significant scaling improvements relative to the MILP formulation of Checkmate (Jain et al., 2020). Figure 1 provides a numerical comparison of the two methods.
### Related Work
The problem of rematerialization has its roots in compiler optimization (Colombet et al., 2015; Lozano and Schulte, 2019; Lozano et al., 2019) and automatic differentiation with checkpointing (Kubota, 1998; Griewank and Walther, 2008; 2000). Recently, there is a stream of works on rematerialization in machine learning, motivated by the need to train large DNN on GPU with limited memory capacity (Kumar et al., 2019; Kusumoto et al., 2019; Beaumont et al., 2021; Chen et al., 2016; Kirisame et al., 2021; Mostafa, 2022; Huang et al., 2019); most relevant to us are the works based on combinatorial optimization (Jain et al., 2020; Patil et al., 2022; Schuler et al., 2022).
The scope of our work is most closely aligned with that of (Jain et al., 2020), where they develop a mixed integer linear program (MILP) for duration minimization under memory constraints referred to as Checkmate. The formulation of Checkmate defines Boolean matrix variables \(R,S,F\) for node/tensor recomputation, local memory storage, and deallocation, respectively, resulting in \(O(n^{2})\) Boolean variables. While the paper demonstrates the capability of rematerialization to meet memory limits while minimizing duration, the paper does not sufficiently address the problem complexity and scaling to larger graphs. The authors do mention this issue and offer a method of linear relaxation of the Boolean variables followed by 2-stage rounding to accelerate finding a solution. However, our experiments show that for many graphs, this rounding approach produces results that are far from optimal, and often the rounded solution does not meet the memory threshold and is thus infeasible (see Section 3).
As a consequence of formulating the problem using matrices of variables where the columns represent specific nodes (permutations are not supported), in Checkmate all solutions are constrained to follow an input sequence of the graph nodes without rematerialization. We call this the "input topological order". This input must be a valid topological order for solutions that meet the node precedence requirements to be possible. The question then arises, "what input topological order should be used?" Indeed, this constraint significantly reduces the space of potential valid rematerialization sequences, and so can be valuable for limiting algorithm complexity. However, as shown in (Gagrani et al., 2022), there can be a wide variability of peak memory footprint for different sequences (without rematerialization).
One of the attractive features of our new formulation is that it does not require an input topological order. However, when studying the graphs used in (Jain et al., 2020), we found no variations in the peak memory footprint across 50 randomly generated topological orders for each graph (without rematerialization). To compare directly with prior methods, we add the input topological order constraint (see Section 2.3) and use this for all of our experiments. Removing the input topological order constraint and/or studying the impact of different input topological orders on solution quality and solve time are interesting topics for future research.
A follow-on to Checkmate (Patil et al., 2022) recognizes that memory optimization may combine rematerialization with paging - strategically reading and writing intermediate data to external memory. While these external memory accesses are slow, they still may be required if the available local memory is too small to fit any rematerialization solution of practical total duration. This work, however, still suffers from a scaling issue as it inherits the \(O(n^{2})\) Boolean variable formulation. An extension of our formulation to incorporate paging would be valuable future research. Another extension is (Schuler et al., 2022), which builds on the Checkmate formulation to allow making rematerialization decisions across two heterogeneous cores. This extension also suffers from the same fundamental scaling issue.
In (Kumar et al., 2019), a novel method is introduced for finding a valid rematerialization sequence (one that meets the required memory threshold) by performing divide-and-conquer using tree decomposition. This work focuses on proving an attractive complexity for generating such a sequence. However, it is of limited use in practice since it does not contain any explicit mechanism for minimizing total duration of the final sequence produced by the algorithm.
The potential for reducing peak local memory footprint using rematerialization depends on the computation graph topology. For example, a line graph (string of nodes connected by single edges) offers no potential for improvement since the local memory footprint at each node does not depend on past deallocations. In contrast, a simple U-net typically allows significant opportunities for footprint savings. In general, we observe that networks with "long skip connections" are the topologies that exhibit more potential for rematerialization gains. DNN training computation graphs have a "U-net-like" structure since they have forward and backward paths with edges that cross between the two to support gradient computation in the backward path. (Jain et al., 2020) only experiments with training graphs. In contrast, we study rematerialization on both training and inference graphs, and find compelling rematerialization gains also for inference graphs, particularly ones with complex interconnect topology.
### Contributions
* _Retention intervals formulation of rematerialization:_ We introduce Moccasin, a new formulation with significantly better scaling of solution complexity to large graphs. We demonstrate that this formulation can be solved effectively using _constraint programming_ (CP).
* _Constrained number of allowed rematerializations:_ We introduce the parameter \(C_{v}\), which defines the maximum number of times a node \(v\) can be computed in the final sequence. We demonstrate empirically that this complexity reduction retains solution quality even for very small values of \(C_{v}\).
* _Compare and contrast approaches:_ We provide comparison of solution speed for Checkmate vs Moccasin and demonstrate equivalence of solutions.
* _Impact of memory limit:_ Prior works make simple assumptions on local memory limit in their evaluation. We show the impact of a range of local memory limits on solution speed and final solution value.
## 2 Retention Intervals Formulation
The main building block of our formulation is the concept of _output retention intervals_ which, as we will show, simplifies the problem formulation greatly. For each node in the computation graph, we will define intervals that indicate the retention of its output in local memory. More precisely, each rematerialization of a node will have its own interval; with a slight abuse of terminology, we use the term rematerialization also for the first time that a node is computed. We will assume a node \(v\in V\) is allowed to be rematerialized up to \(C_{v}\) times, hence \(C_{v}\) intervals will be defined for node \(v\). The parameter \(C_{v}\) can be considered a hyperparameter. We will show via numerical experiments that picking \(C_{v}\) to be as small as \(2\) for all of the nodes is often good enough in practice, with no loss of optimality (see Section 3).
We define the retention intervals for each node \(v\in V\) by its "start" and "end" times
\[s_{v}^{i},e_{v}^{i}\in\mathcal{D},\ \forall i\in\left\{1,\ldots,C_{v}\right\},\]
where the domain \(\mathcal{D}\) is a set of integers of size \(O(n)\) that will be defined explicitly in the subsequent subsections. A memory block of size \(m_{v}\) is allocated at the start and deallocated at the end of the interval.
The concept of time in our formulation is defined through _compute events_. This is one of the key components of our proposed formulation. To better illustrate this point, consider Figure 3, which depicts our formulation for the 4-node example graph in Figure 2. In this example, we have set \(C_{v}=2\) for all nodes \(v\). Figure 3 illustrates the formulation for this graph, where the vertical axis corresponds to different nodes and the horizontal axis is the time partitioned into events. The formulation as visualized in Figure 3 shows two intervals for node 1 and one interval for the other nodes. The reason for this is that the optimal solution for this example case requires only one active interval for nodes 2, 3, 4 while requiring two active intervals for node 1. We note that the starting and ending positions of the intervals in Figure 3 are obtained by solving the optimization problem. These are not known ahead of time, but rather modeled as variables in the optimization problem. Furthermore, this figure depicts circles as visual representation of potential node execution events. For example, in Figure 3, node 1 is computed during event 1 and its output is kept in memory until event 3. The second interval for node 1 starts in event 7 indicating that node 1 is recomputed during event 7. Observe that event 1 has a memory usage of \(m_{1}\), while event 7 has a memory usage of \(m_{1}+m_{3}\), as the output of node 3 is retained in memory when node 1 is recomputed.

Figure 2: Example graph with 4 nodes.
The total duration of the rematerialization sequence is a weighted sum over the events in which a node is being executed, where the weights are simply the actual durations of the nodes (in seconds or processor cycles). Such events are marked with a filled circle in Figure 3. We say that the _peak memory footprint_ of a solution is the maximum memory usage among all events. The cyan colored arrows in Figure 3 represent the memory usage of node execution events. The peak memory footprint of the solution in the example is 3, assuming each node outputs a unit-size tensor, realized at event 10. Observe that events marked with empty circles do not contribute to either the duration or the peak memory footprint of the solution.
The primary reason why we work with this event-based definition for time is to keep the domain of the variables \(s^{i}_{v},e^{i}_{v}\) at a small size. More concretely, the CP-SAT solver (Perron and Furnon, 2022), which we use to numerically solve our proposed optimization problem, accepts only integer variables as it implements Lazy Clause Generation (Ohrimenko et al., 2009), known to be effective for constrained scheduling problems (Schutt et al., 2013). If we instead set the time domain to be seconds (which would require quantization) or processor cycles, then the corresponding CP formulation would require variable domains of larger sizes. The domain size of the variables in the formulation has a direct impact on the solver speed, especially for CP solvers.
Note that for each interval, the first time slot (shown using a filled circle in Figure 3) is dedicated to the computation of node \(v\) while the rest of the interval indicates the retention of the output of node \(v\). It follows that intervals may not start at the same time due to our system model where nodes are executed sequentially, i.e. there is no parallel computing. The empty circles in Figure 3 indicate that no interval starts at that particular time, i.e., no computation occurs. The decision of which circles will be filled or empty is a by-product of the optimization problem and not known ahead of time. Furthermore, it is important to note that a memory block of size \(m_{v}\) is allocated during the entirety of the interval. Also, observe that since the first event of an interval indicates the compute of that node, this is when the predecessors of that node must be available in memory.
### Optimization Problem Formulation
In addition to the start and end times of the intervals, we define the Boolean variables \(a^{i}_{v}\) to model whether the \(i\)'th interval of node \(v\) is active or inactive. This grants us the flexibility of not requiring a node \(v\) to be rematerialized exactly \(C_{v}\) times. If inactive (i.e. \(a^{i}_{v}=0\)), the corresponding interval does not contribute to the sum in the objective as well as the sums in the memory and precedence constraints. We assume that the duration \(w_{v}\) and output size \(m_{v}\) for each node are known. Using the retention intervals as the foundation of our formulation, we state the rematerialization problem as an optimization problem as follows:
\[\begin{aligned}\underset{s,e,a}{\text{minimize}}\quad&\sum_{v,i}w_{v}a^{i}_{v}&&\text{(1)}\\ \text{subject to}\quad&s^{i}_{v}\leq e^{i}_{v},\;\forall v,i&&\text{(2)}\\ &e^{i}_{v}\leq s^{i+1}_{v},\;\forall v,\forall i<C_{v}&&\text{(3)}\\ &\sum_{v,i\,:\,s^{i}_{v}\leq t\leq e^{i}_{v}}m_{v}a^{i}_{v}\leq M,\;\forall t\in\mathcal{D}&&\text{(4)}\\ &\forall(u,v)\in E,\,\forall i\in\{i:a^{i}_{v}=1\},\,\exists j\;\text{such that}\;a^{j}_{u}=1,\;s^{j}_{u}+1\leq s^{i}_{v}\leq e^{j}_{u}&&\text{(5)}\\ &s^{i}_{v}\neq s^{j}_{u},\;\forall v,u\in\{v,u:v\neq u\},\,\forall i,j&&\text{(6)}\\ &a^{1}_{v}=1,\,\forall v&&\text{(7)}\\ &s^{i}_{v},e^{i}_{v}\in\mathcal{D},\,a^{i}_{v}\in\{0,1\},\,\forall v,i\,.&&\text{(8)}\end{aligned}\]
The objective (1) is the total duration. Constraints (2) enforce that the end time of each interval comes after its start time. Constraints (3) ensure that intervals of the same node do not overlap. The memory budget constraints are given in (4), where the parameter \(t\) represents time. The precedence constraints (5) enforce that there exists an overlapping and active interval for all predecessors of \(v\) at all start times \(s^{i}_{v}\) of \(v\). Observe that the memory and precedence constraints are nonlinear constraints. We describe in detail how the _cumulative constraint_ and the _reservoir constraint_ from constraint programming are used to model these nonlinear constraints in the next subsection. The constraints in (7) are intended to make the first retention interval for every node active. The constraints (6) ensure that the starting times of the intervals are all different, i.e. compute events do not overlap.

Figure 3: Visualization of the retention intervals formulation.
The domain of the variables \(s^{i}_{v},e^{i}_{v}\) is as follows
\[\mathcal{D}=\{1,2,\ldots,\sum_{v}C_{v}\}\,, \tag{9}\]
where \(\sum_{v}C_{v}\) is equal to the number of intervals, which is upper bounded by \(|\mathcal{D}|\leq n\max_{v}C_{v}\). Observe that we may not have more than \(\sum_{v}C_{v}\) filled circles in Figure 3.
### Memory and Precedence Constraints using CP
**Memory constraints:** The set of constraints in (4) enforces that the resource usage never exceeds the local memory budget \(M\) at any time \(t\). Note that the constraint (4) is not linear in the variables \(s^{i}_{v},e^{i}_{v}\). This type of inequality can be modeled using the CP constraint cumulative.
The cyan colored arrows in Figure 3 are intended to visually represent how we model the memory constraint. Each vertical arrow can be viewed as a separate inequality constraint. In particular, the sum of output sizes \(m_{v}\) for each interval an arrow intersects is less than or equal to \(M\). Recall that it is not needed to have an arrow for each circle in Figure 3 since the memory use could not increase during the empty circles. Hence, we only need to consider the filled circles. However, we do not know which circles are to be filled ahead of time. This is readily addressed by the cumulative modeling which considers only the beginning of intervals for memory calculation.
We have used the function AddCumulative from CP-SAT (Perron and Furnon, 2022) for the memory constraints. This function requires intervals, demands, and capacity as input arguments. Intervals are the retention intervals from our formulation defined by the start and end variables \(s^{i}_{v},e^{i}_{v}\) and considered only when \(a^{i}_{v}=1\). The demand for each interval is the output size \(m_{v}\) for the corresponding node \(v\). Finally, the capacity is simply the memory budget \(M\). The pseudo-code representation of this constraint is as follows:
\[\texttt{cumulative}(\{s^{i}_{v},e^{i}_{v},a^{i}_{v},m_{v}\}_{v\in V,i\in\{1..C_{v}\}},M)\,.\]
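As a concrete illustration, here is a minimal sketch of this constraint using the CP-SAT Python API from Google OR-Tools; the node sizes, durations, horizon and budget are made-up values, and constraints (5)-(6) are omitted, so this is a fragment of the full model rather than the authors' implementation. The optional interval variables carry the activity literals \(a^{i}_{v}\), so inactive intervals contribute neither memory nor duration.

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
m = {1: 2, 2: 1, 3: 1}    # output sizes m_v (illustrative)
w = {1: 3, 2: 2, 3: 2}    # node durations w_v (illustrative)
C, D, M = 2, 6, 3         # remat limit C_v, domain size |D|, memory budget

s, e, a, intervals, demands = {}, {}, {}, [], []
for v in m:
    for i in range(C):
        s[v, i] = model.NewIntVar(1, D, f"s_{v}_{i}")
        e[v, i] = model.NewIntVar(1, D, f"e_{v}_{i}")
        a[v, i] = model.NewBoolVar(f"a_{v}_{i}")
        length = model.NewIntVar(0, D, f"len_{v}_{i}")
        # Enforced (and memory-consuming) only when a_v^i = 1.
        intervals.append(model.NewOptionalIntervalVar(
            s[v, i], length, e[v, i], a[v, i], f"ret_{v}_{i}"))
        demands.append(m[v])
    model.Add(a[v, 0] == 1)                    # constraint (7)
    for i in range(C - 1):
        model.Add(e[v, i] <= s[v, i + 1])      # constraint (3)

model.AddCumulative(intervals, demands, M)      # memory constraint (4)
model.Minimize(sum(w[v] * a[v, i] for v in m for i in range(C)))
solver = cp_model.CpSolver()
print(solver.Solve(model) == cp_model.OPTIMAL)
```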
**Precedence constraints:** We recall that prior to the execution of each compute task, the outputs of all of its predecessors must be available in memory. This could be written as the constraints in (5). Note that these are nonlinear constraints in the variables. We employ the reservoir constraint from constraint programming to model the data dependencies. We view each predecessor as a resource whose level needs to be maintained such that it does not go below \(0\) while its successor is being executed.
Let us define the resource level change events \(f(\cdot)\) for each edge \((u,v)\in E\) and \(i\in\{1,\ldots,C_{v}\}\):
\[\begin{aligned}f(s^{i}_{v})&=-1, & f(s^{i}_{v}+1)&=+1,\\ f(s^{j}_{u})&=+1, & f(e^{j}_{u})&=-1, \qquad j=1,\ldots,C_{u}\,.\end{aligned}\tag{10}\]
The level change function \(f(\cdot)\) returns \(0\) at every other point in time except for those in (10). The function AddReservoirConstraintWithActive from CP-SAT takes the following input parameters: times, demands, actives, minimum level. Times and demands are as defined in (10). Actives are the Boolean variables \(a^{i}_{v}\) and the minimum level is set to \(0\). This constraint ensures that for node \(v\), one of the intervals for each of its predecessors \(u\) overlaps with the starting time of the node \(v\).
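Below is a sketch of the reservoir constraint for a single edge \((u,v)\), with illustrative bounds; an auxiliary variable encodes the time \(s^{i}_{v}+1\) so that all event times are plain integer variables. We assume the AddReservoirConstraintWithActive signature described above (times, level changes, actives, minimum and maximum level); the exact name may vary across OR-Tools versions.

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
D, C_u, C_v = 10, 2, 2
s_u = [model.NewIntVar(1, D, f"s_u_{j}") for j in range(C_u)]
e_u = [model.NewIntVar(1, D, f"e_u_{j}") for j in range(C_u)]
a_u = [model.NewBoolVar(f"a_u_{j}") for j in range(C_u)]
s_v = [model.NewIntVar(1, D, f"s_v_{i}") for i in range(C_v)]
a_v = [model.NewBoolVar(f"a_v_{i}") for i in range(C_v)]

times, changes, actives = [], [], []
for j in range(C_u):            # +1 while the j-th output of u is retained
    times += [s_u[j], e_u[j]]
    changes += [1, -1]
    actives += [a_u[j], a_u[j]]
for i in range(C_v):            # -1/+1 bracketing each compute event of v
    s_plus = model.NewIntVar(2, D + 1, f"s_v_{i}_plus")
    model.Add(s_plus == s_v[i] + 1)
    times += [s_v[i], s_plus]
    changes += [-1, 1]
    actives += [a_v[i], a_v[i]]

# Level stays in [0, C_u + C_v]: v may only start while u is retained.
model.AddReservoirConstraintWithActive(times, changes, actives, 0, C_u + C_v)
```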
Finally, the nonlinear constraints (6) could be addressed using the alldifferent constraint from CP.
### Complexity Reduction by Enforcing an Initial Topological Ordering
The proposed framework offers the flexibility of allowing solutions that are not limited to a predetermined topological ordering. However, one could enforce an input topological ordering to reduce the size of the search space and in turn reduce the solve time. We expand upon this point in this subsection.
We start by considering a new domain for the integer variables \(s^{i}_{v},e^{i}_{v}\):
\[\mathcal{D}=\left\{1,2,\ldots,\frac{n(n+1)}{2}\right\}, \tag{11}\]
where the domain size is now \(O(n^{2})\) rather than \(O(n)\). We now introduce the concept of _stages_ into our formulation, which has been previously employed in (Jain et al., 2020). The \(j\)'th stage contains \(j\) events; this is depicted in Figure 4. Given a topological ordering of the nodes, the first stage enforces the computation of the first node in the order. In subsequent stages, the \(j\)'th node in the input topological order is enforced to be computed in the last event of stage \(j\), while the previous events in stage \(j\) allow nodes \(1,\ldots,(j-1)\) to be computed. In particular, we enforce that node \(j\) could be computed only in event \(j\) of a stage. Since this
restricts the starting times of each interval to specific events, we no longer need to explicitly include the constraint (6).
Note that the start time for the first retention interval for each node is no longer a variable but a fixed value. More precisely, for node \(v\), the value of \(s^{1}_{v}\) is equal to \(j(j+1)/2\), where \(j\) is the index of node \(v\) in the input topological ordering.
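The event indexing implied by the stage construction is straightforward to compute; the helper below (our own, for illustration) returns the fixed first compute event of the \(j\)'th node in the input topological order and the full set of events in which it may be (re)computed.

```python
def first_compute_event(j):
    """Node j (1-indexed in the input topological order) is first computed
    in the last event of stage j, i.e. global event j(j+1)/2."""
    return j * (j + 1) // 2

def allowed_events(j, n):
    """Node j may only be (re)computed in event j of stages k >= j,
    i.e. global events k(k-1)/2 + j."""
    return [k * (k - 1) // 2 + j for k in range(j, n + 1)]

print(first_compute_event(3))  # 6
print(allowed_events(2, 4))    # [3, 5, 8]
```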
### Optimization in Two Phases
Our approach consists of two phases. In Phase 1, we solve a variant of the optimization problem in (1) - (8) to find a memory feasible solution. This solution is then used as initialization for Phase 2, which is the optimization problem stated in (1) - (8). The problem in Phase 1 has the objective
\[\underset{s,e,a,M_{var}}{\text{minimize}}\quad\max(M_{var},M) \tag{12}\]
where we introduce the variable \(M_{var}\in\mathbb{R}\) for the peak memory footprint. We modify the memory constraint in (4) as follows for Phase 1:
\[\sum_{v,i\,:\,s^{i}_{v}\leq t\leq e^{i}_{v}}m_{v}a^{i}_{v}\leq M_{var},\,\forall t \in\mathcal{D}\,. \tag{13}\]
The goal of Phase 1 is to arrive at an intermediate solution, whose peak memory footprint is below the local memory target \(M\). We address this by considering the maximum of the peak memory variable \(M_{var}\) and memory budget \(M\) as the objective. This objective could be linearized by introducing an auxiliary variable \(\tau\in\mathbb{R}\) and then considering: minimize \(\tau\) subject to \(\tau\geq M_{var}\) and \(\tau\geq M\). The other constraints of the optimization problem are unchanged.
Note that any topological ordering of the graph \(G\) provides a trivial feasible solution to the problem in Phase 1 since it does not have a hard memory constraint. For the rematerialization problem itself, however, there is no easy solution that could be constructed and provided as an initial solution. Phase 1 is intended to fill this gap.
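The linearization of the Phase 1 objective amounts to two inequalities on an auxiliary variable; here is a minimal CP-SAT fragment (ours, with an illustrative budget and loose upper bounds) of just this step:

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
M = 100                                        # memory budget (illustrative)
M_var = model.NewIntVar(0, 10 * M, "M_var")    # peak memory footprint variable
tau = model.NewIntVar(0, 10 * M, "tau")        # auxiliary variable
model.Add(tau >= M_var)
model.Add(tau >= M)
model.Minimize(tau)    # equivalent to minimizing max(M_var, M) in (12)
```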
### Overview of Formulations
Table 1 provides a complexity comparison for our approach and the approach of (Jain et al., 2020). When comparing the number of constraints for different formulations, it is important to keep in mind that the constraints in the MILP are all linear constraints while the constraints of the CP could be linear or nonlinear.
_Remark 2.1_.: The optimization problem in (1) - (8) is specified using integer variables \(s^{i}_{v}\) and \(e^{i}_{v}\) while the discrete variables of the MILP in (Jain et al., 2020) are all Boolean. The fact that our formulation is stated using integer variables as opposed to Boolean variables is only for convenience. For a more direct comparison of variable counts, it is straightforward to rewrite this optimization problem by representing the integer variables as Boolean sequences, in which case the number of Boolean decision variables will be \(O(Cn\log n)\) (taking \(C=\max_{v}C_{v}\)) as opposed to \(O(n^{2})\) Boolean variables in the Checkmate formulation.
Note that the size of search space in Table 1 for Moccasin is dominated by the term \(n^{Cn}\). For the version of Moccasin with Boolean decision variables, the search space size would scale with \(2^{Cn\log n}\), which is the same as \(n^{Cn}\). This is smaller than the size of the search space for Checkmate, which is \(2^{n^{2}+nm}\).
## 3 Numerical Results
### Graphs for Evaluation
The graphs we use for evaluation of the method presented in this paper come from several sources. Additional descriptions can be found in Appendix A.2.
**Checkmate Graphs:** We use graphs from the Checkmate repository (Jain, 2020). They represent the single-batch training computation graphs for a selected set of neural networks. Use of these graphs allows us to compare directly to results presented in that work.
**Random Layered Graphs:** We leverage the synthetically constructed random layered graphs introduced in (Gagrani et al., 2022, Appendix A) as examples of graphs with complex interconnect topology.
**Real-world Graphs:** While our synthetic and standard, public-domain graphs are valuable for experimentation, we also see value in presenting results obtained from neural computation graphs used for commercial development of artificial intelligence hardware and software products. We sample a set of representative graphs that have diverse architectures and sizes.
Figure 4: Illustration of events grouped into stages.
### Experimental Setup
We use the open source CP-SAT solver from Google OR-Tools (Perron and Furnon, 2022) to solve the CP. For the comparisons against Checkmate, we have used the implementation provided in the codebase for (Jain et al., 2020), which utilizes Gurobi (Gurobi Optimization, LLC, 2023) and CVXPY (Diamond and Boyd, 2016; Agrawal et al., 2018). We have run all of the numerical experiments on a 16-CPU-core workstation with 32 GB of RAM. Furthermore, we note that once a solution to the optimization problem is obtained, the corresponding sequence of tasks can be executed on any hardware, CPU or GPU. This is due to the generality of our approach, as we make no assumptions about the computing unit that will be used to execute the computations.
### Experimental Results
Figure 5 shows the solve progress plots for 4 random layered graphs \(G_{1},\ldots,G_{4}\) of varied sizes and 4 different local memory budget values for each graph. The curves of Moccasin are all shifted to the right by the amount of time spent in Phase 1 for fair comparison. We have set the time limit to \(30\) minutes for \(G_{1},G_{2},G_{3}\) and \(1\) hour for \(G_{4}\).
For \(G_{1}\), the solve times are on the order of a few seconds, and we observe that Moccasin is faster to obtain objective values similar to those of Checkmate. For \(G_{2}\), which has \(n=250\) nodes, when the memory budget is tight, we see that Checkmate fails to find a feasible solution within the given time limit of \(30\) minutes. For the highest memory budget value, it finds the optimal solution in 10 minutes while Moccasin finishes in a few seconds. For graphs \(G_{3}\) and \(G_{4}\), which have \(n=500\) and \(n=1000\) nodes respectively, Checkmate times out with no solution even if we consider a higher time limit of 3 hours, while Moccasin converges to a good solution (i.e. low total duration increase) in less than an hour. In addition to the solve time, the memory required for optimization also becomes a critical factor for large graphs. In fact, for \(G_{3}\) and \(G_{4}\), Checkmate exits with an out-of-memory error. The results in Figure 5 are best understood by contrasting the number of discrete variables in Checkmate, which grows quadratically in \(n\), with that in Moccasin, which grows as \(n\log n\) (see Table 1).
In all of the experiments, we have set \(C_{v}=2\) for all \(v\in V\), which we specified in the plot legends by \(C=2\). We have used the version of the formulation in Section 2.3 where we enforce an input topological ordering, which we have randomly generated.
Table 2 provides numerical results for a range of different computation graphs. In Table 2 we have selected the memory budget values for each graph to be \(80\%\) and \(90\%\) of the initial peak memory without rematerialization. In addition to the methods of Checkmate and Moccasin, we include results for the rounding algorithm proposed in (Jain et al., 2020) under the "LP+rounding" column of the table. This method consists of relaxing the MILP into a linear program (LP) and then rounding the solution (see (Jain et al., 2020) for further details on this algorithm). Note that the solution produced by the rounding algorithm is not guaranteed to satisfy the memory budget constraint. This can be seen in Table 2, where in most cases the peak memory for the relaxation and rounding approach is higher than the memory budget \(M\).
Table 2 shows that the random layered graphs and real-world graphs are the most challenging ones among the graph set. The solve times for these graphs are higher than the CM graphs, which is consistent with the fact that they have higher edge densities and more complex edge connectivities.
It is critical to highlight that Moccasin is able to scale to graphs with a few hundred nodes and a few thousand edges, while keeping the total duration increase to a few per cent, which is essential for performance.
## 4 Conclusion
Rematerialization is a valuable technique that allows one to reduce the peak local memory footprint at the expense of a longer total duration of computation. However, finding a rematerialization sequence that minimizes total duration while meeting a prescribed local memory limit is not easy - especially for graphs with complex interconnections. Previous research (Jain et al., 2020) demonstrates a MILP formulation that can address this optimization problem; however, our experiments show that the approach does not scale well as the graph size and the complexity of the graph topology grow.
We introduce Moccasin, a new formulation that expresses the decision variables as _retention intervals_ specified by the start and end event indices of each node's output tensor. We further manage complexity using the hyper-parameter \(C_{v}\), which limits the maximum number of retention intervals for each node.
\begin{table}
\begin{tabular}{||c c c c c||} \hline Formulation & \# Bool. vars & \# integer vars & Size of search space & \# constraints \\ \hline \hline Checkmate - MILP & \(O(n^{2}+nm)\) & - & \(O(2^{n^{2}+nm})\) & \(O(n^{2}+nm)\) \\ \hline Moccasin - CP & \(O(Cn)\) & \(O(Cn)\) (domain size = \(O(n)\)) & \(O(2^{Cn}+n^{Cn})\) & \(O(Cm)\) \\ \hline \end{tabular}
\end{table}
Table 1: Overview of formulation complexities for Checkmate and our proposed method.
Our experimental results include Checkmate graphs (training graphs with simple interconnect topology), synthetically generated random layered graphs (that model inference graphs with complex topology), and real-world inference graphs in active use commercially. We demonstrate that Moccasin provides the same optimization result as Checkmate with up to an order-of-magnitude less solve time for mid-sized graphs (100-250 nodes). It also enables us to find solutions for larger graphs (up to 1000 nodes) and for more challenging peak local memory limits, cases where the MILP formulation of Checkmate failed to return any solution within the time limit.
Finally, rematerialization is seen as one component among a set of tools for managing memory use during execution of complex computation graphs arising in deep learning applications. Joint optimization of this with other methods, such as sequencing, scheduling for parallel compute, and paging to global memory, is viewed as a valuable topic for further research.
|
2307.10355 | Selection functions of strong lens finding neural networks | Convolution Neural Networks trained for the task of lens finding with similar
architecture and training data as is commonly found in the literature are
biased classifiers. An understanding of the selection function of lens finding
neural networks will be key to fully realising the potential of the large
samples of strong gravitational lens systems that will be found in upcoming
wide-field surveys. We use three training datasets, representative of those
used to train galaxy-galaxy and galaxy-quasar lens finding neural networks. The
networks preferentially select systems with larger Einstein radii and larger
sources with more concentrated source-light distributions. Increasing the
detection significance threshold to 12$\sigma$ from 8$\sigma$ results in 50 per
cent of the selected strong lens systems having Einstein radii
$\theta_\mathrm{E}$ $\ge$ 1.04 arcsec from $\theta_\mathrm{E}$ $\ge$ 0.879
arcsec, source radii $R_S$ $\ge$ 0.194 arcsec from $R_S$ $\ge$ 0.178 arcsec and
source S\'ersic indices $n_{\mathrm{Sc}}^{\mathrm{S}}$ $\ge$ 2.62 from
$n_{\mathrm{Sc}}^{\mathrm{S}}$ $\ge$ 2.55. The model trained to find lensed
quasars shows a stronger preference for higher lens ellipticities than those
trained to find lensed galaxies. The selection function is independent of the
slope of the power-law of the mass profiles, hence measurements of this
quantity will be unaffected. The lens finder selection function reinforces that
of the lensing cross-section, and thus we expect our findings to be a general
result for all galaxy-galaxy and galaxy-quasar lens finding neural networks. | A. Herle, C. M. O'Riordan, S. Vegetti | 2023-07-19T18:00:00Z | http://arxiv.org/abs/2307.10355v1 | # Selection functions of strong lens finding neural networks
###### Abstract
Convolution Neural Networks trained for the task of lens finding with similar architecture and training data as is commonly found in the literature are biased classifiers. An understanding of the selection function of lens finding neural networks will be key to fully realising the potential of the large samples of strong gravitational lens systems that will be found in upcoming wide-field surveys. We use three training datasets, representative of those used to train galaxy-galaxy and galaxy-quasar lens finding neural networks. The networks preferentially select systems with larger Einstein radii and larger sources with more concentrated source-light distributions. Increasing the detection significance threshold to \(12\sigma\) from \(8\sigma\) results in 50 per cent of the selected strong lens systems having Einstein radii \(\theta_{\rm E}\geq 1.04\) arcsec from \(\theta_{\rm E}\geq 0.879\) arcsec, source radii \(R_{S}\geq 0.194\) arcsec from \(R_{S}\geq 0.178\) arcsec and source Sersic indices \(n^{\rm S}_{\rm Sc}\geq 2.62\) from \(n^{\rm S}_{\rm Sc}\geq 2.55\). The model trained to find lensed quasars shows a stronger preference for higher lens ellipticities than those trained to find lensed galaxies. The selection function is independent of the slope of the power-law of the mass profiles, hence measurements of this quantity will be unaffected. The lens finder selection function reinforces that of the lensing cross-section, and thus we expect our findings to be a general result for all galaxy-galaxy and galaxy-quasar lens finding neural networks.
keywords: gravitational lensing: strong; methods: data analysis, statistical; techniques: image processing
## 1 Introduction
Strong gravitational lensing allows one to address a broad range of cosmological and astrophysical questions, from the nature of dark matter (e.g. Vegetti et al., 2018; Ritondale et al., 2019; Gilman et al., 2020; Hsueh et al., 2020) and galaxy evolution (e.g. Sonnenfeld et al., 2019; Mukherjee et al., 2021; Rizzo et al., 2021) to measuring the Hubble constant and other cosmological parameters (e.g. Birrer & Treu, 2021; Collett & Auger, 2014). The field is set to profit enormously from ongoing and upcoming large-sky surveys (with e.g. Euclid, the Square Kilometre Array and the Vera Rubin Observatory), as the number of known strong gravitational lens systems is expected to increase by many orders of magnitude (Collett, 2015; McKean et al., 2015). An understanding of the selection function of these surveys is required for a correct interpretation of the scientific results from strong gravitational lensing analyses of the larger sample.
There are two sides to the problem of selection bias in the newly discovered strong lens systems: the effect of the lensing cross-section, and that of the lens finding method used to identify the lenses. Sonnenfeld et al. (2023) focused on the former and showed that strong lenses are a biased subset of the true population with respect to the lens and source parameters. The effect of the lens finding methodology, however, is still not well understood.
The task of finding strong gravitational lens systems amongst the large number of objects imaged in future surveys is a challenging one, especially as strong lenses are, by their nature, very rare. Hence, automation in lens finding is expected to play an important role in the identification of these systems.
Neural networks are a class of machine learning models that are often applied in astronomy to search for rare objects. Lens finder neural networks are those trained to sift through large volumes of image data specifically to find strong gravitational lens candidates. Over the years, Convolution Neural Networks (CNNs) have emerged as the state of the art for lens finding. These models are essentially classifiers, trained to distinguish between _lens_ and _non-lens_ gravitational systems. For example, Lanusse et al. (2018) developed CMU DeepLens, which is a CNN trained to find strong gravitational lens systems on LSST data. The ResNet architecture implemented in their work became the standard for this task, and was adapted for the DESI Legacy survey by Huang et al. (2020) and Huang et al. (2021). CNNs have also been used to find strong lenses in the Kilo-Degree Survey (Petrillo et al., 2017, 2019, 2019), HST/ACS data of the COSMOS field (Pourrahmani et al., 2018), Pan-STARRS (Canameras et al., 2020), HSC (Canameras et al., 2021, 2023) and LOFAR (Rezaei et al., 2022).
In this paper, we present the first systematic study of the strong lens finder selection function. We develop lens finder neural networks in a similar fashion as commonly done in the lens finding literature and then characterise the selection effects that they introduce into the recovered sample of strong gravitational lenses. In particular, we are interested in identifying the characteristics of a strong gravitational lens system that drive the classification. Finally, we discuss the implications of these selection effects for different scientific applications of strong gravitational lensing.
The paper is organised as follows. In Section 2 we discuss the mathematical formalism used. In Section 3 we describe the image simulation process for the three datasets used in this work. In Section 4 we detail the machine learning training and interpretability framework. In Section 5 we present our results on the selection function. In Sections 6 and 7 we discuss our results and summarise our conclusions.
## 2 Selection bias formalism
From Sonnenfeld et al. (2023), the probability \(\mathrm{P_{SL}}\) of selecting a sample of strong lens systems for a given selection criterion S can be expressed as:
\[\mathrm{P_{SL}}(\Psi_{\rm l},\Psi_{\rm s}|S)\propto\mathrm{P_{l}}(\Psi_{\rm l})\,\mathrm{P_{s}}(\Psi_{\rm s})\,\mathrm{P_{sel}}(\Psi_{\rm l},\Psi_{\rm s}|S)\;. \tag{1}\]

Here, \(\Psi_{\rm l}\) and \(\Psi_{\rm s}\) are the sets of parameters describing the lens galaxy mass and the background source light distributions, respectively. \(\mathrm{P_{l}}\) and \(\mathrm{P_{s}}\) are the corresponding probability density distributions. \(\mathrm{P_{sel}}\) encapsulates the probability that a specific combination of lens and source produces a strong lens system and that this is found in the survey. It can be further separated into two components:

\[\mathrm{P_{sel}}(\Psi_{\rm l},\Psi_{\rm s}|S)=\mathrm{P_{det}}(\Psi_{\rm l},\Psi_{\rm s})\,\mathrm{P_{find}}(\Psi_{\rm l},\Psi_{\rm s}|S)\,, \tag{2}\]

where \(\mathrm{P_{det}}\) is the probability that multiple images of the source form as clearly distinct features in the survey image, and \(\mathrm{P_{find}}\) is the probability that this image is correctly classified as a strong lens system. The latter depends on the identification procedure adopted. By focusing on the first term, Sonnenfeld (2022) and Sonnenfeld et al. (2023) have quantified how strongly the properties (\(\Psi_{\rm l}\) and \(\Psi_{\rm s}\)) of the lenses and background sources are biased with respect to the general population of galaxies. The goal of this paper is to quantify \(\mathrm{P_{find}}\) for the case in which a CNN is used to identify lens systems in a given survey.
## 3 Data
The Euclid mission is expected to find about \(10^{5}\) strong gravitational lens systems (Collett, 2015). In this work, we focus on neural networks trained only with single-band data because most of the gravitational lens systems that will be found in the Euclid Wide Survey will be observed using the Visual imager (VIS) instrument.
VIS is a broadband optical instrument with a resolution of 0.16 arcsec, which is about three times better than that of the three infrared bands of the Near Infrared Spectrometer and Photometer (NISP) instrument (O'Riordan et al., 2023; Euclid Collaboration et al., 2022). Since the Einstein radius of the strong gravitational lenses expected to be found by Euclid peaks around 0.5 arcsec (Collett, 2015), the arcs and lens light are expected to be blended together within the NISP instrument. Hence, these features will be much better resolved by VIS, at the cost of losing colour information. The absence of multi-band data can be crucial in informing the decision rationale that CNNs arrive at during training, and may have a significant effect on the type of strong gravitational lens systems that will be identified. Moreover, Petrillo et al. (2019) found that the best performing network in terms of purity and completeness was the one trained on single-band data. Similarly, Lanusse et al. (2018) showed that CMU DeepLens is a successful tool for finding strong gravitational lens systems, and that even without colour information, it is able to learn enough about the lensed-arc morphology to solve the classification problem.
We make three datasets, \(\mathcal{D_{A}}\), \(\mathcal{D_{B}}\) and \(\mathcal{D_{C}}\), with \(10^{6}\) images for training and \(2\times 10^{5}\) for testing each. The samples are split evenly between two distinct classes, _lens_ and _non-lens_, which contain all and none of the lens systems, respectively.
Fig. 1 and 2 show examples of the two classes from \(\mathcal{D_{B}}\) and \(\mathcal{D_{C}}\), and Table 1 summarises the simulation parameter distributions for all datasets.
We refer the reader to the following sections for more details on the properties of each dataset. Briefly, the datasets \(\mathcal{D_{A}}\) and \(\mathcal{D_{B}}\) have extended sources, while the sources in \(\mathcal{D_{C}}\) are unresolved. This is done to understand how the selection function, \(\mathrm{P_{find}}\), differs between galaxy-galaxy and galaxy-quasar lens systems. The lens galaxies in \(\mathcal{D_{A}}\) have a simple analytical model for the lens-light distribution, while those in \(\mathcal{D_{B}}\) and \(\mathcal{D_{C}}\) are characterised by a higher level of complexity. This is done in order to quantify how the lens light model complexity affects the neural network selection function.
We add complexity to the lens rather than the source light because distinguishing between these components becomes non-trivial when the lens contains features resembling lensed source emission (e.g. arcs). The lensed emission is typically sufficiently distinct from analytic Sersic components because of its distorted morphology, such that introducing structure in the source will only make the task of lens finding easier. Moreover, using analytical source models allows us to more easily quantify how their parameters affect the network selection function for different choices of the lens light distribution.
### Source light
We use Sérsic profiles (Sérsic, 1963; Ciotti and Bertin, 1999) for the source-light model in the case of \(\mathcal{D_{A}}\) and \(\mathcal{D_{B}}\). A point-like source model is used for \(\mathcal{D_{C}}\). We refer the reader to Table 1 for more detailed information.
### Lens light
The lens light distribution in \(\mathcal{D_{A}}\) is created using a Sérsic profile with an index and effective radius sampled uniformly between 1.0 and 4.0 and between 0.05 and 0.5 arcsec, respectively. \(\mathcal{D_{B}}\) and \(\mathcal{D_{C}}\) use images generated with a Generative Adversarial Network (GAN, see Holzschuh et al., 2022, for more details). The GAN was trained to generate images that imitate those created from the SKIRT code on the IllustrisTNG simulation (Springel et al., 2018; Rodriguez-Gomez et al., 2019). The resulting datasets consist, therefore, of realistic early- (i.e. Sérsic-like) and late-type galaxy images, several of which contain complex features like star-forming clumps, spiral arms and satellite galaxies (see Fig. 1 for examples of lens systems taken from \(\mathcal{D_{B}}\)). Structures of this kind can easily be confused for lensed-source emission when dealing with single-band data, making the problem of lens-finding more challenging. In this respect, \(\mathcal{D_{A}}\) represents the simplest formulation of the problem for the network to solve.
The lens magnitudes in \(\mathcal{D_{A}}\) and \(\mathcal{D_{B}}\) are sampled from a uniform distribution, and the source magnitudes are chosen such that they are dimmer than the lens. In \(\mathcal{D_{C}}\), we instead use fluxes defined relative to the source. In practice, the lens flux is set such that it is 5 to 100 times brighter (sampled uniformly in this range) than the source light. We chose these values based on the distribution of magnifications of the lens mass models (see Section 3.4 for more details) in this dataset, which peaks at \(\mu\approx 3\), thus ensuring that the number of systems where the lens light is completely absent in the image is negligible. These values are also chosen so as to keep the flux per pixel area the same, as the source light is concentrated into a point.
Figure 1: A representative sample from \(\mathcal{D}_{\mathcal{B}}\), ranked in increasing order of SNR from the top left to the bottom right in each panel. Note the range of complexities of the lens light models in the sample. In some cases, distinguishing between lens light and lensed source emission is trivial, but in cases where the lens light model contains complex features like spiral arms, this is not the case. Examples of lenses are shown on the left, and examples of non-lenses are shown on the right. The simulation parameters are sampled as shown in Table 1.
Figure 2: A representative sample from \(\mathcal{D}_{\mathcal{C}}\), ranked in increasing order of SNR from the top left to the bottom right in each panel. The images are simulated as described in Section 4, with the inclusion of contaminant galaxies and a variable PSF FWHM. Examples of lenses are shown on the left, and examples of non-lenses are shown on the right. The simulation parameters are sampled as shown in Table 1.
### Light contaminants
We do not include any light contaminants in \(\mathcal{D}_{\mathcal{A}}\) and \(\mathcal{D}_{\mathcal{B}}\). On the other hand, \(\mathcal{D}_{\mathcal{C}}\) does contain light contribution from nearby field galaxies. These are sampled from the GAN dataset and are placed randomly on the image, while ensuring that they lie outside the Einstein radius of the system and at least a set minimum distance apart from each other. These contaminants are 1 to 100 times brighter than the source light and their number in each image is sampled uniformly between 0 and 4. We sample their redshift between 0.2 and 5.0 from a probability density distribution based on the comoving volume at these redshifts. Examples of gravitational lens systems from \(\mathcal{D}_{\mathcal{C}}\) are shown in Fig. 2.
### Lens mass
For all three datasets, an elliptical power law is assumed for the lens mass model, with the slope, \(\gamma\), ranging uniformly between 1.8 and 2.2 (Koopmans et al., 2006). In the case where the lens light is a GAN image (i.e. \(\mathcal{D}_{\mathcal{B}}\) and \(\mathcal{D}_{\mathcal{C}}\)), we use the image moments of the latter to align the position angle and axis-ratio of the mass profile with those of the lens light, thus ensuring that the light and mass distributions are consistent with each other. Moreover, the Einstein radius is scaled proportionally to the total flux of the selected GAN image. Thus, \(\mathcal{D}_{\mathcal{B}}\) and \(\mathcal{D}_{\mathcal{C}}\) have the same simulation parameters as \(\mathcal{D}_{\mathcal{A}}\), except for the Einstein radius and lens axis-ratio, for which the distributions are shown in the left and centre panels of Fig. 3, respectively.
External shear with no preferred direction and with strength sampled from a uniform distribution is applied to all datasets. Additionally, for \(\mathcal{D}_{\mathcal{C}}\), we further ensure that the numbers of doubly- and quadruply-imaged systems are equal within the lens class. This is done by first calculating the area inside the inner caustic for the specific axis-ratio chosen. We then find the radius of a circle with twice this area and sample the source position uniformly from within this circle.
### Noise
In order to add noise to the data, we follow the definition of signal-to-noise ratio (SNR) by O'Riordan et al. (2019):
\[S_{\mathrm{T}}=\frac{\sum_{i}^{N}m_{i}\,\mathrm{d}_{i}}{\sigma_{d}\sqrt{\sum_{i}^{N}m_{i}}}\,, \tag{3}\]
where \(\sigma_{d}\) is the standard deviation of the sky noise, \(\mathrm{d}_{i}\) is the pixel value at index \(i\), and \(m_{i}\) is the masking variable defined such that \(m_{i}=1\) for pixels corresponding to lensed source-light and 0 otherwise. Pixel values that lie within twice the Einstein radius of the lens subtracted image of the system are considered to be source-light for \(\mathcal{D}_{\mathcal{A}}\) and \(\mathcal{D}_{\mathcal{B}}\), and in the case of \(\mathcal{D}_{\mathcal{C}}\), this is considered to be all pixels with a value greater than 1.5 times the standard deviation of the lens subtracted image.
The SNR of each image in \(\mathcal{D}_{\mathcal{A}}\) and \(\mathcal{D}_{\mathcal{B}}\) is set by the specific combination of source and lens magnitudes, and the resulting distribution is shown in the right panel of Fig. 3. In the case of \(\mathcal{D}_{\mathcal{C}}\), we first sample a value for the SNR from a uniform distribution (see Table 1), then add uncorrelated Gaussian noise to the images with a standard deviation calculated from Eqn. (3).
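In code, Eqn. (3) and its inversion (used to set the noise level for \(\mathcal{D}_{\mathcal{C}}\)) are one-liners; a minimal sketch, assuming `image` and the boolean `mask` are NumPy arrays of equal shape:

```python
import numpy as np

def integrated_snr(image, mask, sigma_d):
    """Integrated SNR of Eqn (3): image holds the pixel values d_i, mask the
    source-light indicator m_i, sigma_d the sky-noise standard deviation."""
    m = mask.astype(float)
    return np.sum(m * image) / (sigma_d * np.sqrt(np.sum(m)))

def sigma_for_target_snr(image, mask, target_snr):
    """Invert Eqn (3) for the Gaussian noise level that yields a requested
    SNR, as done when adding noise to the D_C images."""
    m = mask.astype(float)
    return np.sum(m * image) / (target_snr * np.sqrt(np.sum(m)))
```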
### Point-Spread Function
We use a circular Gaussian function for the Point-Spread Function (PSF). The Full Width at Half Maximum (FWHM) has a value of 0.16 arcsec for \(\mathcal{D}_{\mathcal{A}}\) and \(\mathcal{D}_{\mathcal{B}}\). For \(\mathcal{D}_{\mathcal{C}}\), we employ a variable FWHM, which is sampled uniformly between 0.16 and 0.3 arcsec.
\begin{table}
\begin{tabular}{l c c c} \hline Parameter & \(\mathcal{D}_{\mathcal{A}}\) & \(\mathcal{D}_{\mathcal{B}}\) & \(\mathcal{D}_{\mathcal{C}}\) \\ \hline Source Light & Sérsic & Sérsic & Point-like \\ Source Radius, \(R_{\mathrm{S}}\) (arcsec) & \(\mathcal{U}(0.05,0.3)\) & \(\mathcal{U}(0.05,0.3)\) & - \\ Source Sérsic Index, \(n_{\mathrm{Sc}}^{\mathrm{S}}\) & \(\mathcal{U}(1,4)\) & \(\mathcal{U}(1,4)\) & - \\ Source Axis-Ratio, \(q_{\mathrm{S}}\) & \(\mathcal{U}(0.5,1.0)\) & \(\mathcal{U}(0.5,1.0)\) & - \\ Source Redshift & 2.0 & 2.0 & 2.0 \\ Source Apparent Magnitude, \(M_{\mathrm{VIS}}^{\mathrm{S}}\) & \(\mathcal{U}(M_{\mathrm{VIS}}^{\mathrm{L}},24.0)\) & \(\mathcal{U}(M_{\mathrm{VIS}}^{\mathrm{L}},24.0)\) & - \\ Source Flux & - & - & 1.0 \\ Lens light & Sérsic & GAN & GAN \\ Lens mass axis-ratio, \(q\) & \(\mathcal{U}(0.5,1.0)\) & - & - \\ Lens power-law slope, \(\gamma\) & \(\mathcal{U}(1.8,2.2)\) & \(\mathcal{U}(1.8,2.2)\) & \(\mathcal{U}(1.8,2.2)\) \\ Einstein Radius, \(\theta_{\mathrm{E}}\) (arcsec) & \(\mathcal{U}(0.5,2.0)\) & - & - \\ Lens redshift & 0.8 & 0.8 & 0.8 \\ Lens Sérsic Index, \(n_{\mathrm{Sc}}^{\mathrm{L}}\) & \(\mathcal{U}(1,4)\) & - & - \\ Lens Apparent Magnitude, \(M_{\mathrm{VIS}}^{\mathrm{L}}\) & \(\mathcal{U}(18.0,22.0)\) & \(\mathcal{U}(18.0,22.0)\) & - \\ Lens Flux, \(\mathrm{F}_{\mathrm{L}}\) & - & - & \(\mathcal{U}(5.0,100.0)\) \\ Shear Strength & \(\mathcal{U}(0.0,0.1)\) & \(\mathcal{U}(0.0,0.1)\) & \(\mathcal{U}(0.0,0.1)\) \\ Contaminants & No & No & Yes \\ Contaminant Flux, \(F_{\mathrm{C}}\) & - & - & \(\mathcal{U}(1.0,100.0)\) \\ Pixel Size (arcsec) & 0.1 & 0.1 & 0.1 \\ Field-of-View (arcsec) & 10 & 10 & 10 \\ PSF\({}_{\mathrm{fwhm}}\) (arcsec) & 0.16 & 0.16 & \(\mathcal{U}(0.16,0.3)\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameter distributions used to create all datasets. From top to bottom: the parameters describing the source light distribution, the mass and light properties of the lens, distribution for the light contaminants and the observational set up. The superscripts S and L denote source and lens properties respectively.
A FWHM of 0.16 arcsec corresponds to that of the VIS instrument aboard the Euclid telescope, and 0.3 arcsec represents one that is about two times worse than this.
## 4 Methods
### Training phase
During the training phase, the datasets were split into batches of \(10^{3}\) images each. ResNet18 CNNs were trained for 800 epochs each on \(\mathcal{D_{A}}\) and \(\mathcal{D_{B}}\). For \(\mathcal{D_{C}}\), we also included dynamic learning rate scheduling, wherein the learning rate is multiplied by a factor of \(10^{-0.5}\) if the test loss remains stagnant for 20 epochs. These values were chosen after extensive hyper-parameter tuning. The convergence criterion is set to be three consecutive drops in learning rate without a change in the test loss. With this setup, the network is trained for 260 epochs.
For all datasets, several data augmentation techniques were used during training, namely: random cropping to \(80\times 80\) pixels, random vertical and horizontal flips, rotation and erasing. The learning rate was set to 0.001 and the Adam optimizer was used to minimize the binary cross entropy loss. The images were normalised such that each pixel value is between 0 and 1 before being passed to the neural network.
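For concreteness, here is a minimal PyTorch sketch of this setup; the first convolution is replaced by a single-channel version for the single-band images and the classifier head outputs one logit, whose sigmoid is the lens-class probability \(p\) used in Section 4.2. The rotation range and erasing parameters are our assumptions, since they are not specified above.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Single-band images: adapt ResNet18 to one input channel and one logit.
net = models.resnet18()
net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
net.fc = nn.Linear(net.fc.in_features, 1)

# Per-image augmentations, applied in the Dataset before batching.
augment = transforms.Compose([
    transforms.RandomCrop(80),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=90),   # angle range assumed
    transforms.RandomErasing(),              # default parameters assumed
])

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=10 ** -0.5, patience=20)  # D_C schedule: step(test_loss)
loss_fn = nn.BCEWithLogitsLoss()  # binary cross entropy on the logit

def train_step(images, labels):
    """images: (B, 1, 80, 80) tensors with pixel values in [0, 1]."""
    net.train()
    optimizer.zero_grad()
    logits = net(images).squeeze(1)
    loss = loss_fn(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```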
Table 2 shows the differences in the final testing accuracy and loss for each of the three networks, as well as the True Positive Rate (TPR), False Positive Rate (FPR) and Area Under the Receiver Operating Characteristic (AUROC) curve for each network with the corresponding dataset version. All three networks achieve very high classification accuracy, and have converged. This is also reflected in the AUROC values: the closer the AUROC is to 1, the closer the network is to a perfect classifier; thus the AUROC curve is a measure of the quality of the classifier for the particular dataset it is trained on.
The small decrease in network performance between \(\mathcal{D_{A}}\) and \(\mathcal{D_{B}}\) is a consequence of the fact that the network is exposed to a larger fraction of lens-light models and negative examples (galaxies in the _non-lens_ class) that have complex structure which can be confused for lensed arcs. We note that the higher test accuracy of the network trained on \(\mathcal{D_{A}}\) does not imply superior performance; rather, it indicates the relative simplicity of the lens light model.
### Detection significance of a _lens_ image
We use the inverse error function to convert from probabilities to detection significance:
\[C=\sqrt{2}\ \mathrm{erf}^{-1}(p)\, \tag{4}\]
where \(C\) is the network lens detection significance and \(p\) is the probability that an image belongs to the _lens_ class. More details can be found in Appendix A1. In a lens finding campaign, a detection significance threshold is chosen such that all classifications made by the network above this \(\sigma\)-cut constitute a strong lens system. The distributions of the outputs for the three different networks on their respective datasets, after converting to significance, are shown in the right panel of Fig. 4. Note that the distributions are roughly Gaussian and peak at different values for each dataset.
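In code, Eqn. (4) is a direct call to the inverse error function; a sketch using SciPy, with clipping added because \(\mathrm{erf}^{-1}(1)\) diverges:

```python
import numpy as np
from scipy.special import erfinv

def detection_significance(p):
    """Eqn (4): convert the lens-class probability p into a detection
    significance C = sqrt(2) * erfinv(p)."""
    p = np.clip(np.asarray(p, dtype=float), 0.0, 1.0 - 1e-16)
    return np.sqrt(2.0) * erfinv(p)

print(detection_significance([0.5, 0.9973]))  # approx. [0.674, 3.0]
```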
### Kullback-Leibler divergence
We would like to quantify the difference between the ground truth population of lenses and sources (\(\Psi_{I}\), \(\Psi_{S}\)) and the population recovered by the lens finder. To do this, we employ the Kullback-Leibler (KL) divergence, which is defined as:
\[D_{KL}(p||q)=-\int\ p(x)\ln\left[\frac{q(x)}{p(x)}\right]dx. \tag{5}\]
Here, \(p(x)\) is the probability distribution function of the true parent distribution (all lenses in the testing dataset) for the parameter \(x\), and \(q(x)\) is that of the selected sample at a particular detection
Figure 3: From left to right: distribution of lens Einstein radius, axis-ratio and SNR of the images for the datasets \(\mathcal{D_{B}}\) (orange histograms) and \(\mathcal{D_{C}}\) (green histograms). The bins are in log-space for the panel on the right.
significance threshold. We estimate these distribution functions at a specific \(\sigma\)-cut using Kernel Density Estimation (KDE) with a Gaussian kernel. We use Scott's rule (Scott, 2015) to estimate a reasonable bandwidth for the KDE.
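A minimal `numpy`/`scipy` sketch of this estimate is given below; the grid size and the small regularising constant are numerical choices made here for illustration, not values from the paper. `gaussian_kde` uses Scott's rule for the bandwidth by default.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kl_divergence(parent, selected, n_grid=512):
    """Estimate D_KL(p || q) of Eq. (5) for a single parameter.

    `parent` holds the parameter for all lenses in the testing set,
    `selected` the lenses above a given sigma-cut.
    """
    grid = np.linspace(min(parent.min(), selected.min()),
                       max(parent.max(), selected.max()), n_grid)
    p = gaussian_kde(parent)(grid) + 1e-12   # regularise to avoid log(0)
    q = gaussian_kde(selected)(grid) + 1e-12
    dx = grid[1] - grid[0]
    p, q = p / (p.sum() * dx), q / (q.sum() * dx)  # renormalise on the grid
    return float(np.sum(p * np.log(p / q)) * dx)
```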
The purpose of using the KL-divergence as a metric is two-fold: (i) from an interpretability point of view, the parameters that show an increase in the KL-divergence at high detection significance thresholds are clearly important for the networks to classify a system as a _lens_ or a _non-lens_, lending insight into what the networks have learnt in order to solve the lens finding problem. (ii) For physical parameters that are measured in a strong lensing survey, the KL-divergence indicates by how much the parameters inferred will deviate from the parent distribution for a given detection threshold.
It must be noted here that we have neglected the correlations between the different parameters when calculating the KL-divergence, which is akin to marginalising over all parameters except the one being considered in the calculation. This means that the variation of the KL-divergence for a specific parameter, as the detection significance threshold becomes stricter, indicates how much this parameter would differ from the true sample if one were only interested in measuring that specific parameter in a given survey. A more thorough analysis, which is beyond the scope of this work, would require the covariance of the parameters to be taken into account.
We also note that as the detection significance threshold increases, the number of lens images drops exponentially, as shown in the left panel of Fig. 4. Thus, there is an increase in the KL-divergence due to sampling noise, which needs to be accounted for in order to disentangle it from the increase in the KL-divergence that results from the selected sample differing from the truth. To this end, we sample the true distribution with different numbers of total samples (N) corresponding to each \(\sigma\)-cut, and then calculate the KL-divergence of this sample relative to the entire true distribution of \(10^{5}\) systems. This is repeated several thousand times to get enough realisations to account for sampling variance.
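This sampling-noise baseline amounts to a simple bootstrap, sketched below with the `kl_divergence` helper from the previous snippet; the number of trials is illustrative.

```python
import numpy as np

def sampling_noise_kl(parent, n_selected, n_trials=1000, seed=None):
    """KL divergence expected from sampling noise alone: draw n_selected
    points from the parent sample many times and compare each draw with
    the full parent distribution."""
    rng = np.random.default_rng(seed)
    kls = [kl_divergence(parent,
                         rng.choice(parent, size=n_selected, replace=False))
           for _ in range(n_trials)]
    return np.mean(kls), np.std(kls)
```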
model and, therefore, the network does not need to learn complex concepts to distinguish between lensed and un-lensed emission. The more complex lens-light distributions in \(\mathcal{D}_{\mathcal{B}}\) cause the network to be more sensitive not only to the lens axis-ratio and the source radius (we see a steep rise in KL-divergence already at \(>7\sigma\) for \(R_{\rm S}\)), but also to the Sérsic index of the source. In particular, it is more likely to select sources with larger radii (\(R_{\rm S}\)), Sérsic indices (\(n_{\rm Sc}^{\rm S}\)) and axis-ratios (\(q_{\rm S}\)). Specifically, 50 per cent of the selected sample have \(R_{\rm S}\geq 0.194\) arcsec, \(n_{\rm Sc}^{\rm S}\geq 2.62\) and \(q_{\rm S}\geq 0.758\) at a 12-\(\sigma\) cut, as opposed to the ground truth values of \(R_{\rm S}\geq 0.175\) arcsec, \(n_{\rm Sc}^{\rm S}\geq 2.51\) and \(q_{\rm S}\geq 0.751\). All these properties lead to larger and more concentrated source light distributions, making the lensed arcs more easily distinguishable from the lens light.
Additionally, all three networks tend to identify as lens systems those with a slightly more elliptical lens mass distribution at very strict detection significance thresholds (i.e. \(>12\sigma\) for the networks trained on \(\mathcal{D}_{\mathcal{A}}\) and \(\mathcal{D}_{\mathcal{B}}\)). An increase in the detection significance cut from 8 to 12\(\sigma\) causes the 50th percentile of the selected lens systems to move from \(q\leq 0.750\) to \(q\leq 0.736\) for \(\mathcal{D}_{\mathcal{A}}\) and from \(q\leq 0.723\) to \(q\leq 0.713\) for \(\mathcal{D}_{\mathcal{B}}\) (the starting values coincide with the associated true values). The KL-divergence plot for \(\mathcal{D}_{\mathcal{C}}\) (Fig. 10) shows more sensitivity (a steep rise in KL-divergence at \(C>6\sigma\)) to the lens axis-ratio compared to the extended-source lens finders. At a 7-\(\sigma\) cut, half of the lens systems have lens axis-ratios \(q\leq 0.670\) as opposed to the ground truth value of \(q\leq 0.723\). Moreover, this network is more efficient at selecting images with lower PSF\({}_{\rm FWHM}\) values. At lower PSF\({}_{\rm FWHM}\), it becomes easier to distinguish between the lensed point-source and the lens/contaminant galaxies. However, the inclusion of stars as contaminants will make this more difficult, and the preference for lower PSF\({}_{\rm FWHM}\) may be negligible when stars are included as contaminants.
For this dataset, we also calculate the KL-divergence as a function of \(\sigma\)-cut for the flux of the lens \(F_{\rm L}\) and the contaminants, \(F_{\rm C}\), which are proxies for how bright these components are with respect to the source. We find that the KL-divergence is negligible at all values of \(\sigma\)-cut. This points to the fact that the network has learnt to distinguish between the lens/contaminant light and source light very effectively.
Interestingly, the increase in KL-divergence for the lens power-law slope for all three networks is consistent with the sampling noise, leading us to conclude that there is no preference for mass models of a particular slope.
## 6 Discussion
Our main findings indicate that the sample of strong gravitational lens systems identified in wide-sky surveys with CNNs are biased with respect to both lens and source properties. This could have important implications for the many scientific applications of strong gravitational lensing. In this section, we discuss the implications of these biases.
The CNN selection function, coupled with the lensing cross-section, will lead to samples of deflectors which are biased towards higher masses. This will effectively limit our prospects of studying the properties of lens galaxies with strong lensing to only the most massive objects. Essentially, the blurring by the PSF produces a lower limit on the Einstein radii of lens systems that can be found in a survey. This effect will be increased by the selection function of the neural networks.
Interestingly, we find no selection bias in favour of specific values of the slope, \(\gamma\), of the lens mass density profiles. Assuming that this result is confirmed with other CNNs, we can conclude that the large samples of new lens systems will allow one to measure this parameter and its evolution with redshift. This will provide a valuable probe of galaxy evolution and feedback models (e.g. Koopmans et al., 2006; Mukherjee et al., 2021).
Strong gravitational lensing acts as a cosmic telescope, allowing the study of high-redshift galaxies at higher physical resolution and SNR than usually possible otherwise. We have shown that networks
trained on images with complex and more realistic lens light models prefer larger source radii and Sérsic indices. Sources with these properties are bigger and more concentrated, and thus produce arcs which are easier to distinguish from the lens light. A similar selection bias also occurs from the strong lensing cross-section (Serjeant, 2012; Hezaveh et al., 2012), and the neural networks will reinforce this effect. Hence, attempts to interpret the properties of lensed galaxies in the context of galaxy evolution models (as for example in Oldham et al., 2017, 2017; Stacey et al., 2021) from the large sample of discovered lenses will need to additionally account for the effect of the neural network selection function.
Time-delay cosmography can be used as an additional probe of the Hubble constant (Refsdal, 1964; Birrer et al., 2020; Birrer & Treu, 2021; Shajib et al., 2023). However, this requires the breaking of the mass-sheet degeneracy (Falco et al., 1985) by incorporating independent mass measurements from stellar dynamics. Velocity dispersion measurement uncertainties limit the overall precision on \(H_{0}\) to \(\approx 10\) per cent (Kochanek, 2020; Schneider & Sluse, 2013). These uncertainties are too large to address the Hubble tension in a meaningful way. To overcome this issue, Birrer & Treu (2021) proposed a joint analysis using mass models and kinematic properties of galaxy-galaxy lenses as a prior on the mass profiles for the analysis of the
Figure 5: Distributions of the parent and selected sample for detection significance thresholds of 3, 7 and 10\(\sigma\) for the network trained on \(\mathcal{D}_{\mathcal{A}}\). \(\theta_{\rm E}\) is the Einstein radius, \(q\) is the axis-ratio of the lens mass, \(\gamma\) is the power-law slope, \(M_{\rm VIS}^{\rm L}\) and \(M_{\rm VIS}^{\rm S}\) are the apparent magnitudes of the lens and source respectively, \(R_{\rm S}\) is the source radius, \(n_{\rm Sc}^{\rm S}\) is the Sérsic index of the source, \(q_{\rm S}\) is the axis-ratio of the source and SNR is the signal-to-noise ratio.
time-delayed galaxy-quasar lenses. This method inherently assumes that the two types of strong gravitational lens systems are drawn from the same deflector parent population (see also Sonnenfeld, 2021). However, Sonnenfeld et al. (2023) pointed out that in the case of quadruply-imaged quasars, the lens galaxies tend to have larger ellipticities and halo masses for a given stellar mass. Similarly, we find that the neural network trained to find lensed quasars has a preference for higher-ellipticity lens profiles, as these have wider caustics and thus a higher chance of producing four-image systems. More importantly, the networks trained to find lensed extended sources show a much weaker preference for higher ellipticities. Hence, the approach proposed by Birrer & Treu (2021, see also Gomer et al. 2022; Birrer et al. 2020) is more likely to introduce additional systematic errors rather than additional constraints. These can be accounted for with an understanding of the selection function of the lens finder used, which can be obtained with an analysis of the type done in this work.
### Limitations of our work
Our image simulations already account for several characteristics of real data that are often ignored in the lens finding literature, like complexity in the lens light models, field galaxies and a variable
Figure 6: Distributions of the parent and selected sample for detection significance thresholds of \(3,7\) and \(10\sigma\) for the network trained on \(\mathcal{D}_{\mathcal{B}}\). \(\theta_{\rm E}\) is the Einstein radius, \(q\) is the axis-ratio of the lens mass, \(\gamma\) is the power-law slope, \(M_{\rm VIS}^{\rm L}\) and \(M_{\rm VIS}^{\rm S}\) are the apparent magnitudes of the lens and source respectively, \(R_{\rm S}\) is the source radius, \(n_{\rm Sc}^{\rm S}\) is the Sérsic index of the source, \(q_{\rm S}\) is the axis-ratio of the source and SNR is the signal-to-noise ratio.
PSF FWHM. However, the training data used in this work could be further improved by the inclusion of complexities that: (i) make the data more diverse (e.g. an elliptical PSF model with varying ellipticities, stars and non-lensed quasars as contaminants, complex source light models, cosmic rays, CCD artefacts) and (ii) make the task of separating lensed emission from source light more difficult (for example ring galaxies). We have seen that complexities like the former require the network to become adept at ignoring contaminants, as with the network trained on \(\mathcal{D}_{\mathcal{C}}\). For the latter, we expect a more complex selection function when trained on more realistic data (as with the neural network trained on \(\mathcal{D}_{\mathcal{B}}\)).
We have confined this study to the case of single-band data; thus it is unclear how training with multi-band data would influence the selection function of the neural networks. It is possible that in this scenario, colour-related selection biases may be introduced in addition to the ones listed in this paper. Moreover, we have considered only the case of the ResNet18 architecture, as it is often the best-performing network for lens finding (Canameras et al., 2023). How the selection function changes for different CNN architectures and for different ML models is an interesting question which is beyond the scope of this work.
Note that our datasets are split evenly between the two classes. A real survey will have far more non-lens than lens images. This is another example of the class imbalance problem in the machine learning literature. A real sample would have at most 1 lens system for every 100 objects, and a neural network trained on a dataset with a class ratio of 1:99 could achieve a 99 per cent accuracy by simply classifying every object as a non-lens. In order to alleviate this issue, the machine learning community has developed many techniques that centre around the theme of artificially making the class ratio closer to 1:1, which can be achieved by over-sampling the minority class or under-sampling the majority class during training. Since we use simulated data, we can circumvent this issue by creating a dataset with an equal number of lens and non-lens systems. Using a realistic class ratio could alter the notions that the networks learn during training. This may further exacerbate the selection effects that we outline in this paper, but we do not study the effect of class imbalance here.
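For completeness, the over-sampling strategy mentioned above can be sketched in PyTorch as follows; `dataset` and `dataset_labels` are hypothetical placeholders, and this rebalancing is not used in this work, which instead simulates balanced data.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

labels = torch.as_tensor(dataset_labels)              # 0 = non-lens, 1 = lens (e.g. 1:99)
class_counts = torch.bincount(labels)
sample_weights = 1.0 / class_counts[labels].float()   # the rare class is drawn more often
sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(labels), replacement=True)
loader = DataLoader(dataset, batch_size=1000, sampler=sampler)
# Batches are now roughly balanced, removing the trivial all-non-lens solution.
```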
Figure 7: Distributions of the parent and selected sample for detection significance thresholds of 3, 4, 5 and 6\(\sigma\) for the network trained on \(\mathcal{D}_{\mathcal{C}}\). \(\theta_{\text{E}}\) is the Einstein radius, \(q\) is the axis-ratio of the lens mass, \(\gamma\) is the power-law slope, \(F_{\text{L}}\) and \(F_{\text{C}}\) are the lens and contaminant flux respectively and SNR is the signal-to-noise ratio.
Figure 8: The increase in the Kullback-Leibler Divergence (\(D_{KL}\)) as the detection significance threshold is increased for the network trained on \(\mathcal{D}_{\mathcal{A}}\). The blue lines show the \(D_{KL}\) that is due to sampling noise, with the increasingly shaded regions depicting 1, 2 and 3\(\sigma\) contours.
Figure 9: The increase in the Kullback-Leibler Divergence (\(D_{KL}\)) as the detection significance threshold is increased for the network trained on \(\mathcal{D}_{\mathcal{B}}\). The blue lines show the \(D_{KL}\) that is due to sampling noise, with the increasingly shaded regions depicting 1, 2 and 3\(\sigma\) contours.
## 7 Conclusions
In this paper, we quantified the selection function of the machine learning models that will likely be used to find the strong lens systems in future surveys. We focused our efforts on the ResNet18 architecture trained on three different datasets, and found that the classification task of lens finding leads to a selection bias on the parameters of the identified sample of strong lenses.
This results from the fact that lens finding neural networks are more efficient at finding lenses with certain properties, making them more likely to be above any detection significance threshold chosen for a given survey. Moreover, samples of lenses found by neural networks in the first pass are used as training data for the next iteration of lens finding networks. This might have the effect of further exacerbating the selection bias in each iteration.
We have shown that neural networks are most sensitive to the Einstein radius of the system, as they preferentially select strong lens systems with larger values of \(\theta_{\rm E}\). In addition, the networks are biased towards bigger sources with more concentrated light distributions. Galaxy-quasar lens finding neural networks also show a stronger preference for more elliptical lens mass distributions than those trained to find galaxy-galaxy lens systems. We also find that the networks show no preference for any values of the lens power-law slope.
Lens finding neural networks reinforce the biases introduced by the lensing cross-section. Our results clearly show that an analysis of the selection effects of lens finding neural networks is a key additional step that needs to be incorporated into any systematic attempt to find strong gravitational lenses in upcoming surveys.
## Acknowledgements
AH thanks the Max Planck Computing and Data Facility (MPCDF) for computational resources and support. AH also thanks Daniel Grun, Matteo Guardiani and Philipp Frank for useful insights and discussions. SV thanks the Max Planck Society for support through a Max Planck Lise Meitner Group, and acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (LEDA: grant agreement No 758853).
## Data Availability
The data used in this paper are available from the corresponding author on request.
|
2303.03806 | Operator learning of RANS equations: a Graph Neural Network closure
model | The spread of machine learning (ML) techniques in combination with the
availability of high-quality experimental and numerical data boosted in recent
years numerous applications in fluid mechanics. Among those, examples of
closure models for turbulent flows or data-assimilation based on neural
networks (NN) are already numerous. However, it is well known that these
techniques are prone to over-fitting and necessitate an exceedingly large amount
of data, unless physical constraints are enforced. We address those limitations by
applying graph neural networks (GNN). This architecture is characterized by a
net of nodes that can be easily adapted to unstructured meshes. Moreover, it is
known that GNNs can show remarkable generalization capabilities as compared to
standard network models. Here, we demonstrate the use of GNNs by interfacing
them with a finite element (FEM) solver for the supervised learning of
Reynolds-averaged Navier--Stokes equations. We consider as a test-case the flow
past bluff bodies; we train the model using the wake past a cylinder, at
different Reynolds numbers 40<Re<150 and resolved on different grids. The GNN
model successfully predicts the Reynolds stress also in unseen cases
characterized by different geometric configurations and Reynolds numbers.
Interestingly, a small data-set is used for achieving these performances, thus
suggesting the application of GNNs as a replacement for less flexible techniques
such as convolutional NNs. | Michele Quattromini, Michele Alessandro Bucci, Stefania Cherubini, Onofrio Semeraro | 2023-03-07T11:20:39Z | http://arxiv.org/abs/2303.03806v1 | # Operator learning of RANS equations: a Graph Neural Network closure model
###### Abstract
The spread of machine learning (ML) techniques, in combination with the availability of high-quality experimental and numerical data, boosted in recent years numerous applications in fluid mechanics. Among those, examples of closure models for turbulent flows or of data assimilation based on neural networks (NN) are already numerous. However, it is well known that these techniques are prone to over-fitting and necessitate an exceedingly large amount of data, unless physical constraints are enforced. We address those limitations by applying graph neural networks (GNN). This architecture is characterized by a net of nodes that can be easily adapted to unstructured meshes. Moreover, it is known that GNNs can show remarkable generalization capabilities as compared to standard network models. Here, we demonstrate the use of GNNs by interfacing them with a finite element (FEM) solver for the supervised learning of the Reynolds-averaged Navier-Stokes equations. We consider as a test-case the flow past bluff bodies; we train the model using the wake past a cylinder, at different Reynolds numbers \(40\leq Re\leq 150\) and resolved on different grids. The GNN model successfully predicts the Reynolds stress also in unseen cases characterized by different geometric configurations and Reynolds numbers. Interestingly, a small data-set is used for achieving these performances, thus suggesting the application of GNNs as a replacement for less flexible techniques such as convolutional NNs.
## 1 Introduction
Machine learning (ML) applications are spreading across the scientific community in the most diverse fields, leading to the development of novel techniques and algorithms encompassed within the emerging field of _scientific machine learning_. This breakthrough is fostered by the massive growth of computer-aided activities, the constant increase of high-performance computational power, the recent developments in the field of deep learning and the large availability of data Goodfellow et al. (2016).
In a very schematic way, ML techniques make it possible to identify mappings between observables of a system (inputs) and quantities of interest (outputs) we aim to predict by leveraging data; when the analysed data are governed by deterministic or statistical laws, these mappings correspond, in principle, to approximating models. Due to these potentialities, the number of contributions combining ML and fluid mechanics has been constantly growing, in a large variety of applications ranging from the modelling of complex behaviour to reinforcement learning Brunton et al. (2020). These applications found a natural ground in modelling for turbulent flows. In Duraisamy et al. (2019), a broad overview can be found on the different levels of approximation, together with a critical take on the limitations of the approach; in the review by Ghattas and Willcox (2021), the focus is on inverse modelling and model reduction. Among the numerous authors that addressed this problem, Ling and Templeton (2015) applied classification methods for identifying regions of uncertainty where the closure term of the Reynolds-averaged Navier-Stokes (RANS) equations might fail; Strofer and Xiao (2021) combined data assimilation with neural network (NN) modelling of the Reynolds stress using limited observations. Other approaches leverage baseline models such as the Spalart-Allmaras closure Singh and Duraisamy (2016), physics-informed NNs (PINNs) Eivazi et al. (2022), regression methods Schmelzer et al. (2019) or decision trees Duraisamy et al. (2019).
In principle, we could identify universal closure models directly from data, if an infinite amount of them were at our disposal; in practice, we often deal with a limited amount of data or with few measurements available in the domain of interest. This impacts the use of methods such as NNs, whose expressivity and generic structure make them suitable for a large class of models, yet prone to generalisation problems. In this sense, ML models risk being representative solely of the datasets included in the training process; thus, given the available data, it becomes compelling to circumvent these problems by feeding well-selected data during the training or by providing prior knowledge through modelling Bucci et al. (2021); Shukla et al. (2022). Alongside those limitations, when unstructured meshes are considered, standard techniques such as convolutional NNs (CNN) might be limited or would require an interpolation step.
In this paper, we address some of these limitations by introducing graph neural networks (GNN) Hamilton (2020) in combination with the finite element method (FEM) for the simulation of unsteady flows. GNNs have a strong potential due to their peculiarities: these architectures are characterized by complex multi-connected nets of nodes that can be provided as input and easily adapted to unstructured meshes. Indeed, the convolution in a GNN is performed by aggregating information from neighbouring nodes, thus overcoming the limitations imposed by the geometry, in contrast with CNNs. Moreover, besides being differentiable, it is known that GNN models show remarkable generalization capabilities as compared to standard network models Sanchez-Gonzalez et al. (2018). Finally, the possibility of directly targeting the learning of the operator via the GNN's discrete stencils has been the subject of research (see Shukla et al. (2022)). Among these works, we take inspiration from the contribution by Donon et al. (2020), where the authors combined a GNN-based architecture - for incorporating permutation and translation invariance - with the statistical solver problem, and proved that this method has some universal approximation properties. Thus, GNNs can be combined with unsupervised learning to perform operator learning for the solution of partial differential equations (PDE), while retaining great flexibility with respect to the domain of computation. Following this philosophy, but using supervised learning, we consider the dynamics of the wake past a cylinder at low Reynolds numbers \(40\leq Re\leq 150\) and infer a closure model of the associated RANS equations, which generalizes to unseen Reynolds numbers and geometries. More in detail, a GNN model is trained on a dataset composed of the meanflows and the Reynolds stresses of the wake past a cylinder, computed for \(O(10)\) cases at different \(Re\) on meshes characterized by different levels of refinement. We show that such a model reproduces with satisfactory accuracy the Reynolds stress of unseen cases. Moreover, leveraging the flexibility of the GNN, the model is capable of predicting - although with more limited accuracy - the Reynolds stress on unseen geometries and flow conditions, thus demonstrating the potential of operator learning.
The remainder of the article is organised as follows: in Sec. 2 we introduce the flow cases and parameters; Sec. 3 describes the data-driven strategy; results are illustrated in Sec. 4 and discussed in Sec. 5.
## 2 Governing equations and numerical simulations
We consider incompressible two-dimensional (2D) fluid flows developing past bluff bodies. Here, we will focus on time-averaged quantities and the second-order statistics, the Reynolds stress. To this end, we introduce the Reynolds decomposition
\[\mathbf{u}(\mathbf{x},t)=\overline{\mathbf{u}}(\mathbf{x})+\mathbf{u}^{\prime} (\mathbf{x},t), \tag{1}\]
where \(\overline{\mathbf{u}}=(\overline{u},\overline{v})\) is the meanflow and \(\mathbf{u}^{\prime}\) the fluctuation field. Formally, any unsteady flow can be described through this decomposition, whether we consider a laminar or turbulent case Foures et al. (2014). Plugging Eq. (1) in the
Navier-Stokes equations and averaging we get the system of equations
\[\overline{\mathbf{u}}\cdot\nabla\overline{\mathbf{u}}+\nabla\overline{p}- \frac{1}{Re}\nabla^{2}\overline{\mathbf{u}} = \mathbf{f} \tag{2a}\] \[\nabla\cdot\overline{\mathbf{u}} = 0, \tag{2b}\]
where \(\overline{p}\) is the average pressure field. The Reynolds number is defined as \(Re=U_{\infty}D/\nu\), with the reference velocity chosen as the free stream velocity \(U_{\infty}\) at the inlet, \(D\) the reference length for the analysed flow and \(\nu\) the kinematic viscosity. Mathematically, the resulting system provides the Reynolds-averaged Navier-Stokes Equations (RANS) and the forcing \(\mathbf{f}\) is the closure term or Reynolds stress. This term can be modelled or - when data are available - directly computed as
\[\mathbf{f}=-\overline{\mathbf{u}^{\prime}\cdot\nabla\mathbf{u}^{\prime}}. \tag{3}\]
The forcing term \(\mathbf{f}\) can be treated as an unknown variable of a data assimilation scheme; if the RANS formulation is adopted as a baseline model, the identification of the forcing becomes a control parameter of the optimization process Foures et al. (2014).
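When time-resolved snapshots are available, Eq. (3) can be evaluated directly from the fluctuation fields. The sketch below illustrates this with `numpy` finite differences on a uniform grid; the present work instead computes the term on unstructured FEM meshes, so this is only a simplified stand-in.

```python
import numpy as np

def reynolds_forcing(u_snapshots, v_snapshots, dx, dy):
    """Closure term f = -mean(u' . grad(u')) of Eq. (3) on a uniform grid.

    u_snapshots, v_snapshots: arrays of shape (n_time, ny, nx).
    """
    u_mean, v_mean = u_snapshots.mean(0), v_snapshots.mean(0)
    up, vp = u_snapshots - u_mean, v_snapshots - v_mean
    # np.gradient with axis=(1, 2) returns (d/dy, d/dx) for each snapshot.
    dup_dy, dup_dx = np.gradient(up, dy, dx, axis=(1, 2))
    dvp_dy, dvp_dx = np.gradient(vp, dy, dx, axis=(1, 2))
    fx = -(up * dup_dx + vp * dup_dy).mean(0)   # x-component of the forcing
    fy = -(up * dvp_dx + vp * dvp_dy).mean(0)   # y-component of the forcing
    return fx, fy
```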
### Case Study design: bluff body flows
As a reference case of a bluff body flow, we consider the unsteady wake developing past a cylinder at Reynolds numbers \(40\leq Re\leq 150\). This flow is commonly used as a benchmark and is well documented: it exhibits stable behaviour up to a critical Reynolds number of \(Re_{c}\cong 46.7\), where a supercritical Hopf bifurcation occurs Giannetti and Luchini (2007). At higher Reynolds numbers, the baseflow becomes an unstable solution, while the unsteady flow develops into a limit-cycle behaviour known as the von Karman street. We will focus on values of \(Re\) exhibiting these dynamics. In Fig. 1, the geometry of the test case is shown. The diameter of the cylinder is kept as the characteristic size of the analysed case, \(D=1\). Based on this dimension, the domain extension is \(L_{x}=30\) in the streamwise and \(L_{y}=10\) in the transverse direction; in the basic configuration, the cylinder is placed at \((0,0)\), at a distance \(\Delta x=11\) from the inlet. The flow evolves from the inlet, where a uniform velocity \(\mathbf{u}=(1,0)^{T}\) is imposed. At the outlet, the pressure is null, while on the top and bottom boundaries we set \(\partial_{y}u=0\) and \(v=0\). No-slip conditions are considered around the obstacle. All the numerical simulations are initiated with null flow fields at \(t=0\) and performed using the FEM approach, supported by the FEniCS library Alnaes et al. (2015); more details are provided in the **supplementary appendix**. In Fig. 2\(a\), the meanflow along with the isolines of the vorticity \(\omega\) are shown for \(Re=150\). Statistics are computed on-the-fly during the simulation. The final time of the simulations is determined by a convergence criterion based on the L2-norm of the difference between subsequent meanflows, down to a threshold of \(10^{-8}\). The closure term \(\mathbf{f}\) is obtained algebraically, by plugging \(\overline{\mathbf{u}}\) and \(\bar{p}\) into the RANS equations (see the example in Fig. 2\(b\) at \(Re=150\)).
### Dataset
Data are central when considering ML-based algorithms. As mentioned in the introduction, in order to obtain a model which is capable of generalization, _i.e._ predicting data unseen during the training process, it is usually necessary to collect a large amount of data. We can limit the data-hungriness and overfitting problems that characterize other techniques, such as standard fully connected neural networks, by using the graph neural network (GNN) model detailed in §3: by exploiting the geometry and mesh independence of GNNs, we are capable of learning the underlying operator and thus of overcoming the mentioned problems. We consider as input data the meanflow \(\overline{\mathbf{u}}\) and as output the stress tensor \(\mathbf{f}\) in Eq. (2). The goal is to identify a data-driven model for Eq. (2) by means of supervised learning. We compute these quantities of interest using three differently refined meshes and running the simulations at various Reynolds numbers; in particular, the final dataset is composed of simulations ranging from \(Re=40\) to \(Re=150\) with a stride between two subsequent simulations of \(\Delta Re=10\). Simulations at \(40\leq Re\leq 70\) were performed on a coarse mesh (Fig. 3\(a\)), at \(80\leq Re<110\) on a medium mesh (Fig. 3\(b\)), and at \(120\leq Re\leq 150\) on a refined mesh (Fig. 3\(c\)). Note that we did not include \(Re=110\) in the training set. As shown in Fig. 3, these meshes have been refined in the most sensitive regions identified by the wavemaker (the near-wake of the cylinder) and around the obstacle boundaries, in order to increase the accuracy of the computations where the instability develops Giannetti and Luchini (2007). The resulting dataset is relatively small, as it consists solely of 11 pairs of input-output flow fields. We will demonstrate the extent to which the model generalizes by performing predictions at unseen Reynolds numbers and with different geometries of the obstacle.
## 3 Methodology
The application of GNN combined with unstructured meshes revolves around the idea of graph representation of the data, which performs at its best when relations between data are relevant to the problem. In this case, we exploit the
Figure 1: Sketch of the computational domain. The cylinder with unitary diameter is considered as a reference case. Boundary conditions can be found in the text.
Figure 3: Computational meshes used in the present work. Different resolutions are adopted: \((a)\) Coarse mesh, 1842 nodes; \((b)\) Medium mesh, 7234 nodes; \((c)\) Fine mesh, 28668 nodes.
Figure 2: \((a)\) Streamwise component of the meanflow and vorticity isolines for the flow past a cylinder at \(Re=150\). \((b)\) For the same case, the streamwise closure term of RANS equations Eq. (2) is shown. In both cases, only a portion of the domain is shown.
spatial distances between different points of the mesh in order to reproduce the convective/diffusive dynamics of the system and thus to learn the RANS equations operator: each node of the mesh can be coupled one-to-one to a node of the GNN, which shares with the mesh the same connection properties, encoded as its own edge features. A sketch of these connections composing a graph is shown in Fig. 4. The methodology is inherited and adapted from the work of Donon et al. (2020), where the deep statistical solver algorithm is introduced; we refer the interested reader to this work. Here, we quickly summarize some of the fundamental features that characterize a GNN; in particular, the main process on which a GNN is based is the diffusion, across the whole network, of the information extracted on each node and each connection via _message passing_ during the updates; it can be divided into three fundamental steps:
1. _Message Creation_ - on each node \(i\), an embedded state associated with the vector \(\mathbf{h}_{i}\) is created. It is initialized with a _zero state_ and it will embed information as the message passing proceeds.
2. _Message Propagation_ - in this step, information is propagated. For unravelling the dynamics of the underlying system, _i.e._ producing a model for the RANS equations, we transmit the message in both directions: from each node to its neighbours and from these latter to the node itself (Fig. 4). Mathematically this is stated as \[\boldsymbol{\phi}_{i,j}^{(k)}=\zeta_{i}^{(k)}\left(\mathbf{h}_{i}^{(k-1)}, \mathbf{a}_{ij},\mathbf{h}_{j}^{(k-1)}\right),\] (4) In what follows, the pair of subscripts indicates the connection. The vector \(\mathbf{h}_{i}^{(k-1)}\) is the embedded state from the previous layer \(k-1\), \(\mathbf{a}_{ij}\) denotes the directed connections between the node \(i\) and the nodes \(j\), while \(\zeta_{i}^{(k)}\) is a generic differentiable operator such as a multi-layer perceptron (MLP), see Goodfellow et al. (2016).
3. _Message Aggregation_ - finally, on each node, the collected information is aggregated, providing an updated embedded state \(\mathbf{h}_{i}^{(k)}\) for each of them \[\mathbf{h}_{i}^{(k)}=\mathbf{h}_{i}^{(k-1)}+\alpha\Psi^{(k)}\left(\mathbf{h}_ {i}^{(k-1)},\{\overline{\mathbf{u}},Re\},\boldsymbol{\phi}_{\rightarrow,i}^{(k )},\boldsymbol{\phi}_{\leftarrow,i}^{(k)},\boldsymbol{\phi}_{\circ,i}^{(k)} \right),\] (5) where \(\boldsymbol{\phi}_{\rightarrow,i}^{(k)}=\frac{1}{N_{d}}\sum_{j=1}^{N_{d}} \boldsymbol{\phi}_{j,i}^{(k)}\) is the message that the node \(i\) sends to its \(N_{d}\) neighbours; \(\boldsymbol{\phi}_{\leftarrow,i}^{(k)}=\frac{1}{N_{d}}\sum_{j=1}^{N_{d}} \boldsymbol{\phi}_{i,j}^{(k)}\) are the messages that the node \(i\) receives from its \(N_{d}\) neighbours; \(\boldsymbol{\phi}_{\circ,i}^{(k)}=\boldsymbol{\phi}_{i,i}^{(k)}\) is the message that the node \(i\) sends to itself to avoid loss of information as the process advances. \(\Psi^{(k)}\) is a generic differentiable operator that can also be approximated in this case by an MLP; \(\alpha\) is a relaxation coefficient, included in the model hyperparameters. Note that the use of the mean operator in \(\boldsymbol{\phi}_{\rightarrow,i}^{(k)}\) and \(\boldsymbol{\phi}_{\leftarrow,i}^{(k)}\) allows the information propagating from a number of neighbours that can differ from node to node to be collected for each embedded state. Nonetheless, different permutation-invariant functions can be introduced in this step (maximum, sum, concatenation...). In our case the inputs - indicated in Fig. 4 as \(\mathbf{G}_{i}\) - are the meanflow and the Reynolds number, \(\{\overline{\mathbf{u}},Re\}\), and are fed during the training step.
Following the strategy adopted in Donon et al. (2020), we apply different trainable parameters for each layer of the GNN. At the end of the message passing process, the information on each node has been handled as described and thus carries with it information from any other point of the graph, including their relative distances. In principle, the number of updates should cover the longest geodesic path that can be defined on the mesh, in order to allow the message on each node to reach any other node of the mesh; in practice, the number of updates is fixed and - in our application
Figure 4: Graph neural network structure: given the \(i\)-th node, \(\mathbf{h}_{i}\) is the embedded state; \(a_{ij}\) are the directed connections between the nodes; \(\mathbf{G}_{i}\) represents the inputs.
optimized using genetic algorithms Akiba et al. (2019). The latest embedded state is projected back to a physical state using a decoder, once again approximated using a differentiable function that in our case is a fully connected network. A detailed discussion of the parameters, their choice and the structure of the NNs is included in the **supplementary appendix**.
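As an illustration, the update of Eqs. (4)-(5) can be written as a PyTorch Geometric layer. The sketch below is a simplified reconstruction rather than the code of this work: the embedded dimension 18 and \(\alpha=0.01\) are taken from the optimized set reported in the appendix, while the MLP depths, the input dimensions and the zero self-loop edge feature are simplifying assumptions made here.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import MessagePassing

class DSSLayer(MessagePassing):
    """One message-passing update implementing Eqs. (4)-(5)."""

    def __init__(self, hidden=18, g_dim=3, edge_dim=2, alpha=0.01):
        super().__init__(aggr='mean')                   # mean aggregation of Eq. (5)
        self.alpha, self.edge_dim = alpha, edge_dim
        self.zeta = nn.Sequential(                      # Eq. (4): (h_i, a_ij, h_j) -> phi_ij
            nn.Linear(2 * hidden + edge_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        self.psi = nn.Sequential(                       # Eq. (5): node update
            nn.Linear(4 * hidden + g_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))

    def message(self, x_i, x_j, edge_attr):
        return self.zeta(torch.cat([x_i, edge_attr, x_j], dim=-1))

    def forward(self, h, g, edge_index, edge_attr):
        # Messages received by each node and messages it sends (reversed edges);
        # the same edge features are reused in both directions for simplicity.
        phi_recv = self.propagate(edge_index, x=h, edge_attr=edge_attr)
        phi_sent = self.propagate(edge_index.flip(0), x=h, edge_attr=edge_attr)
        a_self = h.new_zeros(h.size(0), self.edge_dim)  # zero self-loop feature
        phi_self = self.zeta(torch.cat([h, a_self, h], dim=-1))
        update = self.psi(torch.cat([h, g, phi_recv, phi_sent, phi_self], dim=-1))
        return h + self.alpha * update                  # relaxed residual update
```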
In short, a GNN layer is a parametrized function that takes a graph as input and returns the same graph with updated node and/or edge features, learnt during the training process and compared with the ground-truth RANS closure term **f** computed from the DNS simulations. The resulting error leads to the definition of a loss function based on the Euclidean norm, or mean square error, \(\epsilon\), reading as
\[\epsilon=\sum_{k=1}^{\bar{k}}\gamma^{\bar{k}-k}\left[\frac{1}{N}\sum_{i=1}^{N }\left(x_{i}-y_{i}\right)^{2}\right], \tag{6}\]
which needs to be minimized during the training. In Eq. (6), \(N\) is the number of nodes, \(\mathbf{x}\) is a vector containing the GNN's prediction for each node, \(\mathbf{y}\) is a vector of the same dimensions representing the ground truth, \(\bar{k}\) is the number of GNN layers and \(\gamma\) a process parameter. Gradients of \(\epsilon\) with respect to the weights of the GNN are computed by automatic differentiation; stochastic gradient descent is adopted for updating the weights of the net to best fit the training data (backpropagation) and minimizing \(\epsilon\).
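Assuming the decoded predictions of the \(\bar{k}\) layers are collected in a list, Eq. (6) reduces to a few lines:

```python
def dss_loss(predictions, y, gamma=0.1):
    """Discounted multi-layer loss of Eq. (6): `predictions` contains one
    decoded output per GNN layer; later layers get exponentially larger weight."""
    k_bar = len(predictions)
    return sum(gamma ** (k_bar - k) * ((x - y) ** 2).mean()
               for k, x in enumerate(predictions, start=1))
```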
## 4 Results
In this section, we briefly present the results based on the GNN model discussed in §3. The implementation is based on PyTorch Geometric, a PyTorch library for building and training GNNs, while the coupling between the network and the mesh has been implemented with a Python script written from scratch, which provides an interface with the FEM library. The training of the model has been executed on two NVIDIA Tesla V100 GPUs in parallel, with 32 GiB of RAM. Fig. 5 shows the training loss as a function of the number of epochs. Considering the same architecture, we checked the robustness of the training by running 10 models (Fig. 5\(a\)) for a limited number of epochs: for all cases, the drop in the loss is observed during the first 2000 epochs, as indicated by the mean and the variance. This behaviour indicates that all the trained models are characterized by comparable accuracy; we finally consider one of these 10 models and train it up to \(125k\) epochs (Fig. 5\(b\)). Using this model, we consider in the following different test cases for assessing the quality of the prediction.
The quantitative comparison between the GNN's prediction and the DNS ground truth for the RANS closure terms has been performed by computing the relative error
\[\Delta\mathbf{f}=\frac{\|\mathbf{f}_{dns}-\mathbf{f}_{gnn}\|}{\|\mathbf{f}_{dns}\|}, \tag{7}\]
where \(\mathbf{f}_{dns}\) is the closure term of the RANS equations from direct numerical simulations and \(\mathbf{f}_{gnn}\) is the GNN's prediction of the same term. First, we start from a proof-of-concept case, meaning that \(\mathbf{f}\) has been predicted by the GNN on a case, \(Re=130\), that is included in the training dataset (Fig. 6\(a\)-\(c\) and Fig. 8\(b\)). In this case an error \(\Delta\mathbf{f}=0.034\) is
Figure 5: Training curves of 10 different models: mean loss (blue line), together with the standard deviation (\(-\sigma\), \(+\sigma\)) (coloured zone), in \((a)\). The green line indicates the actual model used in this work. The latter is shown for the final training performed over 125k epochs in \((b)\) as a green line and compared to the validation loss (yellow line).
found, as visually shown in Fig. 6\(c\). In Fig. 8\(b\), the same case is shown as a function of \(y\) at 7 different locations along \(x\) behind the cylinder.
As validation, we first consider cases that are not included in the dataset, by first varying the Reynolds number: in Fig. 6\(d\)-\(f\) and Fig. 8\(d\) the test is performed at \(Re=200\). This case is beyond the Reynolds number range used for training, whose upper bound is \(Re=150\); the accuracy obtained in this case is \(\Delta\mathbf{f}=0.39\). In particular, we note that good agreement is achieved in the cases shown in detail in Fig. 8\(a\) and Fig. 8\(d\), where the wake is fully developed, while discrepancies can be found in the immediate vicinity of the obstacle. Then, we consider changes in the configuration. In Fig. 7, the cylinder has been shifted downwards and still the GNN shows good prediction results (\(\Delta\mathbf{f}=0.29\)), confirmed in the 1D plot in Fig. 8\(e\). Finally, we analyse a different geometry for the bluff body: we assess the prediction of the closure term on a tilted 2D square in Fig. 7. In this case a decay of the performance is observed, with an error of \(\Delta\mathbf{f}=0.79\). The GNN model fails at capturing the details of the areas characterized by strong velocity gradients, although we can observe an overall ability to capture the main patterns of the spatial distribution.
research; alternatives to alleviate limitations due to complex geometries were proposed in Sperotto et al. (2022), where radial basis functions are used for mesh-less cases, and Xu et al. (2022), leveraging vector-cloud networks. Moreover, during the conception and redaction of the present draft, two papers appeared focusing on GNN strategies; in particular, the review by Shukla et al. (2022) and the article by Chen et al. (2021), where steady flows around bluff bodies at Reynolds numbers of \(Re\approx 10\) are considered. With respect to the latter, where random bluff bodies defined by Bezier curves constitute a large data-set of 2000 examples, here we consider numerical simulations of unsteady flows and train the model solely on a data-set composed of 11 pairs of input-output snapshots in one configuration, the von Karman street developing past a cylinder in the interval \(40\leq Re\leq 150\). The model has been verified by considering cases contained in the data-set, unseen cases included in the Reynolds number range of training and, more notably, unseen cases not included in the interval of training Reynolds numbers as well as modified geometries (shifted cylinder and squared bluff bodies). Despite the small amount of data, we found that the model is capable of reconstructing the Reynolds stress with good fidelity. On one hand, these performances are possible as the model is independent from the geometry and the mesh discretization. On the other hand, this striking difference with respect to analogous approaches is due to the application of statistical learning processes inspired by the deep statistical solver algorithm discussed in Donon et al. (2020), where the ultimate goal is operator learning. In Donon et al. (2020) the potentialities of this strategy were proven mathematically for unsupervised cases. Here, we demonstrate this numerically for the supervised case, concluding that - in principle - the obtained GNN based on DSS is capable of generalization.
Following this philosophy, a number of improvements and follow-ups can be introduced. First, as the modelling is strongly dependent on the quality and amount of data, while being relatively parsimonious compared to analogous techniques, methodologies that systematically account for a selection of the data can be introduced, as well as attention mechanisms weighting the most informative regions of the flow (graph attention networks). The updates of the GNN, here performed through a chain of multi-layer perceptrons, can be replaced by a recurrent neural network Hamilton (2020). Finally, the terms approximated by these models could be integrated within turbulence modelling schemes and tested at higher Reynolds numbers. These research avenues are currently subject to scrutiny.
**Acknowledgements.** The authors acknowledge M. Nastorg (LISN) for discussions.
**Funding.** The Ph.D. fellowship of M. Quattromini is supported by the Italian Ministry of University. A part of the research was funded by the grant PRIN2017-LUBRI-SMOOTH of the Italian Ministry of Research and ANR-21-REASON from the French Agency for National Research.
Figure 8: Reynolds stress profiles along \(y\) at 7 different locations past a cylinder: comparison between ground truth (red solid line) and GNN predictions (gray dotted line). From left to right: (\(a\)) \(Re=110\), unseen case; (\(b\)) \(Re=130\); (\(c\)) \(Re=150\); (\(d\)) \(Re=180\), unseen case; (\(e\)) downshifted cylinder at \(Re=110\), unseen case.
## Appendix A Numerical simulations
Numerical simulations were performed using a code written in FEniCS Alnaes et al. (2015). Time-resolved simulations were used for building the dataset, while the inputs and the outputs of the model were obtained by averaging these data. From the numerical viewpoint, the spatial discretization is obtained by the weak formulation based on the finite element method (FEM). In particular, the finite element used is the Taylor-Hood element, with second-order elements P2 for the velocity and first-order elements P1 for the pressure. The implemented integration scheme reads as
\[\left\{\begin{array}{rl}\left(\dfrac{3\mathbf{u}^{n}-4\mathbf{u}^{n-1}+ \mathbf{u}^{n-2}}{2\Delta t}\right)+(\mathbf{u}^{n-1}\cdot\nabla)\mathbf{u}^{ n}+(\mathbf{u}^{n}\cdot\nabla)\mathbf{u}^{n-1}&\\ -(\mathbf{u}^{n-1}\cdot\nabla)\mathbf{u}^{n-1}-\dfrac{1}{Re}\Delta\mathbf{u}^ {n}+\nabla p^{n}&=\mathbf{0}\\ \nabla\cdot\mathbf{u}^{n}&=0,\end{array}\right. \tag{8}\]
where \(\mathbf{u}=(u,v)^{T}\) represents the velocity vector, \(p\) the pressure and \(Re\) the Reynolds number. The time step is denoted by \(\Delta t\); the superscript \(n\) indicates the values of a quantity at the current time, with \(n-1\) the values at the previous time step and \(n-2\) the values two time steps before. The time marching is performed by a second-order backward differentiation formula (BDF). The convective term is treated using the Newton and Picard methods, such that a linear system of equations is solved in each step of the temporal iteration.
Boundary conditions complete the numerical setting and are reported here for completeness. A uniform flow \(\mathbf{u}=(1,0)^{T}\) is imposed at the inlet. At the outlet, \(p=0\) is imposed, while on the far boundaries \(\partial_{y}u=0\) and \(v=0\). No-slip conditions are considered around the obstacles; all the numerical simulations were initiated with null flow fields at \(t=0\). More details on the numerical scheme can be found in Zienkiewicz et al. (2014), while the validation of the code is reported in Guegan (2022).
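A condensed legacy-FEniCS (dolfin) sketch of the scheme in Eq. (8) is reported below for illustration only: the mesh file, the values of \(Re\) and \(\Delta t\), the number of steps and the boundary-condition list `bcs` are placeholders, and the actual solver may differ in its details.

```python
from dolfin import *

mesh = Mesh('cylinder.xml')                       # placeholder mesh file
V = VectorElement('P', mesh.ufl_cell(), 2)        # P2 velocity
Q = FiniteElement('P', mesh.ufl_cell(), 1)        # P1 pressure (Taylor-Hood)
W = FunctionSpace(mesh, MixedElement([V, Q]))

dt = Constant(0.01)                               # illustrative time step
inv_Re = Constant(1.0 / 100.0)                    # 1/Re, illustrative value
(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)
w_n1, w_n2 = Function(W), Function(W)             # solutions at t^{n-1}, t^{n-2}
u_n1, u_n2 = split(w_n1)[0], split(w_n2)[0]

# Weak form of Eq. (8): BDF2 time derivative, linearized convection; the
# diffusion term is integrated by parts (boundary terms vanish with the BCs).
F = (dot((3*u - 4*u_n1 + u_n2) / (2*dt), v)*dx
     + dot(dot(u_n1, nabla_grad(u)), v)*dx
     + dot(dot(u, nabla_grad(u_n1)), v)*dx
     - dot(dot(u_n1, nabla_grad(u_n1)), v)*dx
     + inv_Re*inner(grad(u), grad(v))*dx
     - p*div(v)*dx + q*div(u)*dx)
a, L = lhs(F), rhs(F)

w = Function(W)
for n in range(n_steps):                          # n_steps, bcs assumed defined
    solve(a == L, w, bcs)                         # one linear solve per time step
    w_n2.assign(w_n1)
    w_n1.assign(w)
```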
## Appendix B Graph Neural Network details
This section focuses on the design of the graph neural network (GNN). The main code has been written in PyTorch Geometric, a library built upon PyTorch for developing and training GNNs. Theoretical aspects and applications to the RANS equations are described in the main article; here, we describe and comment on the parameter choices and the implementation.
### Overall architecture
The whole framework of the GNN is described below, with reference to Fig. 9. The starting point is \(\mathbf{H}^{0}\), which represents the tensor composed of all the embedded states defined on each node, initialized at the zero state. Along with the externally injected quantities (\(\mathbf{G}\), the mean flow and the Reynolds number \(Re\) in our case), it is provided to the message passing process \(\mathbf{M}_{\theta}^{1}\) (see §3 of the main paper). The latter will output an updated version of the embedded state on each node, \(\mathbf{H}^{1}\), which will pass through a decoder, \(\mathbf{D}_{\theta}^{1}\), a multi-layer perceptron (MLP) trainable function in charge of reconstructing a meaningful physical state from the embedded state (\(\mathbf{U}^{1}\)), which, in our case, is the closure term of the RANS equations. Finally, this physical state is compared with the ground truth that comes from the DNS via a loss function. The entire process is repeated in each layer until the final layer \(\tilde{k}\) is reached. The last prediction (\(\mathbf{U}^{\tilde{k}}\)) represents the output of the entire algorithm. Following the intuition in Donon et al. (2020), all the intermediate loss values are considered, in order to robustify the learning process, in a global loss function (Eq. 3.3 of the main paper)
\[\epsilon=\sum_{k=1}^{\tilde{k}}\gamma^{\tilde{k}-k}\left[\frac{1}{N}\sum_{i=1 }^{N}\left(x_{i}-y_{i}\right)^{2}\right]. \tag{9}\]
\(\epsilon\) is the loss function, \(N\) is the number of nodes, \(\mathbf{x}\) is a vector containing the GNN's prediction for each node, \(\mathbf{y}\) is a vector of the same dimensions representing the ground truth and \(\tilde{k}\) is the number of layers.
### Hyperparameters
Numerous parameters define the structure of the neural network, usually denoted with the term _hyperparameters_. These terms are defined a-priori, before the training of the model, thus they need to be manually tuned as they can't be "learnt" during the training process. In the following we list some of them, by making a distinction between _model_ hyperparameters (§B.2.1) and _process_ hyperparameters (§B.2.2).
#### b.2.1 Model hyperparameters
A model hyperparameter defines the capacity of the neural network, i.e. the ability of the model to represent functions of high complexity. Thus, the capacity is directly related to the possibility of approximating a large variety of nonlinear functions. Two parameters directly impact the capacity of the considered GNN: the embedded dimension and the number of updates.
Embedded dimension- this hyperparameter defines the dimension of the embedded vector encoding the observed quantities of the physical system. During the training process, these encoded observables are injected at each layer, whose number is defined by the number of updates \(k\). Note that the embedded state does not have a direct physical meaning.
Number of updates, k- the number of updates of each embedded state is denoted by the variable \(k\); the GNN behaviour relies on the message passing process (see §3 in the main document): embedded states assigned to each node are updated at each step of the message passing iteration in order to acquire information from each of the neighbouring nodes while the process progresses.
#### b.2.2 Process hyperparameters
The training is defined by the process hyperparameters. Tuning these hyperparameters deeply modifies the duration of the training, its computational costs and the way the weights are adjusted while the model evolves.
Update weight, \(\alpha\)- the embedded state's update weight \(\alpha\) is a relaxation parameter that allows each update of the embedded states to be scaled with respect to the previous one during the _message passing_ loop, as stated in Eq. (3.2) of the paper and reported here for clarity of exposition
\[\mathbf{h}_{i}^{(k)}=\mathbf{h}_{i}^{(k-1)}+\alpha\Psi^{(k)}\left(\mathbf{h}_{i}^{(k-1)},\{\overline{\mathbf{u}},Re\},\boldsymbol{\phi}_{\rightarrow,i}^{(k)},\boldsymbol{\phi}_{\leftarrow,i}^{(k)},\boldsymbol{\phi}_{\circ,i}^{(k)}\right). \tag{10}\]
Here, for each node, \(\mathbf{h}_{i}^{(k)}\) is the embedded state at the current layer \(k\); \(\mathbf{h}_{i}^{(k-1)}\) the one at the previous layer; \(\Psi^{(k)}\) is a generic differentiable operator such as an MLP; \(\mathbf{G}_{i}\) is the externally injected input. Moreover, we recall that \(\boldsymbol{\phi}_{\rightarrow,i}^{(k)}\) is the message that the node \(i\) sends to its neighbours, \(\boldsymbol{\phi}_{\leftarrow,i}^{(k)}\) are the messages that the node \(i\) receives from its neighbours, and \(\boldsymbol{\phi}_{\circ,i}^{(k)}\) is the message that the node \(i\) sends to itself to avoid losing information as the process advances.
Loss function weight, \(\gamma\)- this hyperparameter is used to control the importance of the contributions associated with each update of the embedded states, as reported in Eq. (3.3) of the paper and in Eq. (9) of the present appendix. The relative importance of the messages is assigned through an exponentially increasing function, meaning that the latest steps, which are supposed to be the richest in information, will have the highest importance in the process.
Optimizer- the training algorithm consists of an optimization, where the objective is to minimize the cost function in Eq. (3.3). One of the most widely used optimizers in scientific machine learning is the Adam optimizer, used in the present work and based on the stochastic gradient descent (SGD) method Kingma and Ba (2017). The length of the step taken by the SGD method in the steepest descent direction is defined by the learning rate.
Figure 9: Figure adapted from Donon et al. (2020), showing the overall framework of the GNN.
Learning rate, \(LR\)- the optimization underlying the learning process requires the definition of a learning rate; to this end, the scheduler ReduceLROnPlateau has been used to let this parameter be dynamically adjusted, in order to control the SGD method online during the training process. This alleviates the risk of a learning rate that is too large in the vicinity of local minima, or too small, leading respectively to divergence or slow convergence.
#### b.2.3 Optimizing the hyperparameters
The GNN architecture presented in the paper is defined upon a certain number of hyperparameters. This does not pose problems in deep learning when the expressivity of the NN (e.g. number of neurons, number of layers, etc.) is enough to represent the complexity of the problem at play. However, since the GNN is trained to fulfil a specific task, and because the computational cost of the GNN inference has to be affordable compared to the most sophisticated turbulence models available today, we tried to keep the GNN as parsimonious as possible. To that end, a further optimization is required on the set of hyperparameters defining the architecture. Unfortunately, standard gradient-based optimizers cannot be employed when dealing with integer quantities (i.e. number of neurons, number of layers). For this purpose, gradient-free algorithms have been used. There exist dedicated libraries that can automate the tuning process through all the possible sets of hyperparameters, by trying and appropriately pruning the unpromising ones. In this work we apply the library Optuna Akiba et al. (2019), an open-source package which combines efficient searching and pruning algorithms; a minimal sketch of such a search is given after the list below. By exploring the complex solution space, a number of optimized combinations of hyperparameters is found, and the one that outperforms the others on the monitored validation metrics is characterized by the following set
1. Embedded dimension, 18
2. Number of GNN layers, \(k=87\)
3. Update relaxation weight, \(\alpha=0.01\)
4. Loss function weight, \(\gamma=0.1\)
5. Learning rate, \(LR=3\times 10^{-3}\), as the maximal/starting value.
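The Optuna sketch below is illustrative only: the search ranges are hypothetical choices centred on the values above, and `train_gnn` and `validation_loss` are placeholder helpers, not functions of the actual code.

```python
import optuna

def objective(trial):
    # Search space mirroring the hyperparameters listed above.
    params = {
        'embedded_dim': trial.suggest_int('embedded_dim', 8, 32),
        'n_layers':     trial.suggest_int('n_layers', 20, 120),
        'alpha':        trial.suggest_float('alpha', 1e-3, 1e-1, log=True),
        'gamma':        trial.suggest_float('gamma', 0.01, 1.0, log=True),
        'lr':           trial.suggest_float('lr', 1e-4, 1e-2, log=True),
    }
    model = train_gnn(**params)          # hypothetical training routine
    return validation_loss(model)        # metric minimized by the study

study = optuna.create_study(direction='minimize',
                            pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=100)
print(study.best_params)
```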
### Boundary conditions of the GNN
A specific treatment for the nodes corresponding to the boundary conditions has been implemented. We start by recalling that during the message passing iterations, each node provides information to and retrieves information from the neighbouring nodes. This process stems from the bi-directionality of the connections \(\mathbf{a}_{ij}\) between the nodes. On the nodes corresponding to Dirichlet boundary conditions, namely the no-slip condition around the obstacle, we apply a different strategy: in order to keep the value on the boundaries fixed, the direction of message propagation is only outward. In this way the BC nodes are still able to provide information to their neighbouring nodes but, since they cannot receive anything back, they are trained while keeping values equal to the enforced boundary conditions.
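With directed edges, this outward-only propagation amounts to filtering the edge list. The helper below is a hypothetical illustration following the PyTorch Geometric convention (`edge_index[0]` = source, `edge_index[1]` = target), not the script used in this work.

```python
import torch

def make_boundary_nodes_send_only(edge_index, dirichlet_nodes, num_nodes):
    """Drop every edge whose target is a Dirichlet node, so boundary nodes
    still send messages to their neighbours but never receive any back."""
    is_bc = torch.zeros(num_nodes, dtype=torch.bool)
    is_bc[dirichlet_nodes] = True
    keep = ~is_bc[edge_index[1]]          # edge_index[1] holds target nodes
    return edge_index[:, keep]
```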
|
2304.11533 | Bi-Level Attention Graph Neural Networks | Recent graph neural networks (GNNs) with the attention mechanism have
historically been limited to small-scale homogeneous graphs (HoGs). However,
GNNs handling heterogeneous graphs (HeGs), which contain several entity and
relation types, all have shortcomings in handling attention. Most GNNs that
learn graph attention for HeGs learn either node-level or relation-level
attention, but not both, limiting their ability to predict both important
entities and relations in the HeG. Even the best existing method that learns
both levels of attention has the limitation of assuming graph relations are
independent and that its learned attention disregards this dependency
association. To effectively model both multi-relational and multi-entity
large-scale HeGs, we present Bi-Level Attention Graph Neural Networks (BA-GNN),
scalable neural networks (NNs) that use a novel bi-level graph attention
mechanism. BA-GNN models both node-node and relation-relation interactions in a
personalized way, by hierarchically attending to both types of information from
local neighborhood contexts instead of the global graph context. Rigorous
experiments on seven real-world HeGs show BA-GNN consistently outperforms all
baselines, and demonstrate quality and transferability of its learned
relation-level attention to improve performance of other GNNs. | Roshni G. Iyer, Wei Wang, Yizhou Sun | 2023-04-23T04:18:56Z | http://arxiv.org/abs/2304.11533v1 | # Bi-Level Attention Graph Neural Networks
###### Abstract
Recent graph neural networks (GNNs) with the attention mechanism have historically been limited to small-scale homogeneous graphs (HoGs). However, GNNs handling heterogeneous graphs (HeGs), which contain several entity and relation types, all have shortcomings in handling attention. Most GNNs that learn graph attention for HeGs learn either node-level or relation-level attention, but not both, limiting their ability to predict both important entities and relations in the HeG. Even the best existing method that learns both levels of attention has the limitation of assuming graph relations are independent and that its learned attention disregards this dependency association. To effectively model both multi-relational and multi-entity large-scale HeGs, we present Bi-Level Attention Graph Neural Networks (BA-GNN), scalable neural networks (NNs) that use a novel bi-level graph attention mechanism. BA-GNN models both node-node and relation-relation interactions in a personalized way, by hierarchically attending to both types of information from local neighborhood contexts instead of the global graph context. Rigorous experiments on seven real-world HeGs show BA-GNN consistently outperforms all baselines, and demonstrate quality and transferability of its learned relation-level attention to improve performance of other GNNs.
graph neural networks, representation learning
## I Introduction
Highly multi-relational data are characteristic of real-world HeGs. Relational data in HeGs are defined as triples of the form (_h_: head entity, _r_: relation, _t_: tail entity), indicating that two entities are connected by a specific relation type. Figure 1 shows a HeG formed by such triples. However, even comprehensive HeGs [1] remain incomplete. Regarding HeG completion, despite recent years' research progress in developing GNNs for representation learning in various domains [7, 8, 14] and adapting the successful attention mechanism [17, 18], most GNNs face several challenges. They are either ill-equipped to handle HeGs [9, 18], handle HeGs but do not learn graph attention [4, 6, 13, 23], or learn inaccurate graph attention [5, 12, 19, 22].
Considering the GNNs that learn graph attention, their architectures are limited to only one level of attention, either for nodes or for relations, but rarely both, as shown in Table I. This is problematic for modeling HeGs, which contain several different entity and relation types. Bi-level attention is more powerful than uni-level attention, where only one level of attention is learned by the model: it learns attention at different levels of granularity in the HeG and so captures more information about graph components than a uni-level attention mechanism can. HAN, one of the few models that attempts to use bi-level attention, unsurprisingly falls short of capturing the associations between the node and relation levels in the HeG. First, HAN places unnatural assumptions on the data because it treats graph relations as independent from each other, omitting most relation-relation interactions in HeGs. Second, it requires manually chosen meta paths that force many node-node and node-relation interactions to be left out, and requires domain-specific knowledge to compute. Third, HAN lacks a general framework for systematically studying bi-level attention.
To address the above challenges, in this paper, we present **B**i-Level **A**ttention **G**raph **N**eural **N**etworks (BA-GNN) for HeGs. To summarize, our work makes the following contributions:
1. We design a general framework for bi-level attention, and identify challenges of state-of-art NNs for HeGs.
2. We propose BA-GNN to model both multi-relational and multi-entity large-scale HeGs. BA-GNN avoids manually chosen meta paths, and learns personalized graph properties by integrating graph entity/relation types, graph structure, and graph attention using local graph neighborhoods instead of global graph context. BA-GNN improves accuracy of state-of-art GNNs and scales to million-node/edge graphs, like the AM archaeological dataset.
3. To our knowledge, we are the first to propose efficient bi-level attention GNNs that learn from dependency interactions of both nodes/relations and without meta paths.
4. We rigorously experiment on seven real-world HeGs showing BA-GNN consistently outperforms major state-of-art NN groups, and also demonstrate quality and transferability of BA-GNN's attention-induced change in graph structure to enrich other GNNs.
The remainder of this paper is organized as follows. Section II examines preliminaries and related work. Section III presents a general framework for computing bi-level attention, and describes BA-GNN's architecture. Section IV presents experiment results, ablation studies, and case studies of BA-GNN models, and Section V concludes.

Fig. 1: Partial HeG of AIFB dataset.
## II Preliminary and Related Work
Here, we introduce HeG concepts and discuss the achievement of various state-of-art NNs, summarized in Table I.
**Definition 1**: _Heterogeneous Graph: We define HeGs as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with nodes \(v_{i}\in\mathcal{V}\), and edges \(e_{i,j}\in\mathcal{E}\) connecting source \(v_{i}\) and target \(v_{j}\). Nodes in the HeG are associated with entity types through an entity mapping function \(\Lambda(v_{i}):\mathcal{V}\rightarrow\mathcal{B},\mathcal{B}=\{b|b\in\mathcal{ B}\}\), for entity type \(b\). Edges in the HeG are associated with relation types through a relation mapping function \(\Gamma(e_{i,j}):\mathcal{E}\rightarrow\mathcal{R},\mathcal{R}=\{r|r\in \mathcal{R}\}\) for relation type \(r\). For efficiency in our model, we compute entity and relation mapping functions for node and relation neighborhoods rather than globally such that \(\Lambda(v_{i}):\mathcal{V}_{i}\rightarrow\mathcal{B}_{i},\mathcal{B}_{i} \subset\mathcal{B}\) and \(\Gamma(e_{i,j}):\mathcal{E}_{i}\rightarrow\mathcal{R}_{i},\mathcal{R}_{i} \subset\mathcal{R}\). In this paper, we consider local HeG neighborhoods of a node \(v_{i}\in\mathcal{V}\) consisting of both one-hop nodes, \(\{v_{j}|v_{j}\in\mathcal{V}_{i}\}\) and one-hop relations, \(\{r|r\in\mathcal{R}_{i}\}\)._
**Definition 2**: _Meta Relation: The meta relation for \(e_{i,j}\) between source \(v_{i}\) and target \(v_{j}\) is \((\Lambda(v_{i}),\Gamma(e_{i,j}),\Lambda(v_{j}))\), and \(\Gamma(e_{i,j})^{-1}=\Gamma(e_{j,i})\) is the inverse of \(\Gamma(e_{i,j})\). In this paper, we loosely use the term relation to denote meta relation. Traditional meta paths are a sequence of such meta relations._
**Definition 3**: _Graph Attention: Graph attention enables NNs to learn useful graph representations by selectively attending to different nodes and relations. Multiplicative and additive attention are state-of-art attention mechanisms used in NNs [17, 18], both of which operate on encoder states. Multiplicative attention uses an inner product or cosine similarity of encoder states while additive attention is a linear combination or concatenation of encoder states._
### _GNNs for Homogeneous Graphs_
Successful models in this category, like GAT [18], use attention-based neural architectures for learning representations.
**Graph Attention Networks**GAT [18] are additive attention-based GNNs that effectively leverage graph structure and sparsity to compute a node's attention. GAT models, however, are limited to HoGs and cannot handle HeGs which contain different relations that may have varying levels of importance for different nodes.
### _GNNs for Heterogeneous Graphs_
Successful models (1) leverage different graph relations, like R-GCN, (2) learn bi-level attention, like HAN, and (3) learn multiplicative attention, like transformer-based NNs.
**Relational Graph Convolutional Networks**R-GCNs [13] extend GCNs and GAT, which operate on local graph neighborhoods of HoGs, to operate on multi-relational graphs by distinguishing nodes by relation type. R-GCNs, however, treat all relation-specific nodes as equally important. Further, R-GCNs do not utilize graph attention as they are limited to directly learning from weight parameters.
**Heterogeneous Graph Attention Networks**To address limitations of the above models, HAN integrates bi-level attention, which learns node- and relation-level attention, with GNNs to learn node embeddings. However, HAN uses a global learnable weight vector lacking local inter-relation comparison. Besides, HAN uses pre-defined metapaths which are computationally expensive to design and compute, and result in sub-optimal graph components learned by the model.
**Transformer** Transformer models [17], although successful in natural language processing for short text sequences, have limitations for multi-relational and multi-entity HeGs. This is because the transformer attends to all other tokens in the sequence, making it infeasible for large-scale input. While recent works extend transformer-like attention to other graph domains, they have limitations, shown in Table I.
## III BA-GNN Architecture
We design a general bi-level attention framework for computing hierarchical attention and then discuss BA-GNN's architecture. Source code and data are at: [https://github.com/roshnigyier/BA-GNN](https://github.com/roshnigyier/BA-GNN). The README.md details dataset properties, data splits, and hyperparameters of BA-GNN models.
### _General Bi-Level Attention Framework_
Bi-level attention in HeGs incorporates interactions between relation-specific nodes for learning lower level attention which informs the higher level attention that captures inter-relation interactions. In this way, bi-level attention jointly attends to node-node, relation-relation and node-relation interactions to collectively produce a representative node embedding. Uni-level attention models omit these critical graph interactions and ability for the two-levels of attention to jointly inform each other. Eq. 1 describes the general bi-level attention framework to compute embeddings for each \(v_{i}\) in the HeG:
\[\widetilde{\mathbf{h}}_{i}^{(l+1)} =\mathbf{HigherAtt}\big{(}\mathbf{LowerAtt}(\cdot),\mathcal{R}, \widetilde{\mathbf{h}}^{(l)}\big{)} \tag{1}\] \[=\mathrm{AGG}\Bigg{(}\bigg{\{}\mathbf{f}_{\psi}\Big{(}\big{\{} \mathbf{LowerAtt}(\cdot)\big{|}r\in\mathcal{R},\widetilde{\mathbf{h}}^{(l)} \big{\}}\Big{)}\bigg{|}r\in\mathcal{R}\bigg{\}}\Bigg{)},\]
Table I: Comparison of state-of-art NNs across capability criteria [A]-[G]. Models are grouped into (1) non-GNN-based KGE models for HeGs (TransE [2], HolE [10], DistMult [21], ComplEx [16]), (2) GNNs for HoGs (GCN [9], GAT [18]), (3) non-transformer-based GNNs for HeGs (metapath2vec [4], HERec [14], HIN2vec [6], HeGAN [7], TemporalGAT [5], HetGNN [23], R-GCN [13], HAN [19]), and (3) transformer-based GNNs for HeGs (DySAT [12], TGAT [20], HGT [8], GTN [22]), alongside the Transformer [17]; only BA-GNN (ours) satisfies all criteria.
\[\mathbf{LowerAtt}(\cdot) =\mathrm{AGG}\big{(}\big{\{}\mathbf{g}_{\gamma}(e_{i,j}|r,\widetilde{ \mathbf{h}}^{(l)})\big{|}v_{j}\in N_{i}^{r}\big{\}}\big{)}, \tag{2}\] \[\mathcal{R} =\{\Gamma(e_{i,j})|v_{j}\in\mathcal{V}\}, \tag{3}\]
where \(\mathbf{h}_{i}\) and \(\widetilde{\mathbf{h}}_{i}\) are the initial and projected node features respectively, \(\mathcal{R}_{i}\) is the relation set on the edges of \(v_{i}\), and \(\mathbf{g}_{\gamma}(\cdot)\) is a vector-output function of the node-level attention, \(\gamma\), that provides a relation-specific embedding summary using learned node representations from the previous layer, \(\widetilde{\mathbf{h}}^{(l)}\), which is aggregated, \(\mathrm{AGG}(\cdot)\), over edges \(e_{i,j}\) in the relation-specific neighborhood context of \(v_{j}\in N_{i}^{r}\). \(\mathbf{f}_{\psi}(\cdot)\) is a vector-output function of the relation-level attention, \(\psi\), acting on the attended relation-specific local context embeddings, \(\mathbf{LowerAtt}(\cdot)\), which are aggregated over relations in the neighborhood context to form the layer's final node embedding, \(\widetilde{\mathbf{h}}_{i}^{(l+1)}\).
In Sections III-B and III-C, we propose a novel semi-supervised attention-based GCN model, BA-GNN, for multi-relational and multi-entity HeGs. BA-GNN performs attention aggregation at the node and _relation_ levels, rather than at the node and edge levels. For node-level attention, attention is placed on the _edges_ of neighbor nodes. For relation-level attention, attention is placed on the relations, which are formed by _grouping_ edges by relation type. In this way, our model uses a hierarchical attention mechanism. The higher-order graph considers relation-type-specific edge groups, and the lower-order graph considers nodes in their local contexts. Figure 2 summarizes BA-GNN's attention mechanism. BA-GNN models use \(L\) stacked layers, each of which is defined through Eq. 1, and for further efficiency, all nodes and relations are restricted to their neighborhoods. Model input can be chosen as pre-defined features or as a unique one-hot vector for each node.
### _Node-level Attention_
Node-level attention distinguishes different roles of nodes in the neighborhood context for learning relation-specific node embeddings. As node-level attentions are target-node-specific, they are different for different target nodes. Our model learns node embeddings such that it intrinsically captures graph attributes and structure in its neighborhood. In HeGs, neighbor nodes may belong to different feature spaces due to different node types, so we utilize an entity-specific type transformation matrix, \(\boldsymbol{\mathcal{T}}_{\Lambda(v_{i})}\), to project all node features to the same space through \(\widetilde{\mathbf{h}}_{i}=\boldsymbol{\mathcal{T}}_{\Lambda(v_{i})}\cdot \mathbf{h}_{i}\). \(\boldsymbol{\mathcal{T}}_{\Lambda(v_{i})}\in\mathbb{R}^{d\times|\mathcal{R}_{ i}|},\mathbf{h}_{i}\in\mathbb{R}^{|\mathcal{R}_{i}|}\) if the initial features are chosen to be a one-hot vector of dimension \(d\), where \(\widetilde{\mathbf{h}}_{i}\) is continuously updated to learn the final embedding.
BA-GNN's node-level attention uses additive attention inspired by GAT, discussed in Section II, but overcomes GAT's limitation by extending the attention to HeGs. GAT performs projections that do not consider the different relation types in the HeG. We address this by using a learnable relation-specific attention vector, \(\mathbf{a}_{r}^{(l)}\in\mathbb{R}^{2d}\). For a specific relation \(r\), the attention is shared for all node pairs, so that each node is influenced by its neighborhood context. The attention is also asymmetric since the importance of \(v_{j}\) to \(v_{i}\) may be different from the importance of \(v_{i}\) to \(v_{j}\). We compute relation-specific node-level attention at layer \(l\) as follows, with a \(\mathrm{softmax}(\cdot)\) activation applied to normalize each node-pair attention weight, where \(v_{j},v_{k}\in N_{i}^{r}\) and \(\mathbf{x}^{T(l)}\) is the transpose of \(\mathbf{x}\) at layer \(l\):

\[\gamma_{i,j}^{(l),r}\ =\ \frac{\exp\!\left(\mathrm{LeakyReLU}\big{(}\mathbf{a}_{r}^{T(l)}\big{[}\widetilde{\mathbf{h}}_{i}^{(l)}\big{|}\big{|}\widetilde{\mathbf{h}}_{j}^{(l)}\big{]}\big{)}\right)}{\sum_{v_{k}\in N_{i}^{r}}\exp\!\left(\mathrm{LeakyReLU}\big{(}\mathbf{a}_{r}^{T(l)}\big{[}\widetilde{\mathbf{h}}_{i}^{(l)}\big{|}\big{|}\widetilde{\mathbf{h}}_{k}^{(l)}\big{]}\big{)}\right)},\]
where \(\mathbf{a}_{r}^{(l)}\) attends over the concatenated, \(||\), node features of \(v_{i}\) and \(v_{j}\) with applied \(\mathrm{LeakyReLU}(\cdot)\) and \(\mathrm{softmax}(\cdot)\) activations. By restricting the attention to the relation-specific local context of nodes, sparse structural information is injected into the model through adjacency-masked attention layers.
Node \(v_{i}\)'s relation-specific embedding, \(\mathbf{z}_{i}^{(l),r}\), can then be learned with \(\mathrm{AGG}(\cdot)\) from Eq. 1 being a weighted summation of the neighbor's projected features as follows:
\[\mathbf{z}_{i}^{(l),r} =\mathbf{LowerAtt}(\cdot) \tag{4}\] \[=\mathrm{AGG}\big{(}\big{\{}\mathbf{g}_{\gamma}(e_{i,j}|r, \widetilde{\mathbf{h}}^{(l)})\big{|}v_{j}\in N_{i}^{r}\big{\}}\big{)}\] \[=\sum_{v_{j}\in N_{i}^{r}}\Big{[}\mathbf{g}_{\gamma}(e_{i,j}|r, \widetilde{\mathbf{h}}^{(l)})\Big{]}\] \[=\sum_{v_{j}\in N_{i}^{r}}\Big{[}\gamma_{i,j}^{(l),r}\widetilde{ \mathbf{h}}_{j}^{(l)}\Big{]},\]
where \(\mathbf{z}_{i}^{(l),r}\) provides a summary of relation \(r\) for \(v_{i}\) at layer \(l\). We also add skip-connections for corresponding \(\mathbf{LowerAtt}(\cdot)\) from the previous layer, \(l-1\), to preserve learned node-level representations as the depth of the NN is extended.
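As an illustration, the following is a minimal PyTorch sketch of this relation-specific additive attention for a single relation \(r\) and target node \(v_{i}\); the tensor shapes and the dense neighborhood indexing are simplifying assumptions made for readability.

```python
import torch
import torch.nn.functional as F

def node_level_attention(h_i, h_neigh, a_r):
    """GAT-style additive attention restricted to one relation type r.

    h_i:     (d,)   projected features of the target node v_i
    h_neigh: (n, d) projected features of the neighbors v_j in N_i^r
    a_r:     (2d,)  learnable relation-specific attention vector
    Returns the relation-specific summary z_i^r of Eq. (4).
    """
    n = h_neigh.size(0)
    pairs = torch.cat([h_i.expand(n, -1), h_neigh], dim=1)  # [h_i || h_j]
    scores = F.leaky_relu(pairs @ a_r)                      # a_r^T [h_i || h_j]
    gamma = torch.softmax(scores, dim=0)                    # normalize over N_i^r
    return (gamma.unsqueeze(1) * h_neigh).sum(dim=0)        # weighted neighbor sum

# Toy usage: three neighbors with 4-dimensional features.
d = 4
z_i_r = node_level_attention(torch.randn(d), torch.randn(3, d),
                             torch.randn(2 * d))
```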
### _Relation-level Attention_
Relation-level attention distinguishes roles of different relations in the neighborhood context for learning more comprehensive node embeddings. In HeGs, different relations may play different roles of importance for \(v_{i}\), in addition to \(v_{i}\)'s relation-specific neighbor nodes. So, we learn relation-level attention to better fuse \(v_{i}\)'s relation-specific node embeddings. One could design a simple node-relation attention mechanism to encode the effect of relations between nodes, but this would fail to capture relation-relation dependencies hidden in HeGs.
We address the transformer's inefficiency for large-scale HeGs through an approximation technique, by sampling relation-specific node embeddings from the local graph context. Further, instead of using the same single set of projections for all inputs (as the transformer does for all words), we enable each relation-specific embedding to learn a distinct set of personalized projection weights, while maximizing parameter sharing. This technique captures unique relation-dependent characteristics such that each relation-specific node embedding is also influenced by its local relation context.
Fig. 2: Bi-level attention visualization. (a) Node-level aggregation: A node's features are a weighted combination of its prior layer's relation-specific embeddings, \(\mathbf{z}_{i}^{r}\). (b) Relation-level aggregation: Relation-level attention is learned via multiplicative attention, using neighborhood similarity to determine relative relation importance.
Node \(v_{i}\)'s relation-specific transformer-based query \(\mathbf{q}_{r,i}^{(l)}\), key \(\mathbf{k}_{r,i}^{(l)}\), and value \(\mathbf{v}_{r,i}^{(l)}\) vectors are computed as follows: \(\mathbf{q}_{r,i}^{(l)};\mathbf{k}_{r,i}^{(l)};\mathbf{v}_{r,i}^{(l)}=\mathbf{W}_{1,r}\mathbf{z}_{i}^{(l),r};\mathbf{W}_{2,r}\mathbf{z}_{i}^{(l),r};\mathbf{W}_{3,r}\mathbf{z}_{i}^{(l),r}\), such that \(\mathbf{z}_{i}^{(l),r}\) is projected onto the learnable weight matrices \(\mathbf{W}_{1,r},\mathbf{W}_{2,r},\mathbf{W}_{3,r}\in\mathbb{R}^{d\times d}\). The relation-level attention for relations \((r,r^{\prime})\) is computed by iterating over all possible relation pairs in the neighborhood context, \(r,r^{\prime}\in\mathcal{R}_{i}\), where \(\mathcal{R}_{i}=\{\Gamma(e_{i,j})|v_{j}\in\mathcal{V}_{i}\}\). The importance of relation \(r^{\prime}\) to node \(v_{i}\) is computed with relation similarity captured through \(\psi_{i}^{(l),r,r^{\prime}}=\mathrm{softmax}(\mathbf{q}_{r,i}^{T(l)}\mathbf{k}_{r^{\prime},i}^{(l)})\), where the more similar \(r^{\prime}\) is to \(r\), the greater the attention weight of \(r^{\prime}\), which results in a larger contribution of \(r^{\prime}\)'s embedding to \(v_{i}\)'s final embedding. A \(\mathrm{softmax}(\cdot)\) activation is then applied to normalize each relation pair's attention weight.
A node's relation-specific embedding is then informed by a weighted summation of its similarity to other local context relations, \(\psi_{i}^{(l),r,r^{\prime}}\). To reduce information loss, we add a self-connection of a special relation type per node, which is projected onto \(\mathbf{W}_{i}\) and added to the attended relation-specific embeddings, \(\psi_{i}^{(l),r,r^{\prime}}\mathbf{v}_{r^{\prime},i}^{(l)}\). Lastly, a \(\mathrm{ReLU}(\cdot)\) activation is applied to get the overall relation-specific embedding, \(\mathbf{\delta}_{i}^{(l),r}=\mathrm{ReLU}(\sum_{r^{\prime}\in\mathcal{R}_{i}}\psi_{i}^{(l),r,r^{\prime}}\mathbf{v}_{r^{\prime},i}^{(l)}+\mathbf{W}_{i}\widetilde{\mathbf{h}}_{i}^{(l)})\).
Node \(v_{i}\)'s final embedding is learned with \(\mathrm{AGG}(\cdot)\) in Eq. 1 being a summation of all attended relation-specific embeddings through iteration of neighborhood relations, \(r\in R_{i}\). We also apply multi-head attention of \(S=\{1,...,K\}\) heads to allow BA-GNN to jointly attend to different representation subspaces of nodes/relations, with aggregation, \(\mathrm{AGG}(\cdot)\), via averaging:
\[\widetilde{\mathbf{h}}_{i}^{(l+1)} =\mathbf{HigherAtt}(\mathbf{LowerAtt}(\cdot),\mathcal{R}, \widetilde{\mathbf{h}}^{(l)}) \tag{5}\] \[=\frac{1}{K}\sum_{k=1}^{K}\sum_{r\in\mathcal{R}_{i}}\Big{[} \mathbf{f}_{\psi}\Big{(}\big{\{}\mathbf{LowerAtt}(\cdot)|r\in\mathcal{R },\widetilde{\mathbf{h}}^{(l)}\big{\}}\Big{)}\Big{]}\] \[=\mathrm{AGG}\Big{(}\big{\{}\sum_{r\in\mathcal{R}_{i}}\big{[} \mathbf{\delta}_{i}^{(l),r}\big{]}\big{|}k\in S\big{\}}\Big{)}.\]
We also add skip-connections for corresponding \(\mathbf{HigherAtt}(\cdot)\) from the previous layer, \(l-1\), to preserve learned higher-order relation-level representations as depth of the NN is extended.
The final representation of a node at layer \((l+1)\) is:
\[\widetilde{\mathbf{h}}_{i}^{(l+1)}=\mathrm{AGG}\bigg{(}\Big{\{}\sum_{r\in\mathcal{R}_{i}}\mathrm{ReLU}\Big{(}\sum_{r^{\prime}\in\mathcal{R}_{i}}\mathrm{softmax}(\mathbf{q}_{r,i}^{T(l)}\mathbf{k}_{r^{\prime},i}^{(l)})\mathbf{v}_{r^{\prime},i}^{(l)}+\mathbf{W}_{i}\widetilde{\mathbf{h}}_{i}^{(l)}\Big{)}\Big{|}k\in S\Big{\}}\bigg{)}. \tag{6}\]
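To complement the equations above, here is a minimal single-head sketch of the relation-level multiplicative attention for one node; the dictionary-based bookkeeping of the per-relation projection matrices is an assumption for readability, not the paper's implementation.

```python
import torch

def relation_level_attention(z, W1, W2, W3, W_self, h_i):
    """Fuse relation-specific embeddings z[r] of one node (single head).

    z:          dict mapping relation r -> (d,) embedding z_i^r
    W1, W2, W3: dicts mapping r -> (d, d) query/key/value projections
    W_self:     (d, d) projection for the self-connection
    h_i:        (d,) current node features
    """
    rels = list(z.keys())
    q = {r: W1[r] @ z[r] for r in rels}
    k = {r: W2[r] @ z[r] for r in rels}
    v = {r: W3[r] @ z[r] for r in rels}

    out = torch.zeros_like(h_i)
    for r in rels:
        scores = torch.stack([q[r] @ k[rp] for rp in rels])  # q_r^T k_r'
        psi = torch.softmax(scores, dim=0)                   # relation-pair weights
        delta = torch.relu(sum(psi[j] * v[rp] for j, rp in enumerate(rels))
                           + W_self @ h_i)                   # delta_i^r
        out = out + delta                                    # sum over relations
    return out

# Toy usage with two relations and d = 4.
d, rels = 4, ["author", "member"]
z = {r: torch.randn(d) for r in rels}
mats = lambda: {r: torch.randn(d, d) for r in rels}
h_new = relation_level_attention(z, mats(), mats(), mats(),
                                 torch.randn(d, d), torch.randn(d))
```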
### _Analysis of Proposed Attention_
We use multiplicative attention at the relation level, instead of additive attention, because learning attention through a concatenation of features does not compute the feature similarity that inner-product operations capture. Since relation features are characterized by single-attribute relation types, a relation's scaling can be directly determined by its feature similarity to other relations in its neighborhood. This is unlike node-level attention, where node features may have several attributes, making it more difficult to learn the latent similarity of nodes through direct feature comparison such as an inner product. Rigorous evaluation of mixed combinations of additive/multiplicative attention, shown in the experiments, further supports our bi-level attention combination choice.
## IV Experiments
In this section, we evaluate BA-GNN on seven large-scale heterogeneous datasets (HDs). We conduct experiments on node classification and link prediction using the PyTorch Geometric and Deep Graph Library frameworks on an Nvidia Tesla V100 GPU cluster, and report model test accuracies.
_Datasets_: We evaluate on benchmark Resource Description Framework (RDF) format datasets [11] for node classification: AIFB, MUTAG, BGS, and AM. For link prediction, we evaluate on FB15k [15], WN18 [3], and FB15k-237 [15].
### _Node Classification_
Node classification is the semi-supervised classification of nodes to entity types. For evaluation consistency against primary baseline models, we implement BA-GNN with \(L=2\), where the output of the final layer uses a \(\mathrm{softmax}(\cdot)\) activation per node. Our model follows the same node classification evaluation procedure as [13], using cross-entropy loss with parameters learned with the Adam optimizer.
_Baselines_: Table I summarizes our baselines. To adapt the models to our problem setting of multi-relational, static HeGs, we made the following modifications. For GAT [18] and GCN [9], we omit HeG relations. For TemporalGAT [5], we omit the temporal convolutional network used for temporal interactions. For HetGNN [23], we consider the neighbors to be the entire set of neighbor nodes and relations. For DySAT [12], we omit temporal attention. For TGAT [20], we omit functional time encoding. For HGT [8], we omit relative temporal encoding.
_Results_: Experiment results are in Table II. Results show BA-GNN significantly and consistently outperforms all baselines for all tasks on all datasets. For example, on AIFB, MUTAG, BGS, and AM, against the most competitive NNs per category, BA-GNN achieves relative performance gains of up to 22%, 25%, 20%, and 24% respectively, and overall performance gains of up to 32%, 33%, 36%, and 35% respectively. Further, the Welch t-test of unequal variance shows that BA-GNN's performance relative to each model per dataset is statistically significantly greater, with \(\mathrm{p}\)-\(\mathrm{value}<0.001\).
This indicates that BA-GNN's relation-level attention is more effective across the different data domains and that its personalized attention to local graph contexts yields performance gains.
### _Link Prediction_
Link prediction involves assigning confidence scores to HeG triples to determine how likely predicted edges belong to true relations. Our models follow the same evaluation framework as [2] and [13], using negative sampling and cross-entropy loss with parameters learned with the Adam optimizer. We use the evaluation metrics of mean reciprocal rank (MRR) and Hits @ n, in raw and filtered settings. The same number of negative samples, \(w=1\), is used to make datasets comparable.
_Baselines_: We evaluate standalone GNNs (BA-GNN, R-GCN), KGE models, and GNN+KGE autoencoder models using the same setup procedure as [2] and [13]. The autoencoder models include: BA-GNN\({}_{x}\) and R-GCN\({}_{y}\), where \(x\), \(y\) are TransE (\(T\)), HolE (\(H\)), DistMult (\(D\)), and ComplEx (\(C\)).
_Results and Ablation Studies_: Experiment results are in Table III. Results show that the best BA-GNN models outperform R-GCN models on all datasets for all metrics of both MRR and Hits @ n = 1, 3, 10. We observe that BA-GNN outperforms R-GCN when comparing standalone models. Further, results show that autoencoder models outperform the GNN and KGE standalone models, showing that GNNs and KGE models can each benefit from their joint learning. Results also show that BA-GNN autoencoders outperform R-GCN autoencoders on all datasets for all tasks and metrics.

Table II: Node classification test accuracies (%) on AIFB, MUTAG, BGS, and AM for all baseline categories and BA-GNN variants, with BA-GNN's relative improvements over each group. BA-GNN (ours) achieves 98.94 ± 0.13, 87.81 ± 0.11, 94.75 ± 0.08, and 96.68 ± 0.14 on the four datasets, respectively.
### _Case Study_
We conduct experiments to determine the quality of BA-GNN's relation-level attention and learned graph structure. We modify the AM dataset to contain the following relation types, each with cumulative 10% splits: (1) relations randomly selected, (2) relations with the highest relation-level attention weights from BA-GNN, and (3) relations with the lowest relation-level attention weights from BA-GNN. Experiment figures on node classification for HAN and BA-GNN models are in Figure 3(a).
(2)'s graph structure yields the highest test accuracy on all splits of AM compared to (1) or (3), while (3) yields the lowest test accuracy. (1) falls, as expected, between the test accuracies of (2) and (3). Models that do not learn relation-level attention (BA-GNN-node) still benefit from the graph structure identified by (2). This suggests that BA-GNN's relation-level attention can selectively identify important graph components and that its learned graph structure can enhance other leading GNNs.
### _Attention Visualization_
We randomly sample two nodes belonging to the entity type _Person_ on AIFB and plot their learned relation-level attention weights from layer \(l=L\) using heat maps, seen in Figures 3(b) and (c). The corresponding partial graphs of _person 1_ and _person 2_ are in Figure 1. In Figure 3(b), _author\({}^{-1}\)_ and _member\({}^{-1}\)_ have high attention to _is_worked_on_by_ because a person is likely to have publications and research affiliations in their research area. _name_of_ has high attention to _homepage_of_, observed in both Figures 3(b) and (c), because a homepage may directly contain personal identifying information. In Figure 3(c), _head\({}^{-1}\)_ and _member\({}^{-1}\)_ have high attention to each other, since the head of a research group is also a member. Further, members of the research group are likely to work on the group's projects and focus on a particular research domain, explaining why _works_at_project\({}^{-1}\)_ has higher attention to _head\({}^{-1}\)_ and _member\({}^{-1}\)_, and why _works_at_project\({}^{-1}\)_ and the member relations also have higher attention to _is_worked_on_by_.
## V Conclusion
We propose Bi-Level Attention Graph Neural Networks (BA-GNN) for modeling multi-entity and multi-relational large-scale heterogeneous graphs (HeGs), via entity type and meta relation information to learn graph structure and properties. Further, BA-GNN distinguishes nodes and relations using a novel smart-sampling bi-level attention mechanism to guide the model when aggregating features in graph neighborhoods. We conduct extensive experiments on seven real-world heterogeneous datasets, and show BA-GNN learns effective and efficient embeddings. We observe that BA-GNN outperforms all state-of-art GNN baselines on various information recovery tasks.
## VI Acknowledgements
This work was supported by NSF 1705169, 1829071, 1937599, 2031187, 2106859; NIH R35-HL135772, NIBIB R01-EB027650; DARPA HR00112090027; Okawa Foundation Grant; and Amazon Research Awards. We also thank Kai-Wei Chang and Yunsheng Bai for helpful discussions.
|
2308.00927 | Physics-informed neural networks for blood flow inverse problems | Physics-informed neural networks (PINNs) have emerged as a powerful tool for
solving inverse problems, especially in cases where no complete information
about the system is known and scatter measurements are available. This is
especially useful in hemodynamics since the boundary information is often
difficult to model, and high-quality blood flow measurements are generally hard
to obtain. In this work, we use the PINNs methodology for estimating
reduced-order model parameters and the full velocity field from scatter 2D
noisy measurements in the ascending aorta. The results show stable and accurate
parameter estimations when using the method with simulated data, while the
velocity reconstruction shows dependence on the measurement quality and the
flow pattern complexity. The method allows for solving clinical-relevant
inverse problems in hemodynamics and complex coupled physical systems. | Jeremias Garay, Jocelyn Dunstan, Sergio Uribe, Francisco Sahli Costabal | 2023-08-02T04:04:49Z | http://arxiv.org/abs/2308.00927v1 | # Physics-informed neural networks for blood flow inverse problems
###### Abstract
Physics-informed neural networks (PINNs) have emerged as a powerful tool for solving inverse problems, especially in cases where no complete information about the system is known and scatter measurements are available. This is especially useful in hemodynamics since the boundary information is often difficult to model, and high-quality blood flow measurements are generally hard to obtain. In this work, we use the PINNs methodology for estimating reduced-order model parameters and the full velocity field from scatter 2D noisy measurements in the ascending aorta. The results show stable and accurate parameter estimations when using the method with simulated data, while the velocity reconstruction shows dependence on the measurement quality and the flow pattern complexity. The method allows for solving clinical-relevant inverse problems in hemodynamics and complex coupled physical systems.
keywords: Physics-informed neural networks, hemodynamics, reduced-order modeling, blood flow, patient-specific model +
Footnote †: journal: Computer Methods in Applied Mechanics and Engineering
## 1 Introduction
Computational hemodynamics has been established as a relatively new research area in which blood flow is studied in many scenarios. Applications include the blood ejected from the heart through the aorta, blood flow through small vessels such as the capillaries in the brain, and the flow through the coronary arteries. Moreover, hemodynamic simulations have proven helpful for planning and assessing different cardiovascular pathologies, for instance aortic dissection [1], stenosis [2], and aneurysms [3], which are known to be highly
affected by blood flow patterns and can generally be life-threatening and progressive. In a clinical context, the main reason for engaging in simulations in a complex cardiovascular scenario is to gain prediction capabilities, a feature that cannot be attained only based on images.
However, one of the main limitations of hemodynamical simulations and their clinical use is the strong sensitivity of the results to the modeling assumptions and parameters. Different imaging techniques have helped to fill this gap, helping to obtain patient-specific geometries and boundary condition information, which is essential for most physical models. Magnetic Resonance Imaging (MRI) [4] and Computational Tomography (CT) [5] are widely used and can be obtained relatively easily for small to large patient cohorts and healthy volunteers. Moreover, tissue and blood flow motion is usually measured by the so-called Phase-Contrast MRI (PC-MRI) technique [6], capable of encoding the tissue velocity into the phase of the emitted signal. From this method, we can obtain time-resolved vascular measurements, usually consisting of 20 to 30 snapshots within the cardiac cycle. Although the temporal and spatial resolution on different applications have become progressively better, some physical properties cannot be measured using purely non-invasive techniques, such as the stress at the boundary of the vessels, namely the wall shear stress, pressure drops across a stenotic region of the vessel, and other mechanical properties of the arteries.
Another limitation of hemodynamic simulations is the large computational cost and time required to make the models realistic. For this reason, several strategies for reducing model complexity have been proposed, for instance the so-called _reduced-order_ modeling approaches, which simplify some physical phenomena to keep the simulation realistic while maintaining tractable computational time and resources [7; 8]. In hemodynamics, the size of the application can vary, and the reduction strategy changes accordingly. Examples of different applications include: the coronary arteries [9; 10; 11], the aorta [12; 13], the pulmonary artery [14], and heart-valve problems [15; 16].
A popular choice for medium-to-large size arteries is using the three-element Windkessel model [17]. This model considers the vessels' compressibility and ability to store elastic energy during systole to release it in diastole. In general terms, a Windkessel model introduces a 0D differential equation at the vessel outlet, modeling the pressure and flow with an equivalent electric circuit, where the blood flow is represented by the current and the pressure as the voltage [18]. The model also introduces a resistance parameter, where higher resistance values can explain difficulties in the blood flow through that specific portion of the artery. Additionally, the model introduces a compliance \(C\), which considers the vessel's ability to store elastic energy.
The clinical value of estimating the Windkessel parameters on a specific data set relies on the physical information one can obtain from them, such as localized flow resistance effects or to know if a specific vessel presents a reduced elastic response. This data-driven information could be seen as a new cardiovascular biomarker for later clinical patient assessment due to its specificity and non-invasive nature. In this line, different strategies have been used to infer Windkessel parameters from partial measurements to make blood flow simulations patient-specific. For example, Arthurs et al. [19] applied the Reduced-Order Unscented Kalman Filter to the 3D time-dependent Navier-Stokes model with simplified fluid-solid
interaction effects on the wall to estimate the Windkessel parameters in the boundary conditions. Using variational data assimilation, Fevola et al. [20] estimated a single-resistance model (i.e., \(C=0\)) on a stationary Stokes problem. Bertoglio et al. [21] used a Kalman filter approach to estimate elastic and Windkessel parameters of the vessels from wall displacement measurements. Finally, Garay et al. [22] used a novel data assimilation term in their sequential inverse problem workflow to estimate three-element Windkessel parameters from highly aliased and noisy PC-MRI blood flow measurements.
However, all the works mentioned above strongly depend on a complete description of the physical model, sometimes leading to oversimplified solutions, especially in cases where the physics is complex and poorly understood. For that reason, in this work we propose to use physics-informed neural networks (PINNs) [23; 24], which have shown great potential in solving inverse problems where the physical information is incomplete, since the networks can learn or discover the missing parts from the data itself. We will use a feed-forward network architecture to represent the flow state of the coupled Navier-Stokes equations with the Windkessel model. Note that partial knowledge of the boundary conditions is very common in hemodynamics and usually leads to ill-posed problems that classical approaches fail to address [7]. Hence, PINNs allow us to estimate the time-dependent 3D velocity and pressure fields from scattered simulated medical images by solving the forward problem, and the closest set of Windkessel parameters by solving the inverse problem, simultaneously.
PINNs have emerged as a novel methodology for solving complex forward problems and ill-posed inverse problems in many research areas, such as fluid mechanics [25; 26; 27], electrophysiology [28; 29], geosciences and wave-propagation problems [30; 31], non-linear solid mechanics [32; 33; 34], and also cardiovascular biomechanics [35; 36; 37; 38]. See Ref. [39] for an extensive review of current applications. Although PINNs have been applied to many problems, and the Navier-Stokes equations have been a focus of PINNs since their inception [23], to the best of the authors' knowledge, this is the first time that PINNs have been applied to a coupled 3D fluid mechanics problem to estimate hemodynamic parameters from medical images.
The rest of this article is organized as follows. In Section 2, we present the mathematical model, the neural architecture to be used, and the reference velocity and simulated measurements used for the inverse problem. In Section 3, we present the results obtained for both studied regimes: the steady and transient flow cases. A discussion of the results and future lines of work are given in Section 4. Finally, the conclusions are presented in Section 5.
## 2 Methods
We start by introducing the model that describes the physics of blood flow in arteries.
### The mathematical model
Let \(\Omega\subset\mathbb{R}^{3}\) be the interior (lumen) of the thoracic aorta, represented in Figure 1, with its boundary \(\partial\Omega\) sub-divided as follows:
\[\partial\Omega=\Gamma_{in}\cup\Gamma_{wall}\cup\big{(}\cup_{k=1}^{K}\Gamma_{k}\big{)},\]
where \(\Gamma_{in}\) is the inlet boundary, \(\Gamma_{wall}\) the arterial wall, and \(\Gamma_{1},\ldots,\Gamma_{K}\) are the \(K\)-outlet boundaries. We consider that the blood flow in this domain is governed by the incompressible Navier-Stokes equations, with velocity \(\mathbf{u}(\mathbf{x},t)\) and pressure \(p(\mathbf{x},t)\):
\[\begin{cases}\rho\dfrac{\partial\mathbf{u}}{\partial t}+\rho\big{(}\mathbf{u} \cdot\nabla\big{)}\mathbf{u}-\mu\Delta\mathbf{u}+\nabla p=0\quad\text{in}\quad \Omega\times[0,T],\\ \nabla\cdot\mathbf{u}=0\quad\text{in}\quad\Omega\times[0,T],\\ \mathbf{u}=\mathbf{u}_{inlet}(t)\quad\text{on}\quad\Gamma_{in}\times[0,T],\\ \mathbf{u}=\mathbf{0}\quad\text{on}\quad\Gamma_{wall}\times[0,T],\\ \mu\dfrac{\partial\mathbf{u}}{\partial n}-p\mathbf{n}=-P_{k}(t)\mathbf{n} \quad\text{on}\quad\Gamma_{k}\times[0,T],\quad k=1,\ldots,K\end{cases} \tag{1}\]
where \(\rho\) is the blood density and \(\mu\) the dynamic viscosity of the fluid. Note that Dirichlet inflow and no-slip boundary conditions are adopted at \(\Gamma_{in}\) and \(\Gamma_{wall}\), respectively. Moreover, the pressure at the outlet \(P_{k}(t)\) is assumed to be given by the _three-element Windkessel_ model:
\[\begin{cases}P_{k}=R_{p,k}\ Q_{k}+\pi_{k},\\ Q_{k}=\int_{\Gamma_{k}}\mathbf{u}\cdot\mathbf{n},\\ C_{k}\dfrac{d\pi_{k}}{dt}+\dfrac{\pi_{k}}{R_{d,k}}=Q_{k}.\end{cases} \tag{2}\]
In the Windkessel model, \(R_{p,k}\) and \(R_{d,k}\) represent the resistance of the vasculature proximal and distal to \(\Gamma_{k}\), respectively, while \(C_{k}\) is the compliance of the distal vessels. The exterior normal vector of the outlet is represented by \(\mathbf{n}\).
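To make the 0D dynamics concrete, the following is a minimal sketch of one backward-Euler update of the three-element Windkessel state (the backward Euler scheme is also used for the reference solution below); the numerical values are placeholders, not the calibrated parameters of this study.

```python
def windkessel_step(pi, Q, Rp, Rd, C, dt):
    """One backward-Euler step of Eq. (2): C dpi/dt + pi/Rd = Q.

    pi: distal pressure at the previous time step
    Q:  current outlet flow, Q_k = integral of u·n over Gamma_k
    Returns the updated pi and the outlet pressure P_k = Rp*Q + pi.
    """
    pi_new = (pi + dt * Q / C) / (1.0 + dt / (Rd * C))  # implicit ODE update
    P = Rp * Q + pi_new                                  # proximal resistance term
    return pi_new, P

# Illustrative call with placeholder (CGS-like) values.
pi, P = windkessel_step(pi=1.0e5, Q=80.0, Rp=100.0, Rd=1500.0,
                        C=1.0e-4, dt=1.0e-3)
```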
In the special case of a stationary flow, all time derivatives vanish from both models, reducing the lower-dimensional system to an effective _one-element_ Windkessel boundary condition with no internal pressure variable \(\pi\). The equations of the Windkessel model reduce to:
\[\begin{cases}P_{k}=(R_{p,k}+R_{d,k})\ Q_{k}=R_{t,k}\ Q_{k},\\ Q_{k}=\int_{\Gamma_{k}}\mathbf{u}\cdot\mathbf{n}.\end{cases} \tag{3}\]
with the distal and proximal resistances absorbed in a total resistance parameter \(R_{t}\).
### Physics-Informed Neural Networks (PINNs)
We will use PINNs to solve the inverse problem of inferring the Windkessel model parameters \(R_{p,k},R_{d,k},C_{k}\quad\forall k=1,...,K\), and the velocity and pressure fields in the aorta from partial velocity measurements \(\mathbf{u}_{meas}\), obtained from PC-MRI and an average pressure measurement in the outlets \(\bar{p}_{meas}\). We represent the solution of the coupled system in Equation 1 and 2 by a neural network consisting of \(n\) feed-forward layers, as shown in Figure 2:
\[\left(\mathbf{u}(\mathbf{x},t),\ p(\mathbf{x},t)\right)=\mathcal{N}\mathcal{N} (\mathbf{x},t;\mathbb{W},\mathbf{b}) \tag{4}\]
where the neural network \(\mathcal{N}\mathcal{N}\) outputs the fields \(\mathbf{u}\) and \(p\), and takes as input the coordinate points in space and time, parametrized by weights (\(\mathbb{W}\)) and biases (\(\mathbf{b}\)). The values of \(\mathbb{W}\) and \(\mathbf{b}\) are obtained after training the network. To achieve this, we define a loss function \(\mathcal{L}_{tot}\) that encourages learning the measurements, the normalized physics equations (1a), (1b), and (2), and the no-slip boundary condition in (1d):
\[\mathcal{L}_{tot}=\mathcal{L}_{NS}+\mathcal{L}_{WK}+\mathcal{L}_{BC}+ \mathcal{L}_{u,data}+\mathcal{L}_{p,data}+\mathcal{L}_{gradp}. \tag{5}\]
where the individual loss components are defined as:
\[\mathcal{L}_{NS} = \lambda_{NS}\ \left|\left|\frac{\partial\mathbf{u}}{\partial t}+ \left(\mathbf{u}\cdot\nabla\right)\mathbf{u}-\frac{1}{Re}\Delta\mathbf{u}+ \nabla p\right|\right|_{\Omega}+\lambda_{NS}\ \left|\left|\nabla\cdot\mathbf{u}\right|\right|_{\Omega}, \tag{6}\] \[\mathcal{L}_{WK} = \lambda_{WK}\ \sum_{k=1}^{K}\left|\left|p_{k}-R_{p,k}Q_{k}-\pi_{k} \right|\right|_{\Gamma_{k}}+\left|\left|C_{k}\frac{d\pi_{k}}{dt}+\frac{\pi_{k} }{R_{d,k}}-Q_{k}\right|\right|_{\Gamma_{k}},\] (7) \[\mathcal{L}_{BC} = \lambda_{BC}\ ||\ \mathbf{u}\ ||_{\Gamma_{wall}},\] (8) \[\mathcal{L}_{u,data} = \lambda_{data}\ ||\ \mathbf{u}-\mathbf{u}_{meas}\ ||_{\Omega_{meas}},\] (9) \[\mathcal{L}_{p,data} = \lambda_{data}\ ||\ \bar{p}-\bar{p}_{meas}\ ||_{\Omega_{meas}},\] (10) \[\mathcal{L}_{gradp} = \lambda_{gradp}\sum_{k=1}^{K}\left|\left|\ \nabla p\ \right|\right|_{\Gamma_{k}}. \tag{11}\]
Figure 1: Aortic domain \(\Omega\). The Windkessel outlets are defined at \(\Gamma_{1}\) to \(\Gamma_{5}\) consisting of 2 resistors (\(R_{d}\), \(R_{p}\)) and 1 capacitor \(C\) each.
The last added term (11) is included to promote a null pressure gradient at the Windkessel outlets, since the coupling between the 3D and 0D systems assumes the pressure to be constant at the interface. In all cases, \(||\cdot||\) represents the \(L_{2}\) norm of the quantities. We make all equations dimensionless in the loss function to obtain normalized predictions of velocity and pressure, which improves the training procedure. We use the following normalization of the physical quantities:
\[\mathbf{x} \longrightarrow\mathbf{x}/L\] \[\mathbf{u} \longrightarrow\mathbf{u}/U\] \[t \longrightarrow t/(L/U)\] \[p \longrightarrow p/(\rho U^{2})\]
where \(U\) and \(L\) are a characteristic velocity and length scale of the system, respectively. Consequently, the Reynolds number, defined as \(Re=\frac{\rho UL}{\mu}\), appears in the Laplacian term of Equation (6). The Windkessel system is made dimensionless in the same way:
\[\pi \longrightarrow\pi/(\rho U^{2})\] \[Q \longrightarrow Q/(UL^{2})\] \[R_{d,p} \longrightarrow R_{d,p}/(\rho U/L^{2})\] \[C \longrightarrow C/(L^{3}/\rho U^{2})\]
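To illustrate how the dimensionless residuals entering \(\mathcal{L}_{NS}\) can be evaluated, the sketch below assembles the x-momentum and continuity residuals of Eq. (6) with automatic differentiation; the small stand-in network and the collocation tensors are assumptions, and the remaining momentum components are analogous.

```python
import torch

def ns_residuals(model, x, t, Re):
    """Dimensionless Navier-Stokes residuals at collocation points.

    model maps (x, t) -> (u, v, w, p); x: (N, 3), t: (N, 1).
    Returns the x-momentum residual and the divergence residual.
    """
    x.requires_grad_(True)
    t.requires_grad_(True)
    u, v, w, p = model(x, t).unbind(dim=1)

    def grad(f, wrt):
        return torch.autograd.grad(f, wrt, grad_outputs=torch.ones_like(f),
                                   create_graph=True)[0]

    du, dp = grad(u, x), grad(p, x)          # (N, 3) spatial gradients
    dv, dw = grad(v, x), grad(w, x)
    u_t = grad(u, t).squeeze(1)
    lap_u = sum(grad(du[:, i], x)[:, i] for i in range(3))   # Laplacian of u

    conv_u = u * du[:, 0] + v * du[:, 1] + w * du[:, 2]      # (u·∇)u, x-component
    mom_x = u_t + conv_u - lap_u / Re + dp[:, 0]
    div = du[:, 0] + dv[:, 1] + dw[:, 2]                     # ∇·u
    return mom_x, div

# Toy check with a small stand-in network in place of the PINN.
net = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 4))
model = lambda xx, tt: net(torch.cat([xx, tt], dim=1))
mom_x, div = ns_residuals(model, torch.randn(8, 3), torch.rand(8, 1), Re=1000.0)
loss_ns = (mom_x ** 2).mean() + (div ** 2).mean()  # contribution to L_NS
```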
To fix the blood pressure level of the coupled system, we also give to the inverse problem the average pressure curve (obtained from the reference solution) defined as:
\[\bar{p}=\sum_{k=0}^{K}\frac{1}{area(\Gamma_{k})}\int_{\Gamma_{k}}pdS. \tag{12}\]
Furthermore, we interpolate the latter curve in time to match the MRI measurement timestep, so that both measurement sets (velocity and mean pressure) are consistent in time. Note that for the steady problem, this curve consists of a single value representing the systolic pressure of the artery. In a clinical scenario, this mean pressure curve could be approximated by the pressure measured with a sphygmomanometer or an oscillometric blood pressure monitor, generally placed at the patient's upper arm or wrist [40; 41; 42].
Concerning the collocation points at which the term \(\mathcal{L}_{NS}\) is enforced, we use \(403,223\) points corresponding to a tetrahedral mesh generated for later usage. For each point, we generate a random time within the cardiac cycle as input to the network in the transient problem. Training is also performed by mini-batch, using only around \(1\%\) of these points at every iteration.
Furthermore, the coupling between the models requires the computation of a boundary integral (the velocity flow \(Q_{k}\) at \(\Gamma_{k}\)). For that reason, all the points at each outlet have to be evaluated at the same time instant. To optimize the computational resources, we evaluate the velocity of every outlet point only at the measurement times (i.e., \(22\) time instants within the cardiac cycle) and at the end of every epoch. With this set of points, the flow is estimated
using a finite element quadrature and interpolated in time using a cubic interpolator for later evaluation at any time within the cardiac cycle.
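A minimal sketch of this flow bookkeeping is given below, assuming lumped nodal quadrature weights and precomputed outlet normals; the toy velocity samples are placeholders, and SciPy's `CubicSpline` stands in for the cubic interpolator.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def outlet_flow(u_outlet, normals, qweights):
    """Quadrature of Q_k = integral of u·n over Gamma_k.

    u_outlet: (M, 3) velocities at the outlet points
    normals:  (M, 3) outward unit normals
    qweights: (M,)   quadrature weights (lumped nodal areas)
    """
    return float(np.sum(qweights * np.einsum("ij,ij->i", u_outlet, normals)))

# Toy outlet with M points; values are placeholders, not the aortic mesh.
M, rng = 50, np.random.default_rng(0)
normals = np.tile([0.0, 0.0, 1.0], (M, 1))   # outlet facing +z
qweights = np.full(M, 1.0 / M)               # unit-area outlet
t_meas = np.linspace(0.0, 0.8, 22)           # the 22 measurement times
Q_meas = np.array([outlet_flow(rng.normal(size=(M, 3)) * (1 + np.sin(2 * np.pi * t)),
                               normals, qweights) for t in t_meas])
Q_of_t = CubicSpline(t_meas, Q_meas)         # Q_k at any time within the cycle
```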
The transient problem presents its unique challenges compared to the steady problem. For this reason, we take into account some additional considerations that were not necessary for the steady problem:
* We use a vector potential representation of the velocity \(\mathbf{u}\) to hard-impose mass conservation of the solution. Consequently, the velocity is assumed to originate from the curl of a vector potential \(\Phi(\mathbf{x},t)\)[43] as: \[\mathbf{u}(\mathbf{x},t)=\nabla\times\Phi(\mathbf{x},t).\] (13) With this definition, the velocity field \(\mathbf{u}\) is always divergence-free (a minimal autograd sketch of this construction is given after this list). We find that this change of variable improves the quality of the results and the convergence of the method, at the expense of increasing the computational time.
* We tune each weight of the total loss function automatically following the algorithm presented in [44], using the \(L_{1}\) norm of the gradient of the physics loss. We perform the update every 10 epochs. The new values of the weights are exponentially averaged with their respective history following the rule: \[\lambda_{new,i}=(1-\alpha)\cdot\lambda_{old,i}+\alpha\cdot\frac{||\nabla \mathcal{L}_{phys}||_{1}}{\overline{\nabla\mathcal{L}_{i}}}\qquad\forall i=1,2,...,N_{L}\]
Figure 2: Feed-forward neural network architecture used for the transient problem. The network represents a vector potential \(\Phi\) and the pressure field. After physics-informed training, the output of the network is used for reconstruct the 3D velocity field and the Windkessel parameters of all the domain outlets.
where \(N_{L}\) is the number of loss terms. In all cases, we set \(\alpha\) to \(0.1\).
* We divide the training into two stages. First, we randomly initialize the Windkessel model parameters \(R_{p,k},R_{d,k},C_{k}\) and fix them; then we train only the network parameters \(\mathbb{W},\mathbf{b}\). In a second stage, we allow the optimizer to also change the Windkessel model parameters. We found that splitting the training in this way leads to better results, since the network first has time to adapt its representation of physical velocities and pressures and only then estimates the Windkessel parameters.
* We compute the compliance parameter \(C\) from the distal resistance \(R_{d}\) and an estimation of the decay time \(R_{d}\cdot C\). This time is obtained at the beginning of the training from the mean pressure curve used as an extra measurement. Furthermore, the decay time is computed by fitting an exponential curve between the valve closure time and the end of diastole, an interval in which the Windkessel model assumes a decay behavior modulated by \(\sim\exp(-t/R_{d}C)\). The decay time is assumed constant for all outlets of the aorta.
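As announced in the first item of the list, the following is a minimal autograd sketch of the divergence-free velocity construction of Eq. (13); the small stand-in network for \(\Phi\) is an assumption, not the architecture used in this work.

```python
import torch

def curl_velocity(phi_net, x, t):
    """Velocity u = curl(Phi): divergence-free by construction (Eq. 13).

    phi_net maps (x, t) -> (N, 3) vector potential Phi; x: (N, 3), t: (N, 1).
    """
    x.requires_grad_(True)
    phi = phi_net(x, t)

    def grad(f):
        return torch.autograd.grad(f, x, grad_outputs=torch.ones_like(f),
                                   create_graph=True)[0]

    d0, d1, d2 = grad(phi[:, 0]), grad(phi[:, 1]), grad(phi[:, 2])
    return torch.stack([d2[:, 1] - d1[:, 2],   # dPhi_z/dy - dPhi_y/dz
                        d0[:, 2] - d2[:, 0],   # dPhi_x/dz - dPhi_z/dx
                        d1[:, 0] - d0[:, 1]],  # dPhi_y/dx - dPhi_x/dy
                       dim=1)

# Toy usage with a small stand-in network for the potential.
net = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 3))
phi_net = lambda xx, tt: net(torch.cat([xx, tt], dim=1))
u = curl_velocity(phi_net, torch.randn(5, 3), torch.rand(5, 1))
```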
Finally, we obtained the weights and biases by minimizing the total loss function as:
\[\mathbb{W},\mathbf{b}=\arg\min_{W,b}\ \mathcal{L}_{tot}(W,b). \tag{14}\]
Figure 2 shows a schematic of the final architecture used for the transient problem. The minimization was performed using the ADAM optimizer [45] with the _swish_ activation function [46]. We estimate the velocity and pressure fields and the Windkessel parameters simultaneously. The Windkessel parameters are initialized randomly within a predefined interval between half and double their respective reference values and then evolve with the optimization. Five independent realizations are performed, with the network weights randomly initialized in each one. All cases are summarized in Table 1.
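As referenced in the first bullet above, a minimal sketch of the curl reconstruction in Eq. (13) via automatic differentiation is given below; the network `net` is a hypothetical PyTorch module whose first three outputs are the components of \(\Phi\), so this illustrates the change of variables rather than reproducing the released code.

```python
# Recover a divergence-free velocity u = curl(Phi) from the vector potential.
import torch

def velocity(net, xyzt):
    """xyzt: (batch, 4) leaf tensor of space-time points (x, y, z, t)."""
    xyzt = xyzt.requires_grad_(True)
    phi = net(xyzt)[:, :3]                  # first three outputs: Phi
    # grads[i][:, j] = d(Phi_i)/d(coordinate j); summing over the batch is
    # valid here because each output row depends only on its own input row.
    grads = [torch.autograd.grad(phi[:, i].sum(), xyzt, create_graph=True)[0]
             for i in range(3)]
    u = torch.stack([grads[2][:, 1] - grads[1][:, 2],   # dPhi_z/dy - dPhi_y/dz
                     grads[0][:, 2] - grads[2][:, 0],   # dPhi_x/dz - dPhi_z/dx
                     grads[1][:, 0] - grads[0][:, 1]],  # dPhi_y/dx - dPhi_x/dy
                    dim=1)
    return u                                # divergence-free by construction
```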
The training of all the experiments was done on an NVIDIA RTX-A6000. The codes were written in Python using the PyTorch library [47] and are available on GitHub at [https://github.com/yeyemedicen/PINNs-WK-MRI](https://github.com/yeyemedicen/PINNs-WK-MRI). All hyperparameters used are reported for both the stationary and transient problems in Table 2.
The normalized velocity and length values were tuned by hand, keeping in mind that the normalized quantities entering the PINNs workflow should ideally not be larger than unity. In the case of the transient problem, the normalized velocity had to be significantly reduced because of the lower velocities at the end of the cardiac cycle (diastole).
\begin{table}
\begin{tabular}{c c c} \hline \hline & Steady Problem & Transient Problem \\ \hline Estimation & \(\mathbf{u}\), \(p\), \(\left\{R_{tot}\right\}_{k=1}^{5}\) & \(\mathbf{u}\), \(p\), \(\left\{R_{p},R_{d},C_{d}\right\}_{k=1}^{5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Different quantities to estimate in the steady and transient problems.
### The Reference Solution
To create the reference solution, equations (1) and (2) are solved using the finite element method (FEM), discretizing in space using stabilized \(\mathbb{P}1/\mathbb{P}1\) elements, and in time using a backward Euler scheme. The system was solved using a non-incremental fractional step scheme detailed in Ref [22]. The initial conditions were set to:
\[\begin{cases}\mathbf{u}=\mathbf{0},\\ \pi_{k}=78.8\ \mathrm{mmHg}\quad\text{for}\quad k=1,\dots,K.\end{cases}\]
The values of \(\pi_{k}\) correspond approximately to the periodic state of the 3D-0D system, taken from Ref [48]. Finally, we assume a velocity profile at \(\Gamma_{\mathit{inlet}}\) of the form:
\[\mathbf{u}(\mathbf{x},t)_{\mathit{inlet}}=\mathbf{u}_{\mathit{stokes}}( \mathbf{x})f(t)\ \mathbf{n}, \tag{15}\]
where \(\mathbf{u}_{\mathit{stokes}}(\mathbf{x})\) is the solution of a Stokes problem, ensuring a parabolic-shaped profile already adapted to the domain, as shown in Figure 3(a). The function \(f(t)\) represents the time dependency of the inflow velocity and is chosen such that the total flow through \(\Gamma_{in}\) follows the curve in Figure 3(b), obtained from the Vascular Model Repository database [48]. The whole boundary condition is then added to the variational form as a penalization term \(A_{\mathit{inlet}}\) defined as:
\[A_{\mathit{inlet}}=\gamma\int_{\Gamma_{\mathit{inlet}}}(\mathbf{u}-\mathbf{u}_{\mathit{inlet}})\cdot\mathbf{v}\ dS \tag{16}\]
\begin{table}
\begin{tabular}{c c c} \hline \hline Hyperparameter & Steady Problem & Transient Problem \\ \hline Batchsize & \(1580\) & \(3000\) \\ Learning Rate & \(10^{-3}\) & \(10^{-3}\) \\ Learning Rate of Parameters & \(10^{-2}\) & \(10^{-2}\) \\ Learning Rate Scheduler & Yes & Yes \\ Hidden layers & \(7\) & \(7\) \\ No. neurons per layer & \(220\) & \(220\) \\ Hidden layers (\(\pi\)) & \(-\) & \(6\) \\ No. neurons per layer (\(\pi\)) & \(-\) & \(10\) \\ Activation Function & swish & swish \\ Epochs & \(1250\) & \(120+1250\) \\ Potential Vector Representation & No & Yes \\ Normalized Length (\(L\)) & \(1\) cm & \(0.5\) cm \\ Normalized Velocity (\(U\)) & \(5000\) cm/s & \(120\) cm/s \\ Loss weights & Manual & Automatic \\ \hline \(\lambda_{\mathit{phys}}\) & \(1.5\) & \(-\) \\ \(\lambda_{\mathit{data},u}\) & \(1.0\) & \(-\) \\ \(\lambda_{\mathit{data},p}\) & \(1.0\) & \(-\) \\ \(\lambda_{\mathit{BC}}\) & \(6.0\) & \(-\) \\ \(\lambda_{\mathit{windk}}\) & \(1.0\) & \(-\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Hyperparameters used for training the PINNs
with the parameter \(\gamma\) fixed at \(10^{5}\) gr/(cm\({}^{2}\cdot\) s).
The Windkessel parameters were also obtained from the aforementioned repository database. For the numerical values of these constants, see Table 3.
The computational domain corresponds to a thoracic aorta with a total stream-wise length of 10.8 cm and meshed with 558,112 tetrahedrons and 115,779 vertices. We assume blood has a density of 1.06 gr/cm\({}^{3}\) and a constant dynamic viscosity of 0.035 P. The time step was set to \(\tau=0.001\) s with a total run time of 0.66 s. We also added backflow stabilization in every system outlet [49].
Figure 4 shows the resulting velocity field at peak systole next to each outlet's flow rates and pressure curves.
### PC-MRI measurement generation
From the reference solution, we simulate a 2D phase-contrast magnetic resonance acquisition located at three axial planes covering the whole domain, as shown in Figure 5(a). The velocity field is first interpolated onto a voxel-like slice mesh with a resolution of \(1.0\times 1.0\) mm\({}^{2}\) and downsampled in time from \(1\) ms to \(30\) ms, following the typical timestep of a clinical acquisition. After that, two different complex magnetization vectors are defined as:
\[M_{meas}^{u} = M_{0}\ \exp(i\phi_{0}+i\pi u_{MRI}/venc), \tag{17}\] \[M_{meas}^{0} = M_{0}\ \exp(i\phi_{0}), \tag{18}\]
Figure 3: (a) Stokes-profile velocity at the inlet of the aorta. (b) Velocity flow at the inlet taken from the SimVascular dataset.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Parameter & \(\Gamma_{1}\) & \(\Gamma_{2}\) & \(\Gamma_{3}\) & \(\Gamma_{4}\) & \(\Gamma_{5}\) \\ \hline \(R_{p}\) (dyn\(\cdot\) s\(\cdot\) cm\({}^{-5}\)) & 713 & 713 & 602 & 689 & 98 \\ \hline \(R_{d}\) (dyn\(\cdot\) s\(\cdot\) cm\({}^{-5}\)) & 12023 & 12023 & 10143 & 11609 & 1650 \\ \hline \(C_{d}\) (dyn\({}^{-1}\cdot\) cm\({}^{5}\)) & 8.256e-5 & 8.256e-5 & 9.785e-5 & 8.55e-5 & 6.015e-4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Three-element Windkessel parameters for every outlet.
where \(M_{meas}^{u}\) is the velocity-encoded magnetization and \(M_{meas}^{0}\) is an extra measurement usually done to capture the reference phase \(\phi_{0}\). The interpolated velocity is represented by \(u_{MRI}\) while \(M_{0}\) is assumed constant within the vessel's lumen. Moreover, Gaussian noise is added to the magnetization components, producing a fixed signal-to-noise ratio of 18 dB. Consequently, a non-Gaussian noise distribution is induced in the reconstructed velocity, as is generally observed in PC-MRI measurements [50]. The final velocity field is then reconstructed using the following equation:
\[\mathbf{u}_{meas}=venc\ \frac{\angle\big{(}M_{meas}^{u}/M_{meas}^{0}\big{)}}{ \pi}, \tag{19}\]
with the symbol \(\angle\) representing the angle of the complex quantity, measured from \(-\pi\) to \(\pi\). The velocity encoding parameter (\(venc\)) is chosen as \(120\%\) of the maximum velocity to ensure alias-free measurements. Figure 5(b) shows the resulting slice measurements for the stationary problem. We perform a similar procedure for the transient flow solution, resulting in 22 phases along the cardiac cycle.
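The synthetic measurement pipeline of Eqs. (17)-(19) can be sketched as follows; the velocity slice, \(M_{0}\), \(\phi_{0}\), and the noise convention (SNR in dB taken as \(20\log_{10}\) of the amplitude ratio) are placeholder assumptions for illustration only.

```python
# Sketch: build velocity-encoded magnetizations, add complex Gaussian noise,
# and reconstruct the measured velocity from the phase difference.
import numpy as np

rng = np.random.default_rng(0)
u_mri = rng.uniform(-80, 80, size=(64, 64))            # placeholder slice (cm/s)
venc = 1.2 * np.abs(u_mri).max()                       # 120% of max velocity
M0, phi0 = 1.0, 0.3                                    # placeholder constants

M_u = M0 * np.exp(1j * (phi0 + np.pi * u_mri / venc))  # Eq. (17)
M_ref = M0 * np.exp(1j * phi0) * np.ones_like(u_mri)   # Eq. (18)

sigma = M0 / 10 ** (18.0 / 20)                         # noise std for 18 dB SNR
def noise():
    return sigma / np.sqrt(2) * (rng.standard_normal(u_mri.shape)
                                 + 1j * rng.standard_normal(u_mri.shape))
M_u, M_ref = M_u + noise(), M_ref + noise()

u_meas = venc * np.angle(M_u / M_ref) / np.pi          # Eq. (19)
```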
## 3 Results
### Steady problem
In this section, we present the results obtained for the stationary problem. For each experiment, statistics are reported over 5 independent realizations.
Figure 6 shows the evolution of the estimated total resistance over the training epochs for every Windkessel outlet. We normalize all values to ease the visualization of the parameter
Figure 4: Reference simulation results. The blood flow velocity at peak systole is depicted in (a); the flow rates and average pressure curves at each outlet are depicted in (b).
evolution. It can be seen that all parameters stabilize rapidly around their reference values, although oscillations are present; these reduce in the last stages of the training. Also, not all parameters show the same convergence rate: for instance, outlet number 3 takes around one-third of the total training epochs to converge to its reference value, while outlet number 5 needs approximately twice as many iterations to reach a similarly accurate result.
Table 4 shows the final values of the estimation, together with the mean estimated flow and pressure at every outlet. These quantities were computed from the mean of the velocities and pressures over the experiment realizations. We have found that the PINNs represent the solution at
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{8}{c}{Steady Problem} \\ \cline{2-10} & \multicolumn{2}{c}{\(Q\) (mL/s)} & \multicolumn{4}{c}{\(P\) (mmHg)} & \multicolumn{4}{c}{\(R_{tot}\) (\(10^{3}\) dyn\(\cdot\) s\(\cdot\) cm\({}^{-5}\))} \\ \cline{2-10} Bnd. & Ref. & Mean & Std & Ref & Mean & Std & Ref & Mean & Std \\ \hline \(\Gamma_{in}\) & -298.88 & -297.16 & 2.07 & 248.39 & 248.92 & 0.63 & \(-\) & \(-\) & \(-\) \\ \(\Gamma_{1}\) & 25.82 & 25.58 & 0.10 & 247.96 & 246.93 & 1.63 & 12.74 & 12.86 & 0.14 \\ \(\Gamma_{2}\) & 25.69 & 25.64 & 0.13 & 247.90 & 247.34 & 0.80 & 12.74 & 12.89 & 0.06 \\ \(\Gamma_{3}\) & 30.56 & 30.71 & 0.27 & 248.20 & 247.98 & 0.66 & 10.75 & 10.78 & 0.08 \\ \(\Gamma_{4}\) & 26.65 & 26.08 & 0.12 & 248.22 & 249.08 & 0.66 & 12.30 & 12.71 & 0.10 \\ \(\Gamma_{5}\) & 188.85 & 185.97 & 0.84 & 248.15 & 248.30 & 0.66 & 1.74 & 1.78 & 0.04 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Flow and mean pressure for each outlet of the aorta. The mean pressures were computed as \(P_{i}=\frac{1}{\mathrm{area}(\Gamma_{i})}\int_{\Gamma_{i}}p\,dS\), where \(i\) is the corresponding outlet number. The mean and standard deviation values reported are computed from the results across all seeds.
Figure 5: Synthetic PC-MRI measurements obtained from the simulation. The axial planes taken from the aorta are shown in (a), while the resulting PC-MRI velocity measurements are shown from a frontal view in (b).
the inlet remarkably well, even though no information was given at that boundary. Also, the estimated parameters match their corresponding reference values well despite the lack of volumetric data and the artifacts introduced into the slice measurements by the added noise and the spatiotemporal interpolation. Finally, Figure 7 shows the velocity streamlines computed from the inlet nodes of the mesh for both the reference and the mean estimated velocities. This figure shows that the PINNs' solution is similar to the reference velocity within the vessel's lumen and moderately deteriorates for the nodes closer to the walls.
### Transient problem
Figure 8 shows the evolution of the parameter estimation. As before, we normalize the parameters by their reference values and split them along the vertical axis to ease visualization. From the plots, we can see that the proximal resistances (\(R_{p}\)) show significantly more oscillation during the first epochs of the training than the distal resistances (\(R_{d}\)), though both converge relatively fast to their respective reference values within the first half of the training. However, some outlets (\(\Gamma_{3}\) and \(\Gamma_{4}\)) present a growing standard deviation of the estimated resistance parameters in the last epochs of the training. Table 5 reports the final values with their respective statistics.
Figure 9 shows the obtained flow and mean pressure curves. These curves show a general underestimation of the flow at the upper outlets of the aorta during systole, while the inlet and the descending aorta (\(\Gamma_{in}\) and \(\Gamma_{5}\)) were remarkably well captured over the whole cardiac cycle. Conversely, the mean pressure curves, computed as
\[P_{i}=\frac{1}{area(\Gamma_{i})}\int_{\Gamma_{i}}p\ dS,\]
Figure 6: Evolution of the parameter estimation statistics for the total resistance of each aortic outlet in the steady flow regime. All values were divided by their reference value and split vertically for better visualization. Dashed black lines represent the exact solution, and continuous colored lines represent the estimation mean. The colored area surrounding the curves is bounded by the minimum and maximum values found across the multiple realizations of the experiment.
tend to be overestimated in the upper aorta and underestimated in the descending portion of the domain. Moreover, an oscillating behavior was found in the flows and pressures at outlets \(\Gamma_{3}\) and \(\Gamma_{4}\), the outlets where the standard deviation of the estimated parameters is largest. A FEM simulation was also performed after the training, using the estimated parameters and inlet velocity profile from the realization with the median error among all experiments. This solution can be seen as a post-processing step of the PINNs' estimation and generally presents reduced oscillations compared with the raw estimated velocity and pressure fields. The curves obtained from the post-process FEM simulation are depicted as colored dashed lines.
Finally, streamlines computed from the nodes of \(\Gamma_{in}\) were calculated at two instants of the cardiac cycle and are shown in Figure 10. This was done for the reference velocity, the PINNs' estimation, and the FEM post-process simulated velocity. Generally, the neural network
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{8}{c}{Transient Problem} \\ \cline{2-9} & \(R_{p}\) (\(10^{3}\) dyn \(\cdot\) s \(\cdot\) cm\({}^{-5}\)) & \multicolumn{2}{c}{\(R_{d}\) (\(10^{3}\) dyn \(\cdot\) s \(\cdot\) cm\({}^{-5}\))} & \multicolumn{2}{c}{\(C_{d}\) (\(10^{-5}\) dyn\({}^{-1}\cdot\) cm\({}^{5}\))} \\ \cline{2-9} Bnd. & Ref. & Mean & Std & Ref & Mean & Std & Ref & Mean & Std \\ \hline \(\Gamma_{1}\) & 0.71 & 1.02 & 0.10 & 12.02 & 15.76 & 1.87 & 8.26 & 6.42 & 0.70 \\ \(\Gamma_{2}\) & 0.71 & 1.10 & 0.12 & 12.02 & 16.75 & 1.58 & 8.26 & 6.01 & 0.52 \\ \(\Gamma_{3}\) & 0.60 & 0.91 & 0.15 & 10.14 & 14.51 & 1.88 & 9.78 & 6.99 & 0.82 \\ \(\Gamma_{4}\) & 0.69 & 0.84 & 0.14 & 11.61 & 14.46 & 1.94 & 8.55 & 7.01 & 0.80 \\ \(\Gamma_{5}\) & 0.10 & 0.12 & 0.02 & 1.65 & 2.02 & 0.22 & 60.15 & 50.18 & 5.51 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Parameter estimation obtained on the transient-flow regime.
Figure 7: Velocity streamlines when the steady-state flow regime is imposed for (a) the reference solution and (b) the mean estimated velocity represented by the PINNs
can capture the main flow features at systole but not as well during diastole, where the velocities are lower and the velocity-to-noise ratio of the measurements is worse. When comparing the reference velocity against the FEM post-process solution, no significant gain in quality can be seen during systole. However, during diastole, the FEM solution captured the main flow features that the PINNs' estimation failed to represent, especially the vortices at the ascending aorta and the near-wall velocity in the entire domain.
## 4 Discussion
In this work, we developed a novel method to estimate hemodynamic parameters from sparse measurements using PINNs. We note that the original problem was divided into two different flow regimes: stationary and transient. We made this split to study the quality of the results as the problem grows in complexity.
The stationary problem can be seen as an inexpensive version of the transient problem; in clinical hemodynamics, it is usually studied when the computational time of the transient problem exceeds the limit imposed by the clinical application itself [51; 52]. In terms of the quality of the estimations, the results in Table 4 show that although all parameters were overestimated, the obtained values present a mean error of less than 2% when compared with their references. On the other hand, when looking at the reconstructed velocity streamlines in Figure 7, the reference and estimated fields are very similar, with some mild differences at the vessel wall. One reason for this difference is that FEM methods impose the boundary condition on the walls exactly, the velocity there being set exactly to zero, as happened with the reference solution.
Figure 8: Evolution of the parameter estimation statistics for each Windkessel outlet of the aorta in transient flow regime. All values were divided by their reference value and split vertically for better visualization. Dashed black lines represent the exact solution, and continuous color lines represent the estimation mean. The colored area surrounding the curves is determined by the minimum and maximum values found along the multiple realizations of the experiment.
However, in the case of the PINNs method presented here, the non-slip condition enters the optimization as a loss term and, consequently, is not exactly satisfied. This causes some of the velocity streamlines to exit the vessel through the walls. This effect is also present in the transient flow regime.
It is particularly interesting that while the velocity tends to be misrepresented near the vessel wall, the outflow values are not. We hypothesize that since the coupling between the Navier-Stokes and Windkessel models is in terms of pressures and flows [17], the extra flow condition in the total loss in Equation (7) acts as a reinforcement for the training. Consequently, the outlet velocity tends to rise in the lumen, compensating for the low velocity near the wall. This effect is amplified during diastole in the transient regime, as some of the upper aortic outlets do not perceive any velocity, as shown in Figure 9(b,e). One strategy to mitigate this effect could be introducing a hard constraint for the non-slip condition into the optimization, as done for a 2D Navier-Stokes system in Ref [53]. Another strategy could be relaxing the wall definition by introducing a _slip/transpiration_ condition, as was done
Figure 9: Flows (red) and mean pressure (sky blue) curves at every aortic outlet over time. Continuous lines represent the mean estimation, while the shaded area is bounded by the minimum and maximum values encountered across the experiment realizations. Black dashed lines correspond to the respective reference values, while colored dashed lines correspond to the curves obtained from the post-process FEM simulation.
in Nolte et al. [54] when studying the impact of segmentation errors on MRI blood flow measurements.
Another aspect to discuss in more detail is the use of the vector potential representation for the flow velocity in Equation (13). This strategy can be seen as a hard constraint, applied only to the transient problem, to exactly satisfy the mass-conservation term of the Navier-Stokes model, as was also done in Ref [55] for turbulent flows. We found that this change of variables improved the overall estimation of the method while increasing the computational time by a factor of 4. Also, it is known that the change of variables in Equation (13) presents a _gauge freedom_, meaning that any exact gradient of the form \(\nabla\psi\) added to \(\Phi\) will result in the same velocity profile. In this work, we did not explore how
Figure 10: Velocity streamlines of the transient problem for the reference solution (a and d), the mean velocity estimated by the PINNs (b and e), and the velocity obtained from a FEM simulation using the found parameters (c and f), at two time instants: \(t=0.12\ s\), corresponding to peak systole, and \(t=0.45\ s\), during mid-diastole.
different gauges could impact the performance of the inverse problem. Future work may include studying these gauges and how they can help enhance the properties of the optimization method [56].
The present methodology is not free of limitations and has to be seen as a first step in using neural networks to solve clinical real-world inverse problems. One limitation of this work is the lack of experiments with real clinical data. In this line, future work includes the use and assimilation of real patient MRI blood flow images, which we expect will require better strategies for dealing with such data, as previous works [22; 57] have done to mitigate the presence of artifacts and the poor quality of the images.
Finally, given the flexibility in defining forward and inverse problems within the PINNs workflow, this method could easily be extended to other clinically relevant tasks such as the denoising and undersampling of MRI blood flow measurements, which were addressed in [58; 59; 60] with a high dependence on a numerical solver and on model assumptions. In this line, the PINNs workflow can be seen as a good candidate with fewer restrictions on using contaminated, low-resolution data.
## 5 Conclusions
In this work, we presented how physics-informed neural networks (PINNs) can be used to estimate hemodynamic information from MRI-like images in a synthetic scenario, where a reduced-order model was coupled with the 3D Navier-Stokes equations to generate the blood flow images. The PINNs were able to solve the coupled system in two regimes, steady and transient flow, using simulated 2D MRI images, a mean pressure curve, the non-slip condition at the wall, and the model equations. Furthermore, at the end of the training, the algorithm was able to estimate the best-matching physical parameters, which in this case represent the elastic response of the vasculature adjacent to the study domain. The overall results show good accuracy when compared with the reference values in the steady regime and moderately worse results when dealing with the transient state, where the number of parameters is increased three-fold and the data dimensionality is also increased.
To summarize, the proposed framework provides an initial step toward truly personalizing hemodynamic models from clinical data using a neural network workflow. We envision patient-specific models enabling better diagnostics and treatments for various cardiovascular diseases.
## Acknowledgements
This work was funded by ANID Chile: Millennium Science Initiative Program ICN17_002 (IMFD), ICN2021_004 (iHealth) and NCN19_161 (ACIP), Fondecyt grant 11201250, FONDECYT-Iniciacion 11220816, Basal Funds for Center of Excellence FB210005 (CMM) and Fondecyt Postdoc grant 3230549. |
2303.04614 | Densely Connected $G$-invariant Deep Neural Networks with Signed
Permutation Representations | We introduce and investigate, for finite groups $G$, $G$-invariant deep
neural network ($G$-DNN) architectures with ReLU activation that are densely
connected-- i.e., include all possible skip connections. In contrast to other
$G$-invariant architectures in the literature, the preactivations of
the $G$-DNNs presented here are able to transform by \emph{signed} permutation
representations (signed perm-reps) of $G$. Moreover, the individual layers of
the $G$-DNNs are not required to be $G$-equivariant; instead, the
preactivations are constrained to be $G$-equivariant functions of the network
input in a way that couples weights across all layers. The result is a richer
family of $G$-invariant architectures never seen previously. We derive an
efficient implementation of $G$-DNNs after a reparameterization of weights, as
well as necessary and sufficient conditions for an architecture to be
``admissible''-- i.e., nondegenerate and inequivalent to smaller architectures.
We include code that allows a user to build a $G$-DNN interactively
layer-by-layer, with the final architecture guaranteed to be admissible. We
show that there are far more admissible $G$-DNN architectures than those
accessible with the ``concatenated ReLU'' activation function from the
literature. Finally, we apply $G$-DNNs to two example problems -- (1)
multiplication in $\{-1, 1\}$ (with theoretical guarantees) and (2) 3D object
classification -- finding that the inclusion of signed perm-reps
significantly boosts predictive performance compared to baselines with only
ordinary (i.e., unsigned) perm-reps. | Devanshu Agrawal, James Ostrowski | 2023-03-08T14:35:03Z | http://arxiv.org/abs/2303.04614v2 | Densely Connected \(\boldsymbol{G}\)-invariant Deep Neural Networks with Signed Permutation Representations
###### Abstract
We introduce and investigate, for finite groups \(\boldsymbol{G}\), \(\boldsymbol{G}\)-invariant deep neural network (\(\boldsymbol{G}\)-DNN) architectures with ReLU activation that are densely connected- i.e., include all possible skip connections. In contrast to other \(\boldsymbol{G}\)-invariant architectures in the literature, the preactivations of the \(\boldsymbol{G}\)-DNNs presented here are able to transform by _signed_ permutation representations (signed perm-reps) of \(\boldsymbol{G}\). Moreover, the individual layers of the \(\boldsymbol{G}\)-DNNs are not required to be \(\boldsymbol{G}\)-equivariant; instead, the preactivations are constrained to be \(\boldsymbol{G}\)-equivariant functions of the network input in a way that couples weights across all layers. The result is a richer family of \(\boldsymbol{G}\)-invariant architectures never seen previously. We derive an efficient implementation of \(\boldsymbol{G}\)-DNNs after a reparameterization of weights, as well as necessary and sufficient conditions for an architecture to be "admissible"- i.e., nondegenerate and inequivalent to smaller architectures. We include code that allows a user to build a \(\boldsymbol{G}\)-DNN interactively layer-by-layer, with the final architecture guaranteed to be admissible. We show that there are far more admissible \(\boldsymbol{G}\)-DNN architectures than those accessible with the "concatenated ReLU" activation function from the literature. Finally, we apply \(\boldsymbol{G}\)-DNNs to two example problems--(1) multiplication in \(\{-1,1\}\) (with theoretical guarantees) and (2) 3D object classification--finding that the inclusion of signed perm-reps significantly boosts predictive performance compared to baselines with only ordinary (i.e., unsigned) perm-reps.
deep learning, group theory, neural network, skip connection, symmetry
## 1 Introduction
When fitting a deep neural network (DNN) to a target function that is known to be \(\boldsymbol{G}\)-invariant with respect to a group \(\boldsymbol{G}\), it only makes sense to enforce \(\boldsymbol{G}\)-invariance on the DNN as prior knowledge. With the rise of geometric deep learning (Bronstein et al., 2021), this is becoming an increasingly common practice, finding applications in various domains including computer vision, where the class of an object in an image may be independent of its orientation (Veeling et al., 2018), or point clouds that are permutation-invariant (Qi et al., 2017). In general-purpose \(\boldsymbol{G}\)-invariant and \(\boldsymbol{G}\)-equivariant architectures such as \(\boldsymbol{G}\)-equivariant convolutional neural networks (\(\boldsymbol{G}\)-CNNs) (Cohen and Welling, 2016) and \(\boldsymbol{G}\)-equivariant graph neural networks (Maron et al., 2019), it is standard to require every linear layer to be \(\boldsymbol{G}\)-equivariant. Moreover, in case of the rectified linear unit (ReLU) activation, every linear layer is \(\boldsymbol{G}\)-equivariant only with respect to permutation representations. It is
commonly assumed that the \(\mathbf{G}\)-invariant architectures constructed in this way are sufficient for consideration (Cohen et al., 2019), but it is unclear if this layerwise construction covers all possible ways of enforcing \(\mathbf{G}\)-invariance on a fully-connected feedforward DNN, and it remains an open conjecture (Kondor and Trivedi, 2018). While these architectures and others are certainly sufficient for universal approximation of \(\mathbf{G}\)-invariant functions (Maron et al., 2019; Ravanbakhsh, 2020; Kicki et al., 2020), the way in which \(\mathbf{G}\)-invariance is enforced is an aspect of the neural architecture and thus likely plays a key role in determining the inductive bias of the model and hence its generalization power on a given problem.
It has recently been discovered that there are, in fact, more ways to enforce \(\mathbf{G}\)-invariance on shallow ReLU neural networks than just permutations on the hidden neurons (Agrawal and Ostrowski, 2022). In their work, Agrawal and Ostrowski (2022) exploit the identity
\[\text{ReLU}(-\mathbf{x})=\text{ReLU}(\mathbf{x})-\mathbf{x} \tag{1}\]
to show that \(\mathbf{G}\)-invariance can be achieved even if \(\mathbf{G}\) acts on the hidden neurons via a "signed permutation representation" (signed perm-rep), resulting in novel \(\mathbf{G}\)-invariant shallow architectures previously unknown. In an attempt towards a generalization to deep architectures, we observe that the linear term in Eq. (1) can be interpreted as a skip connection; this suggests that skip connections may be the key to novel deep \(\mathbf{G}\)-invariant architectures.
In this paper, as a partial generalization of the work of Agrawal and Ostrowski (2022), we investigate \(\mathbf{G}\)-invariant deep neural network (\(\mathbf{G}\)-DNN) architectures that are "densely connected"- i.e., include all possible skip connections. We note that (non-\(\mathbf{G}\)-invariant) densely connected neural networks exist in the literature and have found immense success especially in medical imaging (Huang et al., 2017). We use ReLU activation, and every preactivation layer is still a \(\mathbf{G}\)-equivariant function of the network input; however, in contrast to previous architectures such as the \(\mathbf{G}\)-CNN, the individual weight matrices of a \(\mathbf{G}\)-DNN need not be \(\mathbf{G}\)-equivariant. Instead, in each layer, only the concatenation of the weight matrix with all skip connections from previous layers need be \(\mathbf{G}\)-equivariant. This dense structure allows us to use _signed_ perm-reps to enforce \(\mathbf{G}\)-equivariance on the preactivation layers, thus granting us access to a much larger family of \(\mathbf{G}\)-invariant architectures than seen previously. Practically, we hypothesize the existence of real-world problems for which some of these novel \(\mathbf{G}\)-DNN architectures encode useful inductive bias.
Implementation of \(\mathbf{G}\)-DNNs is nontrivial, as the group representation by which the weight matrix (concatenated with all skip connections) in each layer transforms is itself a function of the weights in previous layers. That is, due to the skip connections and in contrast to previous architectures such as the \(\mathbf{G}\)-CNN, \(\mathbf{G}\)-invariance is enforced in a way that couples weights across layers. We show, however, that there is a reparameterization of \(\mathbf{G}\)-DNNs in which the equivariance conditions of the weight matrices decouple and admit a simple implementation (Thm. 6 (a)). For efficiency, rather than transforming back to the original weights, we express and implement the forward pass of a \(\mathbf{G}\)-DNN directly in terms of these reparameterized weights (Thm. 6 (b)).
The remainder of the paper is organized as follows: In Sec. 2, we review signed perm-reps and state our central hypothesis with some theoretical support. In Sec. 3, we introduce and describe the implementation of \(\mathbf{G}\)-DNN architectures. We additionally derive necessary and sufficient conditions for a \(\mathbf{G}\)-DNN architecture to be "admissible" or nondegenerate (Thm. 7), in the sense that (1) no neuron is missing an input and (2) no two ReLU neurons
can be combined into a single ReLU neuron or a skip connection. We include code1 that allows a user to build a \(\G\)-DNN interactively such that the final architecture is guaranteed to be admissible. We also verify that batch normalization placed after ReLU is compatible with \(\G\)-DNNs out-of-the-box. In Sec. 4, we test \(\G\)-DNNs on two examples--(1) a simple mathematical function and (2) a real-world computer vision problem--and demonstrate that signed perm-reps can, in fact, carry useful inductive bias. Finally, in Sec. 5, we end with conclusions, implications, and future outlook.
Footnote 1: Code for our implementation and for reproducing all results in this paper is available at: [https://github.com/dagrawa2/gdnm_code](https://github.com/dagrawa2/gdnm_code).
## 2 Signed permutation representations
### Preliminaries
We begin by introducing some notions and notation that will be used throughout this paper. This section is an abbreviation of Secs. 2.1-2.2 of Agrawal and Ostrowski (2022), and we refer readers to all of Sec. 2 of that paper for details of the below material.
Throughout this paper, let \(G\) be a finite group of \(m\times m\) orthogonal matrices. Let \(\mathcal{P}(n)\) be the group of \(n\times n\) permutation matrices and \(\mathcal{Z}(n)\) the group of \(n\times n\) diagonal matrices with diagonal entries \(\pm 1\). Let \(\mathcal{PZ}(n)=\mathcal{P}(n)\mathcal{Z}(n)\), which is the group of _signed permutations_- i.e., the group of all permutations and reflections of the standard orthonormal basis \(\{e_{1},\ldots,e_{n}\}\). This group is also called the hyperoctahedral group in the literature (Baake, 1984).
A _signed permutation representation_ (signed perm-rep) of degree \(n\) of \(G\) is a homomorphism \(\rho:G\mapsto\mathcal{PZ}(n)\). Two signed perm-reps \(\rho,\rho^{\prime}\) are said to be _equivalent_ or _conjugate_ if there exists \(A\in\mathcal{PZ}(n)\) such that \(\rho^{\prime}(g)=A\rho(g)A^{-1}\ \forall g\in G\)- i.e., if they are related by a change of basis. A signed perm-rep \(\rho\) is said to be _reducible_ if it is equivalent to a direct sum of signed perm-reps of smaller degrees- i.e., if there exists \(A\in\mathcal{PZ}(n)\) such that \(A\rho(\cdot)A^{-1}\) is simultaneously block-diagonal with at least two blocks. The signed perm-rep is said to be _irreducible_ (signed perm-irrep for short) otherwise.2
Footnote 2: A useful characterization is that a signed perm-rep \(\rho\) is irreducible iff for every \(i,j=1,\ldots,n\), there exists \(g\in G\) such that \(\rho(g)e_{i}=\pm e_{j}\).
The signed perm-irreps of \(G\) can be completely classified up to equivalence in terms of certain pairs of subgroups of \(G\). For every pair of subgroups \(K\leq H\leq G\), \(|H:K|\leq 2\), define the signed perm-rep \(\rho_{HK}:G\mapsto\mathcal{PZ}(n)\), \(n=|G/H|\), to be the induced representation
\[\rho_{HK}=I_{H}^{G}\sigma,\text{ where }\sigma:H\mapsto\{-1,1\}\mid\ker( \sigma)=K.\]
Then \(\rho_{HK}\) is irreducible, and every signed perm-irrep is equivalent to some \(\rho_{HK}\). Moreover, \(\rho_{HK}\) and \(\rho_{H^{\prime}K^{\prime}}\) are equivalent iff \((H^{\prime},K^{\prime})=(gHg^{-1},gKg^{-1})\) for some \(g\in G\). We can thus always understand a "signed perm-irrep" to mean \(\rho_{HK}\) for some appropriate subgroups \(H\) and \(K\), and conversely we will always understand the notation \(\rho_{HK}\) to mean the above construction. For alternative (but equivalent) ways to think about the \(\rho_{HK}\), see Sec. 2 of Agrawal and Ostrowski (2022).
Every signed perm-irrep is either type 1 or type 2. A signed perm-rep is _type 1_ if it is equivalent to an _ordinary permutation representation_ (ordinary perm-rep) \(\pi:G\mapsto\mathcal{P}(n)\);
it is _type 2_ otherwise. Sign flips in a type 1 signed perm-rep are thus artifacts as they can be removed by a change of basis. An important characterization is that a signed perm-irrep \(\rho_{HK}\) has type \(|H:K|\).
Finally, as general notation, throughout this paper let \(I_{n}\) denote the \(n\times n\) identity matrix and \(\mathbf{0}_{n}\) (resp. \(\mathbf{1}_{n}\)) the \(n\)-dimensional vector with all elements \(0\) (resp. \(1\)).
### Central hypothesis
Previous works in the equivariant deep learning literature, to our knowledge, primarily employ type 1 signed perm-reps--and in particular, ordinary perm-reps--to enforce layer-wise \(G\)-equivariance in ReLU networks. The purpose of this paper is to show that type 2 signed perm-reps can also be implemented for this purpose (although it is trickier as it requires layers to be coupled), and the central hypothesis of this work is that type 2 signed perm-reps can indeed be beneficial for DNN performance. In this section, we motivate this hypothesis with theory and discussions that suggest that type 2 signed perm-reps can help boost expressive power without necessarily sacrificing generalization power.
Our first theorem states that two inequivalent signed perm-irreps induced from different reps of the same subgroup \(H\leq G\) correspond to \(G\)-equivariant matrices that are in a sense orthogonal.3
Footnote 3: All proofs can be found in the supplementary material.
**Theorem 1** (Corollary to Prop. 6 of Agrawal and Ostrowski (2022)): _Let \(\rho_{HK_{1}}\) and \(\rho_{HK_{2}}\) be inequivalent signed perm-irreps of degree \(n\) of \(G\). For each \(i\in\{1,2\}\), let \(W_{i}\in\mathbb{R}^{n\times m}\) such that \(\rho_{HK_{i}}(g)W_{i}=W_{i}g\ \forall g\in G\). Then there exists \(P\in\mathcal{P}(n)\) such that_
\[\operatorname{diag}(PW_{1}W_{2}^{\top})=0.\]
Our second theorem describes how a signed perm-rep can be "unraveled" into an ordinary perm-rep of twice the degree, by essentially mapping sign flips to transpositions of two dimensions. This unraveling is particularly meaningful for type 2 signed perm-irreps, which remain irreducible even after unraveling; in contrast, type 1 signed perm-irreps merely unravel into reducible copies of themselves. Let \(\mathcal{H}\) be the elementwise Heaviside step function, where we let \(\mathcal{H}(0)=0\).
**Theorem 2**: _Let \(\rho:G\mapsto\mathcal{PZ}(n)\) be a signed perm-rep. Then:_
1. _The function_\({}^{4}\) \[\pi_{\rho}(g)=\mathcal{H}\left(\begin{bmatrix}1&-1\\ -1&1\end{bmatrix}\otimes\rho(g)\right)\] _defines an ordinary perm-rep_ \(\pi_{\rho}:G\mapsto\mathcal{P}(2n)\)_._ Footnote 4: In the expression for \(\pi_{\rho}\), the operation \(\otimes\) is the Kronecker product.
2. _Let_ \(W\in\mathbb{R}^{n\times m}\) _such that_ \(\rho(g)W=Wg\ \forall g\in G\)_. Then_ \[\pi_{\rho}(g)\begin{bmatrix}W\\ -W\end{bmatrix}=\begin{bmatrix}W\\ -W\end{bmatrix}g\quad\forall g\in G.\]
3. _Suppose_ \(\rho\) _is irreducible. Then_ \(\pi_{\rho}\) _is irreducible iff_ \(\rho\) _is type 2._
4. _Suppose_ \(\rho\) _is type 2 irreducible and_ \(\rho=\rho_{HK}\)_. Then_ \(\pi_{\rho}=\rho_{KK}\)_._
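As an illustration (ours, not from the original paper), the following is a minimal numerical check of Theorem 2 on the simplest type 2 example: the sign-flip representation of \(\mathbb{Z}_{2}\) of degree 1, where the unraveling turns the flip into a transposition.

```python
# Toy check of Theorem 2 for G = Z_2 acting on R by a sign flip
# (a type 2 signed perm-irrep of degree 1); H(0) = 0 as in the main text.
import numpy as np

def unravel(rho_g):
    """pi_rho(g) = H([[1,-1],[-1,1]] (x) rho(g)), elementwise Heaviside."""
    K = np.kron(np.array([[1, -1], [-1, 1]]), rho_g)
    return (K > 0).astype(float)

rho = {1: np.array([[1.0]]), -1: np.array([[-1.0]])}   # rho(g) = [g], g = +-1
pi = {g: unravel(m) for g, m in rho.items()}

# (a) pi_rho lands in the permutation matrices and is a homomorphism:
assert np.array_equal(pi[-1] @ pi[-1], pi[1])
assert all((m.sum(0) == 1).all() and (m.sum(1) == 1).all() for m in pi.values())

# (b) if rho(g) W = W g, then pi_rho(g) [W; -W] = [W; -W] g:
W = np.array([[2.5]])                                  # any W works when n = m = 1
stacked = np.vstack([W, -W])
for g in (1, -1):
    assert np.allclose(pi[g] @ stacked, stacked * g)
print("Theorem 2 checks pass for the toy Z_2 example.")
```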
To understand Thms. 1-2 and their implications for the expressive and generalization powers of a \(G\)-DNN with type 2 signed perm-reps, consider the example group \(G\) of all \(6\times 6\) cyclic permutation matrices (see Sec. 4.1 of Agrawal and Ostrowski (2022) for a detailed discussion of this example). For this example, the signed perm-irreps of \(G\) are completely characterized in terms of their degrees and types; we thus let \(\rho_{(n,t)}\) denote the irrep \(\rho_{HK}\) where \(n=\frac{6}{|H|}\) is the degree and \(t=|H:K|\) is the type. Alternatively, if \(G\cong\mathbb{Z}_{6}\), then \(\rho_{(n,t)}\) denotes the irrep \(\rho_{HK}\) where \(H\cong\mathbb{Z}_{\frac{6}{n}}\) and \(K\cong\mathbb{Z}_{\frac{6}{nt}}\). In this notation, the signed perm-irreps of \(G\) are \(\rho_{(6,1)}\), \(\rho_{(3,1)}\), \(\rho_{(3,2)}\), \(\rho_{(2,1)}\), \(\rho_{(1,1)}\), and \(\rho_{(1,2)}\).
For each irrep \(\rho_{(n,t)}\), let \(W_{(n,t)}\in\mathbb{R}^{n\times m}\) such that \(\rho_{(n,t)}(g)W_{(n,t)}=W_{(n,t)}g\ \forall g\in G\). The irreps \(\rho_{(3,1)}\) and \(\rho_{(3,2)}\) satisfy the hypotheses of Thm. 1, and indeed we see--in terms of their weightsharing patterns--that each row of \(W_{(3,1)}\) is orthogonal to the corresponding row of \(W_{(3,2)}\) (Fig. 1). In practice, this means that if we were given a dataset sampled from a \(G\)-invariant shallow neural network (\(G\)-SNN)
\[f(x)=a\mathbf{1}^{\top}\operatorname{ReLU}(Wx+b\mathbf{1})+c^{\top}x+d,\]
with \(W=W_{(3,2)}\) as the ground truth weight matrix, then a \(G\)-SNN with a weight matrix having the weightsharing pattern of \(W_{(3,1)}\) would not have the expressive power to fit the ground truth, regardless of the number of "channels" (i.e., independent copies of \(W_{(3,1)}\)) used. With a limited budget of three hidden neurons per channel, type 2 signed perm-reps thus help to increase expressive power.
On the other hand, if we double our budget of hidden neurons, then Thm. 2 suggests \(W_{(6,1)}\) has the capacity to express \(W_{(3,2)}\); indeed, if we constrain the first row of \(W_{(6,1)}\)
Figure 1: Weightsharing patterns of equivariant matrices for three example signed perm-irreps of the group \(G\) of \(6\times 6\) cyclic permutation matrices (see main text for details). In each pattern, weights of the same color and texture (solid vs. hatched) are constrained to be equal; weights of the same color but different texture are constrained to be opposites (colors should not be compared across different matrices).
to have the same weightsharing pattern as the first row of \(\mathbf{W_{(3,2)}}\) (Fig. 1), then \(\mathbf{W_{(6,1)}}\) effectively becomes equivalent to \(\mathbf{W_{(3,2)}}\). The problem is, however, that \(\mathbf{W_{(6,1)}}\) can similarly be constrained to match \(\mathbf{W_{(3,1)}}\), and with probability \(\mathbf{1}\) (e.g., under a Gaussian), \(\mathbf{W_{(6,1)}}\) is equivalent to neither \(\mathbf{W_{(3,1)}}\) nor \(\mathbf{W_{(3,2)}}\). Thus, while this approach allows us to use a type 1 signed perm-rep with the capacity to express the ground truth, it comes at the cost of generalization power, as \(\mathbf{W_{(6,1)}}\) may have multiple configurations consistent with a finite training set.
In sum, type 2 signed perm-reps may help to refine the range of expressive powers available to us; they help to express functions different from type 1 signed perm-reps of the same degree without sacrificing generalization power the way larger type 1 signed perm-reps would.
## 3 \(\mathbf{G}\)-invariant deep neural networks
### Parameterization redundancies
In this section, we define densely connected deep neural networks, or simply deep neural networks, and we list some of the so-called parameterization redundancies of such networks that will be key to enforcing \(\mathbf{G}\)-invariance. All notation introduced here will persist throughout the paper.
Let \(f:\mathbb{R}^{m}\to\mathbb{R}\) be a _deep neural network_ (DNN) of depth \(d\) constructed as follows: Let \(\{n_{1},\ldots,n_{d+1}\}\) be a set of \(d+1\) positive integers where5 \(n_{1}=m\) and \(n_{d+1}=1\), and let \(N_{i}=n_{1}+\cdots+n_{i}\) for every \(i\in\{1,\ldots,d\}\). Let \(W^{(i)}\in\mathbb{R}^{n_{i+1}\times N_{i}}\) and \(b^{(i)}\in\mathbb{R}^{n_{i+1}}\) for every \(i\in\{1,\ldots,d\}\). Define \(f^{(1)}:\mathbb{R}^{m}\to\mathbb{R}^{n_{1}}\) by \(f^{(1)}(x)=x\). For every \(i\in\{1,\ldots,d-1\}\), define \(f^{(i+1)}:\mathbb{R}^{m}\to\mathbb{R}^{N_{i+1}}\) by
Footnote 5: Note the input dimension \(n_{1}=m\) is the same number as the degree of the matrix group \(\mathbf{G}\) in Sec. 2.
\[f^{(i+1)}(x)=\begin{bmatrix}\operatorname{ReLU}(W^{(i)}f^{(i)}(x)+b^{(i)})\\ f^{(i)}(x)\end{bmatrix}. \tag{2}\]
Then, let \(f(x)=W^{(d)}f^{(d)}(x)+b^{(d)}\).
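A minimal sketch of this densely connected forward pass (our illustration, with unconstrained weights and hypothetical layer sizes) is:

```python
# Densely connected ReLU forward pass of Eq. (2): each layer sees the
# concatenation of all previous hidden features and the network input.
import torch

def dense_forward(x, Ws, bs):
    """x: (batch, m); Ws[i]: (n_{i+2}, N_{i+1}); bs[i]: (n_{i+2},)."""
    f = x                                    # f^(1)(x) = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = torch.relu(f @ W.T + b)          # new hidden features
        f = torch.cat([h, f], dim=1)         # keep all skip connections
    return f @ Ws[-1].T + bs[-1]             # f(x) = W^(d) f^(d)(x) + b^(d)
```

For example, with \(m=4\) and hidden widths \(n_{2}=3\), \(n_{3}=2\), the weight shapes would be \((3,4)\), \((2,7)\), and \((1,9)\), matching \(N_{1}=4\), \(N_{2}=7\), \(N_{3}=9\).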
We thus define a DNN to be a feedforward ReLU network having all possible _skip connections_. We call the \(\mathbf{W^{(i)}}\)_weight matrices_, their rows _weight vectors_, and the \(\mathbf{b^{(l)}}\)_bias vectors_. The numbers \(n_{2},\ldots,\mathbf{n_{d}}\) are the widths of the hidden layers, and the numbers \(\mathbf{N_{2}},\ldots,\mathbf{N_{d}}\) are the cumulative widths accounting for the skip connections. Note the traditional definition of a DNN can be recovered by setting all skip connections to zero.
Our first proposition below establishes a set of parameterization redundancies (i.e., reparameterizations of the DNN leaving the input-output function invariant) enjoyed by the above DNN architecture. For every positive integer \(n\), let \(\mathcal{C}(n)\) be the group of \(n\times n\) diagonal matrices with positive diagonal entries.
**Proposition 3**: _Let \(i\in\{1,\ldots,d-1\}\), \(C\in\mathcal{C}(n_{i+1})\), \(P\in\mathcal{P}(n_{i+1})\), and \(Z\in\mathcal{Z}(n_{i+1})\). Then the DNN \(f\) is invariant under the transformation_
\[W^{(i)} \rightarrow CPZW^{(i)}\] \[b^{(i)} \rightarrow CPZb^{(i)}\] \[W^{(i+1)} \rightarrow W^{(i+1)}\begin{bmatrix}(CP)^{-1}&\mathcal{H}(-Z)W^{(i)}\\ 0&I_{N_{i}}\end{bmatrix}\] \[b^{(i+1)} \rightarrow b^{(i+1)}+W^{(i+1)}\begin{bmatrix}\mathcal{H}(-Z)b^{(i)}\\ 0\end{bmatrix}.\]
Proposition 3 is the key to understanding how signed perm-reps can be used in \(\mathsf{G}\)-DNNs. Specifically, if we want the parameters \((W^{(i)},b^{(i)})\) of the \(i\)th layer to transform by a signed perm-rep, then the parameters \((W^{(i+1)},b^{(i+1)})\) of the subsequent layer must transform as in Prop. 3 to compensate and maintain invariance. This idea is the basis of the next section.
### \(\mathsf{G}\)-invariant architectures
The following lemma gives a sufficient condition for each \(f^{(i)}\) (the subnetwork comprising the first \(i-1\) layers of \(f\) and acting as input of the \(i\)th layer) to be \(\mathsf{G}\)-equivariant; the sufficient condition takes the form of equivariant constraints on the network parameters.
**Lemma 4**: _Let \(\{\rho^{(1)},\ldots,\rho^{(d)}\}\) be a set of signed perm-reps \(\rho^{(i)}:\mathsf{G}\rightarrow\mathsf{PZ}(n_{i+1})\) with \(\rho^{(d)}\) the trivial rep. For each \(i\in\{1,\ldots,d\}\), let \(\pi^{(i)}:\mathsf{G}\rightarrow\mathcal{P}(n_{i+1})\) and \(\zeta^{(i)}:\mathsf{G}\rightarrow\mathcal{Z}(n_{i+1})\) be the unique functions such that \(\rho^{(i)}(g)=\pi^{(i)}(g)\zeta^{(i)}(g)\forall g\in G\). Let \(\{\psi^{(1)},\ldots,\psi^{(d)}\}\) be a set of reps \(\psi^{(i)}:\mathsf{G}\rightarrow\mathcal{GL}(N_{i})\) defined as:_
\[\psi^{(1)}(g) =g\] \[\psi^{(i+1)}(g) =\begin{bmatrix}\pi^{(i)}(g)&\frac{1}{2}(W^{(i)}\psi^{(i)}(g)- \pi^{(i)}(g)W^{(i)})\\ 0&\psi^{(i)}(g)\end{bmatrix}\forall i\in\{1,\ldots,d-1\}.\]
_Suppose \(\rho^{(i)}(g)W^{(i)}=W^{(i)}\psi^{(i)}(g)\) and \(\rho^{(i)}(g)b^{(i)}=b^{(i)}\) for all \(g\in G\) and \(i\in\{1,\ldots,d\}\). Then \(f^{(i)}(g\mathsf{x})=\psi^{(i)}(g)f^{(i)}(\mathsf{x})\) for all \(g\in G\), \(\mathsf{x}\in\mathbb{R}^{m}\), and \(i\in\{1,\ldots,d\}\)._
Since \(f^{(d)}(gx)=\psi^{(d)}(g)f^{(d)}(x)\) and \(W^{(d)}\psi^{(d)}(g)=W^{(d)}\) for all \(g\in G\) and \(x\in\mathbb{R}^{m}\), we see that Lemma 4 gives a sufficient condition for \(f\) to be a \(G\)_-invariant deep neural network_ (\(G\)-DNN). Note the sequence of signed perm-reps \(\{\rho^{(1)},\ldots,\rho^{(d)}\}\) completely determines the \(G\)-DNN architecture (i.e., depth, number of hidden neurons per layer, and weightsharing patterns), and hence we will also refer to such sequences of reps as \(G\)_-DNN architectures_. Observe that the rep \(\psi^{(i)}\) depends on the weight matrix \(W^{(i-1)}\), and hence the equivariance condition on \(W^{(i)}\) introduces a coupling between \(W^{(i-1)}\) and \(W^{(i)}\). This coupling makes it difficult to implement the equivariance constraints on the weight matrices directly, and we thus proceed to find a reparameterization admitting uncoupled equivariant weight matrices. To do this, we first state another lemma below, which gives an explicit formula for the reps \(\psi^{(i)}\) (as opposed to a recursive formula) and shows each \(\psi^{(i)}\) to be equivalent to a direct sum of layerwise reps.
For every \(i\in\{1,\ldots,d\}\), decompose \(W^{(i)}\) into blocks as
\[W^{(i)}=\left[W^{(i)}_{i}\ \ W^{(i)}_{i-1}\ \ \cdots\ \ W^{(i)}_{1}\right],\]
where \(W^{(i)}_{j}\in\mathbb{R}^{n_{i+1}\times n_{j}}\). Define the block matrix
\[A^{(i)}=\begin{bmatrix}I_{n_{i+1}}&-\frac{1}{2}W^{(i)}_{i}&-\frac{1}{2}W^{(i)}_{i-1}&\cdots&-\frac{1}{2}W^{(i)}_{1}\\ 0&I_{n_{i}}&-\frac{1}{2}W^{(i-1)}_{i-1}&\cdots&-\frac{1}{2}W^{(i-1)}_{1}\\ 0&0&I_{n_{i-1}}&\cdots&-\frac{1}{2}W^{(i-2)}_{1}\\ 0&0&0&\ddots&\vdots\\ 0&0&0&0&I_{n_{1}}\end{bmatrix}.\]
Define the rep \(\Pi^{(i)}:G\mapsto\mathcal{O}(N_{i+1})\) by
\[\Pi^{(i)}(g)=\operatorname{diag}(\pi^{(i)}(g),\,\pi^{(i-1)}(g),\,\ldots,\, \pi^{(1)}(g),g).\]
**Lemma 5**: _For every \(i\in\{1,\ldots,d-1\}\), we have6_
Footnote 6: The notation \(A^{(i)-1}\) is shorthand for \((A^{(i)})^{-1}\), the matrix inverse of \(A^{(i)}\).
\[\psi^{(i+1)}(g)=A^{(i)-1}\Pi^{(i)}(g)A^{(i)}\quad\forall g\in G.\]
Let \(\pi^{(0)}:G\mapsto\mathcal{P}(n_{1})\) be the trivial rep, and let \(A^{(0)}=I_{n_{1}}\). Then Lemma 5 holds for \(i=0\) as well.
We are now ready to state this section's key theorem, which gives a reparameterization of a \(G\)-DNN in which it admits uncoupled equivariant weight matrices.
**Theorem 6**: _Let \(f\) be \(G\)-invariant, with its parameters \(\{(W^{(1)},b^{(1)}),\ldots,(W^{(d)},b^{(d)})\}\) satisfying the conditions of Lemma 4 with respect to the signed perm-reps \(\{\rho^{(1)},\ldots,\rho^{(d)}\}\)._
1. _For every_ \(i\in\{1,\ldots,d\}\)_, there exists_ \(V^{(i)}\in\mathbb{R}^{n_{i+1}\times N_{i}}\) _with block structure_ \[V^{(i)}=\left[V^{(i)}_{i}\ \ V^{(i)}_{i-1}\ \ \cdots\ \ V^{(i)}_{1}\right]\] _where_ \(V^{(i)}_{j}\in\mathbb{R}^{n_{i+1}\times n_{j}}\)_, such that_ \[\rho^{(i)}(g)V^{(i)}_{j}=V^{(i)}_{j}\pi^{(j-1)}(g)\quad\forall g\in G,\] \[W^{(i)}=V^{(i)}A^{(i-1)}.\]
2. _For every_ \(i\in\{1,\ldots,d\}\)_, define_ \(g^{(i)}:\mathbb{R}^{m}\to\mathbb{R}^{n_{i+1}}\) _by_ \(g^{(i)}(x)=W^{(i)}f^{(i)}(x)\) _and_ \(h^{(i)}:\mathbb{R}^{m}\to\mathbb{R}^{N_{i}}\) _by_ \(h^{(i)}(x)=A^{(i-1)}f^{(i)}(x)\)_. Then we have the recursion_ \[g^{(i)}(x)=V^{(i)}h^{(i)}(x)\quad\forall i\in\{1,\ldots,d\},\] \[h^{(i+1)}(x)=\begin{bmatrix}\operatorname{ReLU}(g^{(i)}(x)+b^{(i)})-\frac{1}{2}g^{(i)}(x)\\ h^{(i)}(x)\end{bmatrix}\quad\forall i\in\{1,\ldots,d-1\}.\]
The \(V^{(i)}\) are the latent weight matrices, and in Thm. 6 (a) we see that each \(V^{(i)}\) satisfies an equivariance condition that is easy to implement. The apparent weight matrices \(W^{(i)}\) can then be reconstructed as \(W^{(i)}=V^{(i)}A^{(i-1)}\). In practice, however, there is no need to perform this reconstruction; instead, we implement the recursion in Thm. 6 (b) as the forward pass of the \(G\)-DNN, where the final network output is \(f(x)=g^{(d)}(x)+b^{(d)}\). Observe that the recursion is given directly in terms of the \(V^{(i)}\) as opposed to the \(W^{(i)}\). This recursion leverages the block-triangular structure of \(A^{(i-1)}\) in the transformation \(W^{(i)}=V^{(i)}A^{(i-1)}\).
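A minimal sketch of this reparameterized forward pass (ours, not the authors' released implementation) is given below; the latent weights `Vs` are assumed to already satisfy their blockwise equivariance constraints from Thm. 6 (a).

```python
# Forward pass of Thm. 6 (b), written directly in terms of the latent
# weights V^(i); the skip connections accumulate in h^(i).
import torch

def gdnn_forward(x, Vs, bs):
    """x: (batch, m); Vs[i]: (n_{i+2}, N_{i+1}); bs[i]: (n_{i+2},)."""
    h = x                                              # h^(1)(x) = x
    for V, b in zip(Vs[:-1], bs[:-1]):
        g = h @ V.T                                    # g^(i) = V^(i) h^(i)
        h = torch.cat([torch.relu(g + b) - 0.5 * g, h], dim=1)
    return h @ Vs[-1].T + bs[-1]                       # f(x) = g^(d)(x) + b^(d)
```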
### Admissible architectures
Theorem 6 tells us how to construct a \(G\)-DNN, but it first requires us to select a sequence of reps or "architecture" \(\rho^{(1)},\ldots,\rho^{(d)}\). Selecting the optimal sequence is the problem of \(G\)-invariant neural architecture design, which is beyond the scope of this work. At the very least, however, we would like to avoid sequences that correspond to "degenerate" network architectures. In particular, we require that (1) every row of each weight matrix \(W^{(i)}\) be nonzero and (2) no two rows of the augmented weight matrix \([W^{(i)}\mid b^{(i)}]\) be parallel. The first condition ensures there are no neurons disconnected from all previous layers, and the second condition ensures no two hidden neurons in a given layer can be combined into a single hidden neuron or skip connection. We call a sequence of reps \(\rho^{(1)},\ldots,\rho^{(d)}\) an _admissible architecture_ if it admits a \(G\)-DNN with weight matrices satisfying these two conditions. Below, Thm. 7 provides a characterization of admissible architectures that we implement in practice. First, however, we introduce additional notions and notation.
For every \(\boldsymbol{A}\in\mathbb{R}^{m\times m}\), let \(\mathtt{St}_{\boldsymbol{G}}\)(\(\boldsymbol{A}\)) be the stabilizer subgroup
\[\mathtt{St}_{\boldsymbol{G}}(\boldsymbol{A})=\{g\in\boldsymbol{G}:g \boldsymbol{A}=\boldsymbol{A}\},\]
and for every finite orthogonal matrix group \(\Gamma\), let \(P_{\Gamma}\) be the orthogonal projection operator onto the vector subspace pointwise-invariant under the action of \(\Gamma\):
\[P_{\Gamma}=\frac{1}{|\Gamma|}\sum_{g\in\Gamma}g.\]
Let \(\mathcal{S}(\boldsymbol{G})\) denote the set of all subgroups of \(\boldsymbol{G}\). Then, we define the function \(\boldsymbol{\theta}:\{(H,K,J)\in\mathcal{S}(\boldsymbol{G})^{3}:|H:K|\leq 2 \}\mapsto\mathcal{S}(\boldsymbol{G})\) by
\[\boldsymbol{\theta}(H,K,J) =\pi_{J}^{-1}[\mathtt{st}_{\pi_{J}(G)}(P_{\pi_{J}(K)}-(|H:K|-1)P_ {\pi_{J}(H)})]\] \[=\{g\in\boldsymbol{G}:\pi_{J}(g)(P_{\pi_{J}(K)}-(|H:K|-1)P_{\pi_{J }(H)})=P_{\pi_{J}(K)}-(|H:K|-1)P_{\pi_{J}(H)}\},\]
where the ordinary perm-rep \(\pi_{J}:G\mapsto\mathcal{P}(|G/J|)\) is defined as usual- i.e., defined to be equivalent to the action of \(G\) on \(G/J\). Note \(\pi_{J}\) is defined only up to conjugation by a perm-matrix, since no ordering on the cosets in \(G/J\) is specified. It turns out, however, that \(\theta\) is invariant under conjugation of \(\pi_{J}\), and more generally \(\theta\) is invariant and equivariant with respect to certain conjugations of the input subgroups (see Prop. 8 in App. B.3).
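A toy numerical sketch (ours, not the authors' code) of the projection operator \(P_{\Gamma}\) and the stabilizer computation at the heart of \(\theta\) follows, for the smallest nontrivial case \(G\cong\mathbb{Z}_{2}\) realized as \(2\times 2\) permutation matrices, with \(H=G\), \(K\) trivial, and \(J=K\) (so \(\pi_{J}\) is the regular rep).

```python
# Compute M = P_{pi_J(K)} - (|H:K| - 1) P_{pi_J(H)} and the set of group
# elements whose action fixes M; for this example, theta(H, K, J) = K.
import numpy as np

I2 = np.eye(2)
swap = np.array([[0.0, 1.0], [1.0, 0.0]])

def projector(gamma):
    """P_Gamma = (1/|Gamma|) * sum of the matrices in Gamma."""
    return sum(gamma) / len(gamma)

pi_J = {"e": I2, "g": swap}            # regular rep of G = Z_2
H = [I2, swap]                         # pi_J(H) = pi_J(G)
K = [I2]                               # pi_J(K): trivial subgroup
index = 2                              # |H : K|

M = projector(K) - (index - 1) * projector(H)
theta = [name for name, m in pi_J.items() if np.allclose(m @ M, M)]
print(theta)                           # ['e']: theta(H, K, J) = K
```

The output equals \(K\), which is exactly the pattern required by the first admissibility condition in Thm. 7 below.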
Let \(\boldsymbol{f}:\mathbb{R}^{m}\mapsto\mathbb{R}\) be a \(\boldsymbol{G}\)-DNN with sequence of signed perm-reps \(\boldsymbol{\rho^{(1)}},\ldots,\boldsymbol{\rho^{(d)}}\). For every \(i\in\{1,\ldots,d\}\), the signed perm-rep \(\boldsymbol{\rho^{(i)}}\) admits the following decomposition into irreducibles:
\[\rho^{(i)}=\bigoplus_{j=1}^{r^{(i)}}k_{j}^{(i)}\rho_{j}^{(i)},\qquad\rho_{j}^{(i)}=\rho_{H_{j}^{(i)}K_{j}^{(i)}}.\]
The irreps \(\rho^{(i)}_{1},\ldots,\rho^{(i)}_{r^{(i)}}\) are inequivalent, and each \(k^{(i)}_{j}\) is a positive integer, where we define the notation
\[k^{(i)}_{j}\rho^{(i)}_{j}=\bigoplus_{k=1}^{k^{(i)}_{j}}\rho^{(i)}_{j}.\]
We say the \(i\)th layer of the \(G\)-DNN \(f\) has \(r^{(i)}\) distinct _irreps_ and \(k^{(i)}_{1}+\cdots+k^{(i)}_{r^{(i)}}\) _channels_. Now for every \(i\in\{1,\ldots,d\}\), we define the functions \(\phi^{(i)}:\{(H,K)\in\mathcal{S}(G)^{2}:|H:K|\leq 2\}\mapsto\mathcal{S}(G)\) recursively by
\[\phi^{(1)}(H,K)=\operatorname{St}_{G}(P_{K}-(|H:K|-1)P_{H}),\]
\[\phi^{(i+1)}(H,K)=\phi^{(i)}(H,K)\cap\bigcap_{j=1}^{r^{(i)}}\theta(H,K,H^{(i)}_{j})\quad\forall i\in\{1,\ldots,d-1\}.\]
Following from the equivariance of \(\theta\) (Prop. 8) and the fact that inner automorphisms respect subgroup intersections, we have the important property that each \(\phi^{(i)}\) is equivariant with respect to pairwise conjugation:
\[\phi^{(\mathbf{i})}(gHg^{-1},gKg^{-1})=g\phi^{(\mathbf{i})}(H,K)g^{-1}\forall g\in G.\]
We are ready to state the theorem characterizing single-channel admissible architectures; note the following is a generalization of Thm. 4a of Agrawal and Ostrowski (2022).
**Theorem 7**: _Maintain the notation introduced in the last paragraph, and suppose \(k^{(\mathbf{i})}_{j}=\mathbf{1}\forall i,j\) (single-channel architecture). Then the architecture7\(\{\rho^{(\mathbf{1})},\ldots,\rho^{(\mathbf{d})}\}\) is admissible iff the following conditions hold:_
Footnote 7: By definition of \(G\)-DNN architecture, \(\rho^{(\mathbf{d})}\) is trivial; see Lemma 4 and the discussion immediately proceeding it.
1. \(\phi^{(\mathbf{i})}(H^{(\mathbf{i})}_{j},K^{(\mathbf{i})}_{j})=K^{(\mathbf{i})}_{j}\forall i,j\).
2. _If_ \(H^{(\mathbf{1})}_{j}=G\) _for some_ \(j\)_, then_ \(P_{G}\neq\mathbf{0}\)_._8__
Footnote 8: This condition is trivially satisfied if \(G\) is a permutation matrix group.
Theorem 7 gives us a practical way to build admissible \(G\)-DNN architectures layer-by-layer as follows: Suppose we have already built the first \(i\) layers- i.e., we have selected the signed perm-reps \(\rho^{(1)},\ldots,\rho^{(i)}\) satisfying Thm. 7. Then the function \(\phi^{(i+1)}\) is defined. We enumerate all signed perm-irreps \(\rho_{HK}\), up to equivalence,9 such that \(\phi^{(i+1)}(H,K)=K\), and we then select a subset to comprise \(\rho^{(i+1)}\). In terms of implementation, the computation of \(\phi^{(i)}(H,K)\) boils down to the computation of \(\theta(H,K,H^{(i-1)}_{j})\) for each \(j\), which can be accomplished using Alg. 1 (see App. B.3 for details). We should think of the outputs \(\theta(H,K,J)\) as entries of a precomputed table with a row for each \((H,K)\) (up to pairwise conjugation) and a column for each \(J\) (up to conjugation). We give additional implementation details in the next section, including how we accommodate multiple input and output channels per layer.
The number of admissible architectures can be significantly less than the total number of architectures. For example, we consider one particular permutation representation of every group of order 8 up to isomorphism, following the constructions described in App. C.1 of Agrawal and Ostrowski (2022). For each group, we only count the architectures corresponding to sequences of irreps of strictly decreasing degree, and we report the fraction of these that are admissible (Table 1). Observe that the reduction from the total number of architectures to only the admissible ones is often significant, which could perhaps be exploited in a future implementation of \(G\)-invariant neural architecture search. Understanding why all architectures of maximum depth are admissible requires further investigation.
### Additional remarks
**Channels** The \(G\)-DNN supports multiple channels in a way generalizing the notion from traditional convolutional neural networks. As already mentioned in Sec. 3.3, the \(i\)th layer of the \(G\)-DNN \(f\) is said to have \(k^{(i)}_{j}\) output channels, or copies of the irrep \(\rho^{(i)}_{j}\), and \(k^{(i-1)}_{j}\) input channels from the irrep \(\rho^{(i-1)}_{j}\). In the simpler case where \(\rho^{(i)}\) contains the same number \(k^{(i)}\) of copies of each of its constituent irreps and the input \(x\) has \(k^{(0)}\) channels, the \(i\)th layer can be said to have \(k^{(i)}\) output channels and \(k^{(i-1)}\) input channels. In practice, we implement the \(G\)-DNN assuming only one copy of each irrep \(\rho^{(i)}_{j}\), and then we regard each element of the input \(x\) as a \(k^{(0)}\)-dimensional vector and each element of the latent weight matrix block \(V^{(i)}_{j}\) (see Thm. 6 (a) for notation) as a \(k^{(i)}\times k^{(i-1)}\) matrix.
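As a concrete illustration of this bookkeeping, the following minimal NumPy sketch (our own code and shapes, not the paper's implementation) attaches a \(k^{(i)}\times k^{(i-1)}\) matrix to each entry of a latent block and applies it to a multi-channel input.

```
import numpy as np

# Minimal sketch: a latent block with p rows and q columns carries a
# (k_out, k_in) matrix per entry, stored as an array of shape
# (p, q, k_out, k_in); a k_in-channel input is an array of shape (q, k_in).
# All names and sizes here are illustrative assumptions.
p, q, k_out, k_in = 4, 6, 3, 2
V = np.random.randn(p, q, k_out, k_in)   # channelized latent block
x = np.random.randn(q, k_in)             # multi-channel input

y = np.einsum("pqoi,qi->po", V, x)       # contract input index and channels
assert y.shape == (p, k_out)
```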
**Batch normalization** It is possible to apply batch normalization (batchnorm) immediately after ReLU in a \(G\)-DNN without breaking \(G\)-invariance. That batchnorm is compatible with \(G\)-DNNs out-of-the-box, and in particular with type 2 signed perm-reps, is not obvious; we verify the compatibility in Prop. 10 (see App. B.4). The proof relies on the facts that (1) the first \(n_{i}\) columns of \(W^{(i)}\) and \(V^{(i)}\) are equal and (2) the first \(n_{i}\) columns of \(V^{(i)}\) sum to zero if \(\rho^{(i)}\) is a type 2 signed perm-irrep.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
Depth & \(C_{8}\) & \(C_{2}\times C_{4}\) & \(C_{2}^{3}\) & \(D_{4}\) & \(Q_{8}\) \\ \hline
2 & 5/5 & 8/15 & 11/43 & 14/21 & 9/9 \\
3 & 8/8 & 30/62 & 93/434 & 65/104 & 20/20 \\
4 & 4/4 & 48/48 & 392/392 & 84/84 & 12/12 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Ratio of the number of admissible \(G\)-DNN architectures to the total number of architectures for every depth and every group \(G\), \(|G|=8\), up to isomorphism. Only architectures corresponding to sequences of irreps of strictly decreasing degree are considered.
## 4 Examples
### Binary multiplication
Our first example is an exact result and demonstrates that \(G\)-DNNs with type 2 signed perm-reps occur even in the context of simple mathematical operations.
**Example 1**: _For \(d\geq 3\), let \(m=2^{d}\) and_
\[\mathcal{X}=\{x\in\{0,1\}^{m}:x_{2i-1}+x_{2i}=1\;\forall i\in\{1,\dots,m/2\}\}.\]
_For \(\{e_{1},\dots,e_{m}\}\) the standard orthonormal basis, let \(G<\mathcal{P}(m)\) be the subgroup of all even permutation matrices that only transpose \(e_{2i-1}\) and \(e_{2i}\). Define the function \(f:\mathcal{X}\mapsto\{-1,1\}\) by_
\[f(x)=\prod_{i=1}^{m/2}(x_{2i-1}-x_{2i}).\]
_Then \(f\) admits an expression as a \(G\)-DNN of depth \(d\) where:_
1. _For_ \(i\in\{1,\dots,d\}\) _and_ \(j\in\{1,\dots,i\}\)_, the blocks_ \(V_{j}^{(i)}\) _of the latent weight matrices (see Thm._ 6 _(a)) are given by_ \[V_{j}^{(i)}=\begin{cases}I_{m/2^{i+1}}\otimes\begin{bmatrix}1&-1&1&-1\\ 1&-1&-1&1\end{bmatrix},&\text{ if }1\leq i=j<d\\ \begin{bmatrix}1&-1\end{bmatrix},&\text{ if }i=j=d\\ 0,&\text{ otherwise.}\end{cases}\]
2. _For_ \(i\in\{1,\dots,d-1\}\)_, the_ \(i\)_th layer transforms by the signed perm-rep_ \[\rho^{(i)}=\begin{cases}\bigoplus_{j=1}^{m/2^{i+1}}\rho_{H_{j}^{(i)}K_{j}^{(i)}},&\text{ if }i\leq d-2\\ \rho_{H_{1}^{(d-1)}K_{1}^{(d-1)}}\oplus\rho_{H_{1}^{(d-1)}K_{1}^{(d-1)}},&\text{ if }i=d-1,\end{cases}\] _where_ \[K_{j}^{(1)} =\{g\in G:g\nu_{j}=\nu_{j}\}\] \[H_{j}^{(1)} =\{g\in G:g\nu_{j}=\pm\nu_{j}\},\] _with_ \(\nu_{j}=e_{4j-3}-e_{4j-2}+e_{4j-1}-e_{4j}\)_, and_ \[K_{j}^{(i+1)} =H_{2j-1}^{(i)}\cap H_{2j}^{(i)}\] \[H_{j}^{(i+1)} =(H_{2j-1}^{(i)}\cap H_{2j}^{(i)})\cup((G\setminus H_{2j-1}^{(i)})\cap(G\setminus H_{2j}^{(i)})),\] _for_ \(i\in\{1,\dots,d-2\}\)_. The rep_ \(\rho^{(d)}\) _is trivial as usual._
Here, \(f(x)\) is the product of \(\frac{m}{2}\) binary elements in \(\{-1,1\}\), but where the elements are each represented as 2D one-hot vectors and are then concatenated into the \(m\)-dimensional input \(x\). From the perspective of multiplication of elements in \(\{-1,1\}\), the group \(G\) corresponds to an even number of sign flips, which clearly leaves the final product invariant. Example 1 gives an explicit construction of a \(G\)-DNN that implements the function \(f\). Each layer of the \(G\)-DNN partitions its input into pairs and takes each of their products; the network thus iteratively coarse-grains the input until the final scalar product is returned.10 Each latent weight matrix of the \(G\)-DNN has a block-diagonal structure and no latent skip connections; however, by Thm. 6 (a), the \(G\)-DNN does have skip connections in terms of the apparent weight matrices. All signed perm-irreps in the network, except \(\rho^{(d)}\) which must be trivial, are type 2, and the image of each one is isomorphic to the Klein four-group. The image of the penultimate rep \(\rho^{(d-1)}\) is also isomorphic to the Klein four-group, but the rep itself is not irreducible and decomposes into two copies of a type 2 scalar irrep.
Footnote 10: Although \(f\) is a simple function, we were unable to construct a \(G\)-DNN simpler than the one given in Ex. 1 that fits \(f\); in particular, we found no shallower \(G\)-invariant architecture that (1) has \(G\)-equivariant preactivations and (2) does not have an exponential number of hidden neurons per layer. For example, one could compute \(f(x)\) with a shallow network that first maps \(x\) linearly into \(\{-1,1\}^{\frac{m}{2}}\), then takes the sum \(\mathbb{1}^{\top}x\), and finally returns the parity of the sum using a suitable combination of ReLU neurons; however, the function \(x\rightarrow\mathbb{1}^{\top}x\) is not \(G\)-equivariant.
We investigate Ex. 1 empirically by generating the complete dataset \(\{(x,f(x)):x\in\mathcal{X}\}\) for \(m=16\). Since \(f(x)=\pm 1\), we regard the estimation of \(f\) as a binary classification problem. We use a random 20% of the dataset (with class stratification) for training and the rest for validation. We instantiate the "type 2" architecture with the type 2 signed perm-irreps in Ex. 1 (b) and randomly initialize its weights. We compare the type 2 architecture to two type 1 baselines: (1) the "type 1" architecture obtained by sending11 \(\rho_{HK}\rightarrow\rho_{HH}\) (see Thm. 1) and (2) the "unraveled" architecture obtained by sending \(\rho_{HK}\rightarrow\rho_{KK}\) (see Thm. 2 (d)).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
Architecture & Initial train & Initial val & Final train & Final val \\ \hline
Type 1 & \(5.11\pm 4.68\) & \(5.11\pm 4.68\) & \(0.71\pm 0.04\) & \(0.71\pm 0.04\) \\
Type 2 & \(1.33\pm 0.60\) & \(1.33\pm 0.60\) & \(0.00\pm 0.00\) & \(0.00\pm 0.00\) \\
Unraveled (random init) & \(11.36\pm 12.34\) & \(11.36\pm 12.34\) & \(0.71\pm 0.03\) & \(0.71\pm 0.03\) \\
Unraveled (type 2 init) & \(1.33\pm 0.60\) & \(1.33\pm 0.60\) & \(0.70\pm 0.00\) & \(0.70\pm 0.00\) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Binary cross-entropy losses (mean and standard deviation over 24 random initialization seeds) of four \(G\)-DNN architectures (see main text) on the binary multiplication problem. We report both the training loss and validation loss before training (initial) and after 5 epochs of training (final). Training and validation losses are equal because each of the two class labels corresponds to exactly one \(G\)-orbit, over which each \(G\)-DNN is constant, regardless of the dataset split. All values correspond to a classification accuracy of 50%, except the final losses of zero for the type 2 architecture, which correspond to 100% accuracy.
The type 1, type 2, and unraveled architectures are thus analogous to the three weight-sharing patterns in Fig. 1 (see Sec. 2.2); i.e., the type 1 architecture has the same number of hidden neurons as the type 2 architecture but expresses different functions, and the unraveled architecture has double the number of hidden neurons and the capacity to express both of the two smaller architectures as well as much more.
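To make the setup concrete, here is a minimal sketch (our own code, not the authors' released implementation) that enumerates the complete dataset \(\{(x,f(x)):x\in\mathcal{X}\}\) for \(m=16\):

```
import itertools
import numpy as np

# Each of the m/2 = 8 signs s_i in {-1, +1} is encoded as the one-hot pair
# (x_{2i-1}, x_{2i}) with s_i = x_{2i-1} - x_{2i}; f(x) is the product of
# the signs.
m = 16
xs, ys = [], []
for signs in itertools.product([-1, 1], repeat=m // 2):
    x = np.zeros(m, dtype=int)
    for i, s in enumerate(signs):
        if s == 1:
            x[2 * i] = 1      # (1, 0) encodes +1
        else:
            x[2 * i + 1] = 1  # (0, 1) encodes -1
    xs.append(x)
    ys.append(int(np.prod(signs)))

X, y = np.stack(xs), np.array(ys)   # 2^8 = 256 samples, labels in {-1, +1}
assert X.shape == (256, 16) and set(y) == {-1, 1}
```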
We trained all architectures for 5 epochs with the Adam optimizer with minibatch size 64, learning rate 0.01, and learning rate decay 0.99 per step. Starting at 50% classification accuracy, the type 2 architecture quickly achieves 100% training and validation accuracy as well as a final binary cross-entropy loss of 0.00 (Table 2). All other architectures remain stuck at 50% accuracy even after training. To explain these results, we first note that in the type 2 architecture, the latent weight matrices for \(i\in\{1,\ldots,d-1\}\) are constrained to have the form
\[V_{j}^{(i)}=\begin{cases}I_{m/2^{i+1}}\otimes\begin{bmatrix}u_{j}^{(i)}&-u_{j}^{(i)}&v_{j}^{(i)}&-v_{j}^{(i)}\\ u_{j}^{(i)}&-u_{j}^{(i)}&-v_{j}^{(i)}&v_{j}^{(i)}\end{bmatrix},&\text{if }1\leq i=j<d\\ 0,&\text{if }1\leq j<i<d,\end{cases}\]
where the \(u_{j}^{(i)}\) and \(v_{j}^{(i)}\) are free learnable parameters. In the type 1 architecture, the negative signs in front of \(v_{j}^{(i)}\) are omitted, and the off-diagonal blocks are no longer zero but are instead constrained in the same way as the diagonal blocks. As a result, the very first linear transformation \(g^{(1)}(x)=V^{(1)}x\) in the architecture is invariant to every transposition \((x_{2i-1},x_{2i})\rightarrow(x_{2i},x_{2i-1})\), not just even numbers of them; the rest of the architecture is thus unable to distinguish the two classes of inputs, resulting in a flat 50% accuracy. The type 1 architecture simply does not have the capacity to solve the binary classification problem.
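The constrained diagonal blocks above are easy to materialize; the following sketch (illustrative only, with \(u\) and \(v\) standing in for the free scalars) builds one such block via a Kronecker product.

```
import numpy as np

# Sketch of a type 2 diagonal block V_j^{(i)} (case i = j < d) from the
# display above; u and v play the role of the free learnable parameters.
def latent_block(m, i, u, v):
    pattern = np.array([[u, -u,  v, -v],
                        [u, -u, -v,  v]])
    return np.kron(np.eye(m // 2 ** (i + 1)), pattern)

B = latent_block(16, 1, u=0.7, v=-1.2)
assert B.shape == (8, 16)   # m / 2^{i+1} = 4 blocks, each of shape 2 x 4
```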
In contrast to the type 1 and type 2 architectures, the latent weight matrices \(V_{j}^{(i)}\) of the unraveled architecture are not even constrained to be block-diagonal, and the \(V_{j}^{(i)}\) for \(j<i\) are not constrained to be zero. The unraveled architecture thus has about \(6.5\times\) the number of free learnable parameters compared to the type 2 architecture, and it must learn the block-diagonal weight structure from data alone. Based on the training loss, the complete failure of the unraveled architecture is likely due to a severe trainability issue. To confirm the unraveled architecture indeed has the capacity to express the solution, and in particular the type 2 architecture, we loaded the trained weights of the type 2 architecture into the unraveled architecture by an appropriate mapping and confirmed that the resulting architecture indeed achieves 100% training and validation accuracy. To test if the failure to train is due to poor initialization, we implemented the unraveled architecture with both random initialization (called "random init" in Table 2) and with the randomly initialized weights of the type 2 architecture (called "type 2 init" in Table 2). While initialization with the initial weights of the type 2 architecture reduces the initial loss of the unraveled architecture to match the type 2 architecture, it does not solve the trainability issue. We thus conclude the unraveled architecture simply does not have sufficient inductive bias to solve the binary classification problem.
### ModelNet40

Our second example demonstrates that \(G\)-DNNs with type 2 signed perm-reps carry inductive bias that can be useful "in the wild". We consider the ModelNet40 dataset (Wu et al., 2015), which contains 9843 training and 2468 validation samples of 3D CAD mesh representations of 40 different objects ranging from airplanes to toilets. The problem is to predict the object class given an input mesh. We preprocess the data in identical fashion to Jiang et al. (2019). Specifically, we first bound each mesh in the unit sphere and discretize the sphere into an icosahedron. To increase resolution, we include the midpoint on each icosahedral edge as a vertex, normalize all vertices to have unit norm, and then repeat once more on the new polyhedron; this yields 112 vertices in all. From each of these vertices, we perform ray-tracing to the origin; we record the distance from the sphere to the mesh as well as the sine and cosine of the incident angle. The representation is further augmented with the three channels corresponding to the convex hull of the input mesh, yielding a 6-channel input representation over 112 points.
The relevant symmetry group \(G\) is the rotational symmetry group of the icosahedron, generated by (1) the order-3 rotation about the normal vector to one of the faces and (2) the order-5 rotation about one of the vertices; the group has 60 elements and is isomorphic to the alternating group \(A_{5}\). We consider \(G\)-DNN architectures with four layers and 16, 32, 64, and 40 output channels per rep in each respective layer. We specify a priori the degrees of the signed perm-irreps to be used in each layer as 30, 15, 10, and 1; we then enumerate the admissible architectures as described after Thm. 7, and we select the following designs: For the "mixed" architecture, we select one type 1 and one type 2 signed perm-irrep in each of the first three layers;12 we select the trivial rep in the last layer, as always. We compare the mixed architecture to the corresponding "type 1" and "unraveled" architectures, which are constructed exactly as done in Sec. 4.1.
Footnote 12: In each layer, the selected type 1 and type 2 signed perm-irreps are cohomologous as defined in Agrawal and Ostrowski (2022); i.e., they have the forms \(\rho_{HH}\) and \(\rho_{HK}\) respectively.
We trained all architectures for 500 epochs with the Adam optimizer with minibatch size 64, learning rate 0.01, and learning rate decay 0.99 per step. We included batchnorm as described in Sec. 3.4 after each ReLU layer. In each run, we performed retroactive early stopping by recording the highest validation accuracy achieved over all epochs.
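For reference, the training protocol amounts to the following PyTorch sketch; `model`, `train_loader`, and `evaluate` are assumed helpers and are not taken from the authors' code.

```
import torch

# Hedged sketch of the protocol described above (assumptions: `model` is the
# G-DNN, `train_loader` yields minibatches of size 64, and `evaluate`
# returns the validation accuracy).
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)
criterion = torch.nn.CrossEntropyLoss()

best_val_acc = 0.0
for epoch in range(500):
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
        scheduler.step()        # decay of 0.99 per step, not per epoch
    # "Retroactive early stopping": record the best validation accuracy.
    best_val_acc = max(best_val_acc, evaluate(model))
```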
Figure 2: Validation classification accuracies (mean and standard deviation over 24 random initialization seeds) for three \(\mathbf{G}\)-DNN architectures (see main text) on the ModelNet40 dataset using different percentages of the full training data. The mixed architecture—the only one including type 2 signed perm-irreps—clearly outperforms the two baselines, with the performance gap increasing as less training data is available.
We find that the mixed architecture, the only one containing type 2 signed perm-irreps, significantly outperforms the two baseline architectures in terms of validation accuracy, with the performance gap increasing as less training data is used. This suggests that the mixed architecture carries stronger inductive bias, still consistent with the ground truth, as compared to the baselines.
## 5 Conclusion
We have introduced the \(\boldsymbol{G}\)-DNN, a \(\boldsymbol{G}\)-invariant densely connected DNN architecture. In contrast to previous \(\boldsymbol{G}\)-invariant architectures in the literature such as the \(\boldsymbol{G}\)-CNN (Cohen and Welling, 2016), the \(\boldsymbol{G}\)-DNN is built with _signed_ perm-reps that do not require individual layers of the network to be \(\boldsymbol{G}\)-equivariant. The result is a richer family of \(\boldsymbol{G}\)-invariant architectures never seen before, and we have demonstrated with both theoretical and empirical examples that some of these novel architectures can boost predictive performance.
To be clear, we do not claim the \(\boldsymbol{G}\)-DNN to be a new state-of-the-art (SOTA) for \(\boldsymbol{G}\)-invariant deep learning. Rather, our work is a demonstration that signed perm-reps are mathematically natural and practically useful building blocks for deep \(\boldsymbol{G}\)-invariant architectures, and we think practitioners should be aware of them and know how to use them. Indeed, we suspect practitioners will observe performance boosts in their own models if they incorporate the ideas and structures presented in this paper into their respective domain-specific SOTA \(\boldsymbol{G}\)-invariant architectures.
Even at the domain-agnostic level, however, several open questions remain for future research. First, can we extend \(\boldsymbol{G}\)-DNNs from \(\boldsymbol{G}\)-invariance to \(\boldsymbol{G}\)-equivariance? The only obstacle here is the construction of architectures guaranteed to be admissible. Second, can we perform neural architecture search to find the optimal signed perm-irreps to use in each layer? Third and finally, are there ways of enforcing \(\boldsymbol{G}\)-invariance that go even beyond the \(\boldsymbol{G}\)-DNN architectures described in this paper? A complete classification of _all_ \(\boldsymbol{G}\)-invariant architectures would give us the finest possible control on inductive bias, thereby allowing us, in principle, to optimize predictive performance on \(\boldsymbol{G}\)-invariant problems.
## Acknowledgments and Disclosure of Funding
D.A. was supported by NSF award No. 2202990. J.O. was supported by DOE grant DE-SC0018175.
## Appendix A Signed permutation representations
### Central hypothesis
**Proof** [Proof of Thm. 1] For every \(J\leq G\), let \(P_{J}\) be the \(m\times m\) orthogonal projection operator onto the subspace of \(\mathbb{R}^{m}\) pointwise-invariant under the action of \(J\):
\[P_{\boldsymbol{J}}=\frac{1}{|\boldsymbol{J}|}\sum_{\boldsymbol{g}\in \boldsymbol{J}}\boldsymbol{g}.\]
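For intuition, here is a quick numeric instance of this projection (our own example, not part of the proof): for the order-2 group \(J\) generated by the swap of two coordinates, \(P_{J}\) averages the coordinates.

```
import numpy as np

# P_J for J = {I, s}, where s swaps the two coordinates; ran(P_J) is the
# J-invariant subspace span{(1, 1)}.
I2 = np.eye(2)
s = np.array([[0.0, 1.0], [1.0, 0.0]])
P_J = (I2 + s) / 2                        # (1/|J|) * sum over g in J

v = np.array([3.0, 1.0])
assert np.allclose(P_J @ v, [2.0, 2.0])   # projects onto span{(1, 1)}
assert np.allclose(P_J @ P_J, P_J)        # idempotent, as a projector must be
```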
By Thm. 4b of Agrawal and Ostrowski (2022), there exists \(w_{2}\in\mathsf{ran}(P_{K_{2}}-(|H:K_{2}|-1)P_{H})\) and a transversal \(\{g_{1},\ldots,g_{n}\}\) of \(G/H\) such that
\[W_{2}=\begin{bmatrix}g_{1}w_{2}&g_{2}w_{2}&\cdots&g_{n}w_{2}\end{bmatrix}^{ \mathsf{T}}.\]
Moreover, there exists \(w_{1}\in\mathsf{ran}(P_{K_{1}}-(|H:K_{1}|-1)P_{H})\) and \(P\in P(n)\) such that
\[PW_{1}=\begin{bmatrix}g_{1}w_{1}&g_{2}w_{1}&\cdots&g_{n}w_{1}\end{bmatrix}^{ \mathsf{T}}.\]
(The permutation matrix \(P\) is in general necessary so we can use the same transversal of \(G/H\).) We have
\[\mathsf{diag}(PW_{1}W_{2}^{\mathsf{T}}) =\{(g_{i}w_{1})^{\mathsf{T}}g_{i}w_{2}\}_{i=1}^{n}\] \[=\{w_{1}^{\mathsf{T}}g_{i}^{\mathsf{T}}g_{i}w_{2}\}_{i=1}^{n}\] \[=\{w_{1}^{\mathsf{T}}w_{2}\}_{i=1}^{n}\] \[=0,\]
where the last step follows by Prop. 6 of Agrawal and Ostrowski (2022).
**Proof** [Proof of Thm. 2] (a) Let \(\pi:G\mapsto P(n)\) and \(\zeta:G\mapsto\mathcal{Z}(n)\) be the unique functions satisfying \(\rho(g)=\pi(g)\zeta(g)\forall g\in G\) (note \(\pi\) should not be confused with \(\pi_{\rho}\) appearing in the theorem statement). Evaluating the Kronecker product, the function \(\pi_{\rho}\) can be rewritten as
\[\pi_{\rho}(g) =\mathcal{H}\biggl{(}\begin{bmatrix}\rho(g)&-\rho(g)\\ -\rho(g)&\rho(g)\end{bmatrix}\biggr{)}\] \[=\mathcal{H}\biggl{(}\begin{bmatrix}\pi(g)\zeta(g)&-\pi(g)\zeta(g )\\ -\pi(g)\zeta(g)&\pi(g)\zeta(g)\end{bmatrix}\biggr{)}\] \[=\mathcal{H}\biggl{(}\begin{bmatrix}\pi(g)&0\\ 0&\pi(g)\end{bmatrix}\begin{bmatrix}\zeta(g)&-\zeta(g)\\ -\zeta(g)&\zeta(g)\end{bmatrix}\biggr{)}\] \[=\begin{bmatrix}\pi(g)&0\\ 0&\pi(g)\end{bmatrix}\mathcal{H}\biggl{(}\begin{bmatrix}\zeta(g)&-\zeta(g)\\ -\zeta(g)&\zeta(g)\end{bmatrix}\biggr{)},\]
where in the last step we exploited the permutation-equivariance of elementwise operations. Now since \(\mathcal{H}(z)=\frac{1+z}{2}\) for \(z\in\{-1,1\}\), then we have
\[\pi_{\rho}(g) =\begin{bmatrix}\pi(g)&0\\ 0&\pi(g)\end{bmatrix}\frac{1}{2}\biggl{(}\begin{bmatrix}I_{n}&I_{n}\\ I_{n}&I_{n}\end{bmatrix}+\begin{bmatrix}\zeta(g)&-\zeta(g)\\ -\zeta(g)&\zeta(g)\end{bmatrix}\biggr{)}\] \[=\frac{1}{2}\biggl{(}\begin{bmatrix}\pi(g)&\pi(g)\\ \pi(g)&\pi(g)\end{bmatrix}+\begin{bmatrix}\pi(g)\zeta(g)&-\pi(g)\zeta(g)\\ -\pi(g)\zeta(g)&\pi(g)\zeta(g)\end{bmatrix}\biggr{)}\] \[=\frac{1}{2}\biggl{[}\begin{matrix}\pi(g)+\rho(g)&\pi(g)-\rho(g )\\ \pi(g)-\rho(g)&\pi(g)+\rho(g)\end{matrix}\biggr{]}.(*)\]
Now, let \(\pi(g)_{ij}\) (resp. \(\rho(g)_{ij}\)) be the unique nonzero element in the \(i\)th row of \(\pi(g)\) (resp. \(\rho(g)\)). Since \(\rho(g)_{ij}=\pm\pi(g)_{ij}\), then exactly one of \(\frac{1}{2}[\pi(g)_{ij}\pm\rho(g)_{ij}]\) is unity, while the
other is zero. Thus, every row of (*) is a one-hot vector. The same argument can be made for the columns, and hence (*) is a bona fide permutation matrix; i.e., \(\pi_{\rho}:G\mapsto\mathcal{P}(2n)\) is a well-defined function.
All that remains is to show \(\pi_{\rho}\) is a homomorphism. We rewrite (*) as
\[\pi_{\rho}(g)=\frac{1}{2}\left(\begin{bmatrix}I_{n}\\ I_{n}\end{bmatrix}\pi(g)\begin{bmatrix}I_{n}&I_{n}\end{bmatrix}+\begin{bmatrix} I_{n}\\ -I_{n}\end{bmatrix}\rho(g)\begin{bmatrix}I_{n}&-I_{n}\end{bmatrix}\right).\]
Then for \(g,h\in G\), it is easy to verify that
\[\pi_{\rho}(g)\pi_{\rho}(h) =\frac{1}{4}\left(\begin{bmatrix}I_{n}\\ I_{n}\end{bmatrix}\pi(g)\begin{bmatrix}I_{n}&I_{n}\end{bmatrix}+\begin{bmatrix} I_{n}\\ -I_{n}\end{bmatrix}\rho(g)\begin{bmatrix}I_{n}&-I_{n}\end{bmatrix}\right)\left( \begin{bmatrix}I_{n}\\ I_{n}\end{bmatrix}\pi(h)\begin{bmatrix}I_{n}&I_{n}\end{bmatrix}\right.\] \[\quad+\begin{bmatrix}I_{n}\\ -I_{n}\end{bmatrix}\rho(h)\begin{bmatrix}I_{n}&-I_{n}\end{bmatrix}\right)\] \[=\frac{1}{4}\left(2\begin{bmatrix}I_{n}\\ I_{n}\end{bmatrix}\pi(g)\pi(h)\begin{bmatrix}I_{n}&I_{n}\end{bmatrix}+0+2 \begin{bmatrix}I_{n}\\ -I_{n}\end{bmatrix}\rho(g)\rho(h)\begin{bmatrix}I_{n}&-I_{n}\end{bmatrix}\right)\] \[=\pi_{\rho}(gh).\]
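As a numeric sanity check of the construction (*) (our own sketch, not the paper's code), one can verify on a sample signed permutation matrix that the block formula indeed produces a bona fide permutation matrix.

```
import numpy as np

# For a signed permutation matrix rho_g with unsigned part pi_g, the block
# matrix 0.5 * [[pi+rho, pi-rho], [pi-rho, pi+rho]] has one-hot rows and
# columns, i.e. it is a permutation matrix.
rho_g = np.array([[0, -1, 0],
                  [1,  0, 0],
                  [0,  0, -1]])       # example signed permutation matrix
pi_g = np.abs(rho_g)                  # its ordinary permutation part

top = np.hstack([pi_g + rho_g, pi_g - rho_g])
bot = np.hstack([pi_g - rho_g, pi_g + rho_g])
pi_rho_g = 0.5 * np.vstack([top, bot])

assert np.all(np.isin(pi_rho_g, [0, 1]))
assert np.all(pi_rho_g.sum(axis=0) == 1) and np.all(pi_rho_g.sum(axis=1) == 1)
```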
**(b)**: Using (*) from part (a), we have
\[\pi_{\rho}(g)\begin{bmatrix}W\\ -W\end{bmatrix} =\frac{1}{2}\begin{bmatrix}\pi(g)+\rho(g)&\pi(g)-\rho(g)\\ \pi(g)-\rho(g)&\pi(g)+\rho(g)\end{bmatrix}\begin{bmatrix}W\\ -W\end{bmatrix}\] \[=\frac{1}{2}\begin{bmatrix}\pi(g)W+\rho(g)W-\pi(g)W+\rho(g)W\\ \pi(g)W-\rho(g)W-\pi(g)W-\rho(g)W\end{bmatrix}\] \[=\frac{1}{2}\begin{bmatrix}2\rho(g)W\\ -2\rho(g)W\end{bmatrix}\] \[=\rho(g)\begin{bmatrix}W\\ -W\end{bmatrix}\] \[=\begin{bmatrix}W\\ -W\end{bmatrix}g,\]
proving the claim.
**(c)**: For the forward implication, we prove its contrapositive; suppose \(\rho\) is type 1. Then \(\pi=\rho\) so that (*) implies
\[\pi_{\rho}(g)=\frac{1}{2}\begin{bmatrix}2\pi(g)&0\\ 0&2\pi(g)\end{bmatrix}=\begin{bmatrix}\pi(g)&0\\ 0&\pi(g)\end{bmatrix},\]
which is clearly reducible.
For the reverse implication, suppose \(\rho\) is type 2. To show \(\pi_{\rho}\) is irreducible, we will show it is transitive on the standard orthonormal basis \(\{\omega_{1},\ldots,\omega_{2n}\}\). Let \(i,j\in\{1,\ldots,2n\}\), and without loss of generality suppose \(i\leq n\).
**Case 1:** Suppose \(j\leq n\). Then \(\omega_{i}\) is just \(e_{i}\) (the \(i\)th standard orthonormal basis vector of dimension \(n\)) concatenated with \(\bar{0}_{n}\), and similar for \(\omega_{j}\). Now since \(\rho\) is irreducible, then
there exists \(g\in G\) such that \(\rho(g)e_{i}=e_{j}\); note \(\pi(g)e_{i}=e_{j}\) as well. We thus have
\[\pi_{\rho}(g)\omega_{i} =\frac{1}{2}\begin{bmatrix}\pi(g)+\rho(g)&\pi(g)-\rho(g)\\ \pi(g)-\rho(g)&\pi(g)+\rho(g)\end{bmatrix}\begin{bmatrix}e_{i}\\ \bar{0}_{n}\end{bmatrix}\] \[=\frac{1}{2}\begin{bmatrix}\pi(g)e_{i}+\rho(g)e_{i}\\ \pi(g)e_{i}-\rho(g)e_{i}\end{bmatrix}\] \[=\frac{1}{2}\begin{bmatrix}e_{j}\\ e_{j}-e_{j}\end{bmatrix}\] \[=\begin{bmatrix}e_{j}\\ \bar{0}_{n}\end{bmatrix}\] \[=\omega_{j},\]
establishing transitivity in this case.
**Case 2:** Suppose instead \(j>n\). Then \(\omega_{j}\) is the concatenation of \(\bar{0}_{n}\) and \(e_{j-n}\). Since \(\rho\) is irreducible, then there exists \(g\in G\) such that \(\rho(g)e_{i}=-e_{j-n}\). The rest of the proof proceeds in analogy to case 1.
**(d)** For every \(v\in\mathbb{R}^{n}\) and linear rep \(\tau:G\mapsto\mathcal{GL}(n,\mathbb{R})\), define the stabilizer subgroup
\[\operatorname{st}_{\tau}(v)=\{g\in G:\tau(g)v=v\}.\]
Since \(\rho=\rho_{HK}\), then \(\operatorname{st}_{\rho}(e_{1})=K\). Since \(\pi_{\rho}\) is an ordinary perm-rep, then all we must show is \(\operatorname{st}_{\pi_{\rho}}(\omega_{1})=K\) to establish the claim. Similar to part (c), \(\operatorname{st}_{\pi_{\rho}}(\omega_{1})\) is the set of all \(g\in G\) such that
\[\pi_{\rho}(g)\omega_{1} =\omega_{1}\] \[\frac{1}{2}\begin{bmatrix}\pi(g)+\rho(g)&\pi(g)-\rho(g)\\ \pi(g)-\rho(g)&\pi(g)+\rho(g)\end{bmatrix}\begin{bmatrix}e_{1}\\ \bar{0}_{n}\end{bmatrix} =\begin{bmatrix}e_{1}\\ \bar{0}_{n}\end{bmatrix}\] \[\frac{1}{2}\begin{bmatrix}\pi(g)e_{1}+\rho(g)e_{1}\\ \pi(g)e_{1}-\rho(g)e_{1}\end{bmatrix} =\begin{bmatrix}e_{1}\\ \bar{0}_{n}\end{bmatrix}.\]
Taking the difference of the two rows, we obtain \(\rho(g)e_{1}=e_{1}\); hence, \(\operatorname{st}_{\pi_{\rho}}(\omega_{1})=\operatorname{st}_{\rho}(e_{1})=K\), completing the proof.
## Appendix B \(G\)-invariant deep neural networks
### Parameterization redundancies
To understand the inclusion of skip connections as a reparameterization, we rewrite Eq. 2 as
\[f^{(i+1)}(x) =\begin{bmatrix}\operatorname{ReLU}(W^{(i)}f^{(i)}(x)+b^{(i)}) \\ \operatorname{ReLU}(f^{(i)}(x))-\operatorname{ReLU}(-f^{(i)}(x))\end{bmatrix}\] \[=\begin{bmatrix}I_{n_{i+1}}&0&0\\ 0&I_{N_{i}}&-I_{N_{i}}\end{bmatrix}\operatorname{ReLU}\left(\begin{bmatrix}W ^{(i)}\\ I_{N_{i}}\\ -I_{N_{i}}\end{bmatrix}f^{(i)}(x)+\begin{bmatrix}b^{(i)}\\ 0\\ 0\end{bmatrix}\right).\]
The outer matrix in the last equation can be combined with the matrix in the next layer; the result is a DNN having the same depth as the original, and representing the same input-output function, but with no skip connections, as they have been transformed into additional ReLU neurons.
**Proof** [Proof of Prop. 3] We will show \(W^{(i+1)}f^{(i+1)}(x)+b^{(i+1)}\) is invariant under the transformation. The function \(f^{(i+1)}\) transforms as
\[f^{(i+1)}(x) \rightarrow\begin{bmatrix}\operatorname{ReLU}(CPZW^{(i)}f^{(i)}(x)+CPZb^{(i)})\\ f^{(i)}(x)\end{bmatrix}\] \[=\begin{bmatrix}CP&0\\ 0&I_{N_{i}}\end{bmatrix}\begin{bmatrix}\operatorname{ReLU}(Z(W^{(i)}f^{(i)}(x)+b^{(i)}))\\ f^{(i)}(x)\end{bmatrix}\] \[=\begin{bmatrix}CP&0\\ 0&I_{N_{i}}\end{bmatrix}\left(\begin{bmatrix}\operatorname{ReLU}(W^{(i)}f^{(i)}(x)+b^{(i)})\\ f^{(i)}(x)\end{bmatrix}-\begin{bmatrix}\mathcal{H}(-Z)(W^{(i)}f^{(i)}(x)+b^{(i)})\\ 0\end{bmatrix}\right)\] \[=\begin{bmatrix}CP&0\\ 0&I_{N_{i}}\end{bmatrix}\left(\begin{bmatrix}I_{n_{i+1}}&-\mathcal{H}(-Z)W^{(i)}\\ 0&I_{N_{i}}\end{bmatrix}\begin{bmatrix}\operatorname{ReLU}(W^{(i)}f^{(i)}(x)+b^{(i)})\\ f^{(i)}(x)\end{bmatrix}-\begin{bmatrix}\mathcal{H}(-Z)b^{(i)}\\ 0\end{bmatrix}\right)\] \[=\begin{bmatrix}CP&0\\ 0&I_{N_{i}}\end{bmatrix}\begin{bmatrix}I_{n_{i+1}}&-\mathcal{H}(-Z)W^{(i)}\\ 0&I_{N_{i}}\end{bmatrix}\left(\begin{bmatrix}\operatorname{ReLU}(W^{(i)}f^{(i)}(x)+b^{(i)})\\ f^{(i)}(x)\end{bmatrix}-\begin{bmatrix}\mathcal{H}(-Z)b^{(i)}\\ 0\end{bmatrix}\right)\] \[=\begin{bmatrix}CP&-CP\mathcal{H}(-Z)W^{(i)}\\ 0&I_{N_{i}}\end{bmatrix}\left(f^{(i+1)}(x)-\begin{bmatrix}\mathcal{H}(-Z)b^{(i)}\\ 0\end{bmatrix}\right).\]
We thus have
\[W^{(i+1)}f^{(i+1)}(x)+b^{(i+1)} \rightarrow W^{(i+1)}\begin{bmatrix}(CP)^{-1}&\mathcal{H}(-Z)W^{(i)}\\ 0&I_{N_{i}}\end{bmatrix}\begin{bmatrix}CP&-CP\mathcal{H}(-Z)W^{(i)}\\ 0&I_{N_{i}}\end{bmatrix}\left(f^{(i+1)}(x)-\begin{bmatrix}\mathcal{H}(-Z)b^{(i)}\\ 0\end{bmatrix}\right)+b^{(i+1)}+W^{(i+1)}\begin{bmatrix}\mathcal{H}(-Z)b^{(i)}\\ 0\end{bmatrix}\] \[=W^{(i+1)}\left(f^{(i+1)}(x)-\begin{bmatrix}\mathcal{H}(-Z)b^{(i)}\\ 0\end{bmatrix}\right)+W^{(i+1)}\begin{bmatrix}\mathcal{H}(-Z)b^{(i)}\\ 0\end{bmatrix}+b^{(i+1)}\] \[=W^{(i+1)}f^{(i+1)}(x)+b^{(i+1)}.\]
### \(G\)-invariant architectures
**Proof** [Proof of Lemma 4] Since \(f^{(1)}\) and \(\psi^{(1)}\) are both the identity function, the claim is immediate for \(i=1\). Suppose the claim is true for some \(i\in\{1,\ldots,d-1\}\). We
have for all \(g\in G\) and \(x\in\mathbb{R}^{m}\):
\[f^{(i+1)}(gx)= \begin{bmatrix}\operatorname{ReLU}(W^{(i)}f^{(i)}(gx)+b^{(i)})\\ f^{(i)}(gx)\end{bmatrix}\] \[= \begin{bmatrix}\operatorname{ReLU}(W^{(i)}\psi^{(i)}(g)f^{(i)}(x)+ b^{(i)})\\ \psi^{(i)}(g)f^{(i)}(x)\end{bmatrix}\] \[= \begin{bmatrix}\operatorname{ReLU}(\rho^{(i)}(g)W^{(i)}f^{(i)}(x)+ b^{(i)})\\ \psi^{(i)}(g)f^{(i)}(x)\end{bmatrix}\] \[= \begin{bmatrix}\operatorname{ReLU}(\rho^{(i)}(g)W^{(i)}f^{(i)}(x)+ \rho^{(i)}(g)b^{(i)})\\ \psi^{(i)}(g)f^{(i)}(x)\end{bmatrix}\] \[= \begin{bmatrix}\pi^{(i)}(g)\operatorname{ReLU}(\zeta^{(i)}(g)(W^{ (i)}f^{(i)}(x)+b^{(i)}))\\ \psi^{(i)}(g)f^{(i)}(x)\end{bmatrix}\] \[= \begin{bmatrix}\pi^{(i)}(g)\operatorname{ReLU}(W^{(i)}f^{(i)}(x)+ b^{(i)})\\ \psi^{(i)}(g)f^{(i)}(x)\end{bmatrix}-\begin{bmatrix}\pi^{(i)}(g)\mathcal{H}(- \zeta^{(i)}(g))W^{(i)}f^{(i)}(x)\\ 0\end{bmatrix}\] \[-\begin{bmatrix}\pi^{(i)}(g)\mathcal{H}(-\zeta^{(i)}(g))b^{(i)} \\ 0\end{bmatrix}\] \[= \begin{bmatrix}\pi^{(i)}(g)&-\pi^{(i)}(g)\mathcal{H}(-\zeta^{(i)} (g))W^{(i)}\\ 0&I_{N_{i}}\end{bmatrix}f^{(i+1)}(x)+\begin{bmatrix}-\pi^{(i)}(g)\mathcal{H}(- \zeta^{(i)}(g))b^{(i)}\\ 0\end{bmatrix}.\]
Note that
\[-\pi^{(i)}(g)\mathcal{H}(-\zeta^{(i)}(g))= -\frac{1}{2}\pi^{(i)}(g)(I_{n_{i+1}}-\zeta^{(i)}(g))\] \[= \frac{1}{2}\pi^{(i)}(g)(\zeta^{(i)}(g)-I_{n_{i+1}})\] \[= \frac{1}{2}(\rho^{(i)}(g)-\pi^{(i)}(g)).\]
We thus have
\[-\pi^{(i)}(g)\mathcal{H}(-\zeta^{(i)}(g))W^{(i)} =\frac{1}{2}(\rho^{(i)}(g)-\pi^{(i)}(g))W^{(i)}\] \[=\frac{1}{2}(\rho^{(i)}(g)W^{(i)}-\pi^{(i)}(g)W^{(i)})\] \[=\frac{1}{2}(W^{(i)}\psi^{(i)}(g)-\pi^{(i)}(g)W^{(i)}).\]
For the bias term, let \(\rho^{(i)}=\rho^{(i)}_{1}\oplus\cdots\oplus\rho^{(i)}_{k}\) be the decomposition of \(\rho^{(i)}\) into irreducibles; decompose \(\pi^{(i)}\) and \(b^{(i)}\) correspondingly. Then we have
\[-\pi^{(i)}(g)\mathcal{H}(-\zeta^{(i)}(g))b^{(i)} =\frac{1}{2}(\rho^{(i)}(g)-\pi^{(i)}(g))b^{(i)}\] \[=\frac{1}{2}\bigoplus_{j=1}^{k}(\rho^{(i)}_{j}(g)-\pi^{(i)}_{j}(g ))b^{(i)}_{j}.\]
If \(\rho_{j}^{(i)}\) is type 1, then \(\rho_{j}^{(i)}=\pi_{j}^{(i)}\) so that the \(j\)th term in the above direct sum is zero. On the other hand, if \(\rho_{j}^{(i)}\) is type 2, then \(\rho_{j}^{(i)}(g)b_{j}^{(i)}=b_{j}^{(i)}\) implies \(b_{j}^{(i)}=0\), so that again the \(j\)th summand is zero. Therefore, \(-\pi^{(i)}(g)\mathcal{H}(-\zeta^{(i)}(g))b^{(i)}=0\;\forall g\in G\). We thus have
\[f^{(i+1)}(gx)= \begin{bmatrix}\pi^{(i)}(g)&\frac{1}{2}(W^{(i)}\psi^{(i)}(g)-\pi^ {(i)}(g)W^{(i)})\\ 0&I_{N_{i}}\end{bmatrix}f^{(i+1)}(x)+0\] \[= \psi^{(i+1)}(g)f^{(i+1)}(x).\]
The conclusion follows by induction.
**Proof** [Proof of Lemma 5] We will verify that \(\psi^{(i+1)}\) defined as claimed satisfies the recursion in Lemma 4. For \(i=1\), we have
\[\psi^{(2)}(g)= \ A^{(1)-1}\Pi^{(1)}(g)A^{(1)}\] \[= \begin{bmatrix}I_{n_{2}}&-\frac{1}{2}W^{(1)}\\ 0&I_{n_{1}}\end{bmatrix}^{-1}\begin{bmatrix}\pi^{(1)}(g)&0\\ 0&g\end{bmatrix}\begin{bmatrix}I_{n_{2}}&-\frac{1}{2}W^{(1)}\\ 0&I_{n_{1}}\end{bmatrix}\] \[= \begin{bmatrix}I_{n_{2}}&\frac{1}{2}W^{(1)}\\ 0&I_{n_{1}}\end{bmatrix}\begin{bmatrix}\pi^{(1)}(g)&0\\ 0&g\end{bmatrix}\begin{bmatrix}I_{n_{2}}&-\frac{1}{2}W^{(1)}\\ 0&I_{n_{1}}\end{bmatrix}\] \[= \begin{bmatrix}\pi^{(1)}(g)&\frac{1}{2}(W^{(1)}g-\pi^{(1)}(g)W^{(1)})\\ 0&g\end{bmatrix}\] \[= \begin{bmatrix}\pi^{(1)}(g)&\frac{1}{2}(W^{(1)}\psi^{(1)}(g)-\pi^{(1)}(g)W^{(1)})\\ 0&\psi^{(1)}(g)\end{bmatrix},\]
which indeed agrees with Lemma 4.
Now suppose the claim holds for some \(i\in\{2,\ldots,d-1\}\). Observe that
\[A^{(i)}= \begin{bmatrix}I_{n_{i+1}}&-\frac{1}{2}W^{(i)}\\ 0&A^{(i-1)}\end{bmatrix}\] \[\Pi^{(i)}(g)= \begin{bmatrix}\pi^{(i)}(g)&0\\ 0&\Pi^{(i-1)}(g)\end{bmatrix}.\]
We thus have
\[\psi^{(i+1)}(g)= \ A^{(i)-1}\Pi^{(i)}(g)A^{(i)}\] \[= \begin{bmatrix}I_{n_{i+1}}&-\frac{1}{2}W^{(i)}\\ 0&A^{(i-1)}\end{bmatrix}^{-1}\begin{bmatrix}\pi^{(i)}(g)&0\\ 0&\Pi^{(i-1)}(g)\end{bmatrix}\begin{bmatrix}I_{n_{i+1}}&-\frac{1}{2}W^{(i)}\\ 0&A^{(i-1)}\end{bmatrix}\] \[= \begin{bmatrix}I_{n_{i+1}}&\frac{1}{2}W^{(i)}A^{(i-1)-1}\\ 0&A^{(i-1)-1}\end{bmatrix}\begin{bmatrix}\pi^{(i)}(g)&0\\ 0&\Pi^{(i-1)}(g)\end{bmatrix}\begin{bmatrix}I_{n_{i+1}}&-\frac{1}{2}W^{(i)}\\ 0&A^{(i-1)}\end{bmatrix}\] \[= \begin{bmatrix}\pi^{(i)}(g)&\frac{1}{2}(W^{(i)}A^{(i-1)-1}\Pi^{(i -1)}(g)A^{(i-1)}-\pi^{(i)}(g)W^{(i)})\\ 0&A^{(i-1)-1}\Pi^{(i-1)}(g)A^{(i-1)}\end{bmatrix}\] \[= \begin{bmatrix}\pi^{(i)}(g)&\frac{1}{2}(W^{(i)}\psi^{(i)}(g)-\pi^ {(i)}(g)W^{(i)})\\ 0&\psi^{(i)}(g)\end{bmatrix}.\]
The conclusion follows by induction.
**Proof** [Proof of Thm. 6] **(a)** By Lemmas 4-5, we have
\[\rho^{(i)}(g)W^{(i)} =W^{(i)}\psi^{(i)}(g)\] \[\rho^{(i)}(g)W^{(i)} =W^{(i)}A^{(i-1)-1}\Pi^{(i-1)}(g)A^{(i-1)}\] \[\rho^{(i)}(g)W^{(i)}A^{(i-1)-1} =W^{(i)}A^{(i-1)-1}\Pi^{(i-1)}(g).\]
Let \(V^{(i)}=W^{(i)}A^{(i-1)-1}\). Thus, \(W^{(i)}=V^{(i)}A^{(i-1)}\).
All that is left is to establish equivariance of the blocks of \(V^{(i)}\). We have
\[\rho^{(i)}(g)V^{(i)} =\rho^{(i)}(g)W^{(i)}A^{(i-1)-1}\] \[=W^{(i)}\psi^{(i)}(g)A^{(i-1)-1}\] \[=W^{(i)}A^{(i-1)-1}\Pi^{(i-1)}(g)A^{(i-1)}A^{(i-1)-1}\] \[=V^{(i)}\Pi^{(i-1)}(g).\]
Since \(\Pi^{(i-1)}(g)\) is a block-diagonal matrix, then blockwise equivariance follows.
**(b)** By part (a), we have
\[g^{(i)}(x) =W^{(i)}f^{(i)}(x)\] \[=V^{(i)}A^{(i-1)}f^{(i)}(x)\] \[=V^{(i)}h^{(i)}(x).\]
To establish the recursion for \(h^{(i)}\), first observe that \(A^{(i)}\) satisfies the recursion
\[A^{(i)}=\begin{bmatrix}I_{n_{i+1}}&-\frac{1}{2}W^{(i)}\\ 0&A^{(i-1)}\end{bmatrix}.\]
We thus have
\[h^{(i+1)}(x) =A^{(i)}f^{(i+1)}(x)\] \[=\begin{bmatrix}I_{n_{i+1}}&-\frac{1}{2}W^{(i)}\\ 0&A^{(i-1)}\end{bmatrix}\begin{bmatrix}\operatorname{ReLU}(W^{(i)}f^{(i)}(x)+b^{(i)})\\ f^{(i)}(x)\end{bmatrix}\] \[=\begin{bmatrix}\operatorname{ReLU}(W^{(i)}f^{(i)}(x)+b^{(i)})-\frac{1}{2}W^{(i)}f^{(i)}(x)\\ A^{(i-1)}f^{(i)}(x)\end{bmatrix}\] \[=\begin{bmatrix}\operatorname{ReLU}(g^{(i)}(x)+b^{(i)})-\frac{1}{2}g^{(i)}(x)\\ h^{(i)}(x)\end{bmatrix},\]
which completes the proof.
### Admissible architectures
#### b.3.1 The \(\theta\) function
The following proposition establishes the invariance and equivariance of the function \(\theta\) with respect to certain conjugations.
**Proposition 8**: _The function \(\theta\) satisfies the following properties:_
1. _For every_ \(A\in\mathcal{P}(|G/J|)\)_, define the ordinary perm-rep_ \(\pi_{J}^{A}(g)=A\pi_{J}(g)A^{-1}\)_. Then_ \(\theta(\cdot,\cdot,J)\) _is invariant under the conjugation_ \(\pi_{J}\mapsto\pi_{J}^{A}\)_._
2. \(\theta(H,K,gJg^{-1})=\theta(H,K,J)\forall g\in G\)_._
3. \(\theta(gHg^{-1},gKg^{-1},J)=g\theta(H,K,J)g^{-1}\)_._
**Proof (a)** For brevity, let \(\kappa=|H:K|-1\). Define the function \(\theta^{A}\) in identical fashion to \(\theta\), but replace \(\pi_{J}\) with \(\pi_{J}^{A}\). We wish to prove \(\theta^{A}=\theta\). Since the map \(\Gamma\mapsto P_{\Gamma}\) from finite orthogonal matrix groups to orthogonal projection operators is equivariant with respect to conjugation, then we have
\[\theta^{A}(H,K,J) =\{g\in G:\pi_{J}^{A}(g)(P_{\pi_{J}^{A}(K)}-\kappa P_{\pi_{J}^{A}( H)})=P_{\pi_{J}^{A}(K)}-\kappa P_{\pi_{J}^{A}(H)}\}\] \[=\{g\in G:A\pi_{J}(g)A^{-1}(P_{A\pi_{J}(K)A^{-1}}-\kappa P_{A\pi_ {J}(H)A^{-1}})=P_{A\pi_{J}(K)A^{-1}}-\kappa P_{A\pi_{J}(H)A^{-1}}\}\] \[=\{g\in G:A\pi_{J}(g)A^{-1}(AP_{\pi_{J}(K)}A^{-1}-\kappa AP_{\pi_ {J}(H)}A^{-1})=AP_{\pi_{J}(K)}A^{-1}\] \[\quad-\kappa AP_{\pi_{J}(H)}A^{-1}\}\] \[=\{g\in G:\pi_{J}(g)(P_{\pi_{J}(K)}-\kappa P_{\pi_{J}(H)})=P_{\pi_ {J}(K)}-\kappa P_{\pi_{J}(H)}\}\] \[=\theta(H,K,J).\]
**(b)** The theory of ordinary perm-irreps and their correspondence to group action on cosets is well-understood (Burnside, 1911; Bouc, 2000), and it is known that the conjugation of \(J\) in \(\pi_{J}\) is equivalent to the conjugation of \(\pi_{J}\) itself. The claim thus follows by part (a).
**(c)** Let \(\alpha\in G\). We will show \(\theta(\alpha H\alpha^{-1},\alpha K\alpha^{-1},J)=\alpha\theta(H,K,J)\alpha^{-1}\). Note \(\kappa=|H:K|-1\) is invariant under the conjugation of \((H,K)\) by \(\alpha\). Also note \(\pi_{J}(\alpha H\alpha^{-1})=\pi_{J}(\alpha)\pi_{J}(H)\pi_{J}(\alpha)^{-1}\) and similar for \(K\). Letting \(A=\pi_{J}(\alpha)\) and proceeding in analogy to part (a), we have
\[\theta(\alpha H\alpha^{-1},\alpha K\alpha^{-1},J) =\{g\in G:\pi_{J}(g)(P_{\pi_{J}(\alpha K\alpha^{-1})}-\kappa P_{\pi_{J}(\alpha H\alpha^{-1})})=P_{\pi_{J}(\alpha K\alpha^{-1})}-\kappa P_{\pi_{J}(\alpha H\alpha^{-1})}\}\] \[=\{g\in G:\pi_{J}(g)(P_{\pi_{J}^{A}(K)}-\kappa P_{\pi_{J}^{A}(H)})=P_{\pi_{J}^{A}(K)}-\kappa P_{\pi_{J}^{A}(H)}\}\] \[=\{g\in G:\pi_{J}(g)(AP_{\pi_{J}(K)}A^{-1}-\kappa AP_{\pi_{J}(H)}A^{-1})=AP_{\pi_{J}(K)}A^{-1}-\kappa AP_{\pi_{J}(H)}A^{-1}\}\] \[=\{g\in G:A^{-1}\pi_{J}(g)A(P_{\pi_{J}(K)}-\kappa P_{\pi_{J}(H)})=P_{\pi_{J}(K)}-\kappa P_{\pi_{J}(H)}\}\] \[=\{g\in G:\pi_{J}(\alpha^{-1}g\alpha)(P_{\pi_{J}(K)}-\kappa P_{\pi_{J}(H)})=P_{\pi_{J}(K)}-\kappa P_{\pi_{J}(H)}\}.\]
By the change of variables \(g\rightarrow\alpha g\alpha^{-1}\), we have
\[\theta(\alpha H\alpha^{-1},\alpha K\alpha^{-1},J) =\{\alpha g\alpha^{-1}\in G:\pi_{J}(g)(P_{\pi_{J}(K)}-\kappa P_{\pi_{J}(H)})=P_{\pi_{J}(K)}-\kappa P_{\pi_{J}(H)}\}\] \[=\alpha\{g\in G:\pi_{J}(g)(P_{\pi_{J}(K)}-\kappa P_{\pi_{J}(H)})=P_{\pi_{J}(K)}-\kappa P_{\pi_{J}(H)}\}\alpha^{-1}\] \[=\alpha\theta(H,K,J)\alpha^{-1},\]
establishing the claim.
```
1:function\(\theta(H,K,J)\)
2:def\(\eta:K\setminus G/J\mapsto\text{power}(G/J)\) by
3:\(\eta(KxJ)=\{kxJ\in G/J:k(K\cap xJx^{-1})\in K/(K\cap xJx^{-1})\}\)
4:if\(|H:K|=2\)then
5:let\(h\in H\setminus K\)
6:\(S\leftarrow\{KxJ\in K\setminus G/J:hx\in KxJ\}\)
7:else
8:\(S\leftarrow\emptyset\)
9:if\(S=\emptyset\)then
10:\(T\leftarrow\{\eta(KxJ):KxJ\in K\setminus G/J\}\)
11:else
12:\(T\leftarrow\{\eta(KxJ):KxJ\in(K\setminus G/J)\setminus S\}\cup\{\bigcup_{KxJ \in S}\eta(KxJ)\}\)
13:return\(\text{st}_{G}(T)\)
```
**Algorithm 1** Implementation of the function \(\theta\) that exploits existing functions in GAP.
Algorithm 1 gives the pseudocode for an implementation of the function \(\theta\) that can be accomplished in the GAP language (GAP) for computational group theory. Although the definition of the \(\theta\) function involves orthogonal projection operators, Alg. 1 completely circumvents these operators by taking a pure group-theoretic approach in terms of double cosets. The following proposition verifies that Alg. 1 is a correct implementation.
**Proposition 9**: _Algorithm 1 correctly implements the function \(\theta\)._
**Proof** Observe that the function \(\eta\) in Alg. 1 sends each double coset \(KxJ\in K\setminus G/J\) to the set of cosets in \(G/J\) whose disjoint union is \(KxJ\).
Let \(H,K,J\leq G\) such that \(K\leq H\) and \(|H:K|\leq 2\). Let \(w\in\operatorname{ran}(P_{\pi_{J}(K)}-(|H:K|-1)P_{\pi_{J}(H)})\). We regard \(w\) as a function \(w:G/J\mapsto\mathbb{R}\). Since \(w\in\operatorname{ran}(P_{\pi_{J}(K)})\), then \(w\) is \(K\)-invariant in the sense that \(w(kxJ)=w(xJ)\) for all \(k\in K\) and \(xJ\in G/J\). Thus, \(w\) is constant over the set \(\eta(KxJ)\) for every \(KxJ\in K\setminus G/J\).
If \(|H:K|=2\), then let \(h\in H\setminus K\). Since \(w\in\operatorname{ran}(P_{\pi_{J}(K)}-P_{\pi_{J}(H)})\), then \(w(hKxJ)=-w(KxJ)\) for every \(KxJ\in K\setminus G/J\). Thus, if \(hKxJ=KxJ\), then \(w(KxJ)=0\). Since \(K\) is normal in \(H\) (as \(|H:K|=2\)), then \(hKxJ=KxJ\) is equivalent to \(KhxJ=KxJ\), or just \(hx\in KxJ\). We thus have the constraint
\[w(KxJ)=0\quad\forall KxJ\in K\setminus G/J\text{ such that }hx\in KxJ.\]
Now select \(w\) such that it takes a different nonzero value over each \(\eta(KxJ)\) for all \(KxJ\in K\setminus G/J\) such that, if \(|H:K|=2\), then \(hx\notin KxJ\). Then
\[\theta(H,K,J) =\operatorname{st}_{G}(P_{\pi_{J}(K)}-(|H:K|-1)P_{\pi_{J}(H)})\] \[=\operatorname{st}_{G}(w)\] \[=\{g\in G:w(gKxJ)=w(KxJ)\;\forall KxJ\in K\setminus G/J\}.\]
This means \(\theta(H,K,J)\) is exactly the subgroup of \(G\) that leaves the level sets of \(w\) invariant. Observe, however, that the sets in the collection \(T\) in Alg. 1 are exactly the level sets of \(w\), and hence \(\theta(H,K,J)=\operatorname{st}_{G}(T)\).
#### b.3.2 The \(\phi\) function
**Proof** [Proof of Thm. 7] The necessity of condition (2) follows because if \(H^{(1)}_{j}=G\) for any \(j\), then at least one row \(w\) of \(W^{(1)}\) satisfies \(w\in\operatorname{ran}(P_{G})\) by Thm. 4a of Agrawal and Ostrowski (2022). For \(w\) to be nonzero, we thus require \(P_{G}\neq 0\). We assume condition (2) to be satisfied for the remainder of the proof.
Before proving the claim itself, we first derive a closed-form expression for \(\phi^{(i+1)}\). For notational convenience, for every \(K\leq H\leq G\), \(|H:K|\leq 2\), and linear rep \(\tau:G\mapsto\mathcal{GL}(n,\mathbb{R})\), define
\[\theta(H,K;\tau)=\{g\in G:\tau(g)(P_{\tau(K)}-(|H:K|-1)P_{\tau(H)})=P_{\tau(K)} -(|H:K|-1)P_{\tau(H)}\}.\]
Thus, \(\theta(H,K,J)\) can be equivalently written as \(\theta(H,K;\pi_{J})\). For two reps \(\tau_{1}\) and \(\tau_{2}\), observe that \(P_{\tau_{1}\oplus\tau_{2}}=P_{\tau_{1}}\oplus P_{\tau_{2}}\) and hence
\[\theta(H,K;\tau_{1}\oplus\tau_{2})=\theta(H,K;\tau_{1})\cap\theta(H,K;\tau_{2}).\]
This property extends to more than two reps in the obvious way. With this notation, and recalling the identity rep \(\pi^{(0)}:G\mapsto G\), the function \(\phi^{(i+1)}\) can be rewritten as
\[\phi^{(i+1)}(H,K) =\operatorname{st}_{G}(P_{K}-(|H:K|-1)P_{H})\cap\bigcap_{j=1}^{i}\bigcap_{r=1}^{r^{(j)}}\theta(H,K,H_{r}^{(j)})\] \[=\theta(H,K;\pi^{(0)})\cap\bigcap_{j=1}^{i}\bigcap_{r=1}^{r^{(j)}}\theta(H,K;\pi_{r}^{(j)})\] \[=\theta(H,K;\pi^{(0)})\cap\theta\left(H,K;\bigoplus_{j=1}^{i}\bigoplus_{r=1}^{r^{(j)}}\pi_{r}^{(j)}\right)\] \[=\theta\left(H,K;\pi^{(0)}\oplus\bigoplus_{j=1}^{i}\pi^{(j)}\right)\] \[=\theta\left(H,K;\bigoplus_{j=0}^{i}\pi^{(j)}\right)\] \[=\theta(H,K;\Pi^{(i)}).\]
This is explicitly
\[\phi^{(i+1)}(H,K) =\{g\in G:\Pi^{(i)}(g)(P_{\Pi^{(i)}(K)}-(|H:K|-1)P_{\Pi^{(i)}(H)})=P_{\Pi^{(i)}(K)}-(|H:K|-1)P_{\Pi^{(i)}(H)}\},\]
and it is equivalent to
\[\phi^{(i+1)}(H,K) =\{g\in G:\Pi^{(i)}(g)(P_{\Pi^{(i)}(K)}-(|\Pi^{(i)}(H):\Pi^{(i)}(K)|-1)P_{\Pi^{(i)}(H)})=P_{\Pi^{(i)}(K)}-(|\Pi^{(i)}(H):\Pi^{(i)}(K)|-1)P_{\Pi^{(i)}(H)}\}.\] (*)
To see that \(|\Pi^{(i)}(H):\Pi^{(i)}(K)|=|H:K|\), by the First Isomorphism Theorem we have
\[|\Pi^{(i)}(H):\Pi^{(i)}(K)|=\frac{|H|/|H\cap\ker(\Pi^{(i)})|}{|K|/|K\cap\ker(\Pi^{(i)})|}.\]
Since \(\Pi^{(i)}\) includes in its direct sum decomposition the identity rep \(\pi^{(0)}\), then \(\ker(\Pi^{(i)})\) must be trivial, and so the above expression reduces to \(|H|/|K|=|H:K|\).
We finally prove the claim. By definition, the \(G\)-DNN architecture \(\{\rho^{(1)},\ldots,\rho^{(d)}\}\) is admissible iff each row of \(W^{(i)}\) is nonzero and no two rows of \([W^{(i)}\mid b^{(i)}]\) are parallel, for \(i\in\{1,\ldots,d\}\). By Thm. 6 (a), since \(W^{(i)}=V^{(i)}A^{(i-1)}\), then we obtain an equivalent definition if we replace \(W^{(i)}\) with \(V^{(i)}\). Let \(V^{(i)}_{(j)}\) be the submatrix comprising the rows (and all columns) of \(V^{(i)}\) that together transform by \(\rho^{(i)}_{j}\):
\[\rho^{(i)}_{j}(g)V^{(i)}_{(j)}=V^{(i)}_{(j)}\Pi^{(i-1)}(g)\quad\forall g\in G.\]
(We include the parentheses in the subscript of \(V^{(i)}_{(j)}\) to distinguish it from \(V^{(i)}_{j}\) appearing in Thm. 6 (a).) Then Thm. 4a of Agrawal and Ostrowski (2022) implies we have an admissible architecture (specifically, that the rows of \(V^{(i)}_{(j)}\) are nonzero and no two rows of the corresponding augmented weight matrix are parallel) iff
\[\operatorname{St}_{\Pi^{(i-1)}(G)}(P_{\Pi^{(i-1)}(K^{(i)}_{j})}-(|\Pi^{(i-1)} (H^{(i)}_{j}):\Pi^{(i-1)}(K^{(i)}_{j})|-1)P_{\Pi^{(i-1)}(H^{(i)}_{j})})=\Pi^{ (i-1)}(K^{(i)}_{j}),\]
or equivalently,
\[(\Pi^{(i-1)})^{-1}\big[\operatorname{st}_{\Pi^{(i-1)}(G)}(P_{\Pi^{(i-1)}(K^{(i)}_{j})}-(|\Pi^{(i-1)}(H^{(i)}_{j}):\Pi^{(i-1)}(K^{(i)}_{j})|-1)P_{\Pi^{(i-1)}(H^{(i)}_{j})})\big]=K^{(i)}_{j},\]
that is,
\[\{g\in G:\Pi^{(i-1)}(g)(P_{\Pi^{(i-1)}(K^{(i)}_{j})}-(|\Pi^{(i-1)}(H^{(i)}_{j}):\Pi^{(i-1)}(K^{(i)}_{j})|-1)P_{\Pi^{(i-1)}(H^{(i)}_{j})})=P_{\Pi^{(i-1)}(K^{(i)}_{j})}-(|\Pi^{(i-1)}(H^{(i)}_{j}):\Pi^{(i-1)}(K^{(i)}_{j})|-1)P_{\Pi^{(i-1)}(H^{(i)}_{j})}\}=K^{(i)}_{j}.\]
Recalling (*), we recognize the last equation as nothing but \(\phi^{(i)}(H^{(i)}_{j},K^{(i)}_{j})=K^{(i)}_{j}\), thereby proving that condition (1), together with condition (2), is equivalent to admissibility.
### Additional remarks
The following proposition establishes the compatibility of batchnorm with \(\mathsf{G}\)-DNNs.
**Proposition 10**: _The addition of batchnorm immediately after any ReLU layer in a \(G\)-DNN preserves \(G\)-invariance of the network._
**Proof** Suppose we apply batchnorm immediately after the \(i\)th ReLU layer of the \(G\)-DNN \(f\), for some \(i\in\{1,\ldots,d-1\}\). Then the activations \(r^{(i)}(x)=\operatorname{ReLU}(W^{(i)}f^{(i)}(x)+b^{(i)})\) of the \(i\)th ReLU layer transform as
\[r^{(i)}(x)\rightarrow\gamma\left(\frac{r^{(i)}(x)-\mu\mathbb{1}}{\sigma+\varepsilon}\right)+\beta\mathbb{1},\]
where \(\mu\geq 0\), \(\sigma\geq 0\), \(\mathbf{\varepsilon}>0\), \(\gamma\), and \(\beta\) are all scalars. For each \(i\in\{1,\ldots,\mathbf{d}\}\), let \(W^{(i)}_{i}\) and \(V^{(i)}_{i}\) be the blocks comprising the first \(n_{i}\) columns of \(W^{(i)}\) and \(V^{(i)}\) respectively; these represent the weights of the \(i\)th layer without the skip connections. Then the above affine transformation of \(\mathbf{r^{(i)}(x)}\) is equivalent to the transformation
\[W^{(i+1)}_{i+1} \to CW^{(i+1)}_{i+1}\] \[\mathbf{b^{(i+1)}} \to\mathbf{b^{(i+1)}}+\mathbf{D}W^{(i+1)}_{i+1}\mathbb{1},\]
where
\[C =\frac{\gamma}{\sigma+\mathbf{\varepsilon}}\] \[D =\beta-\frac{\gamma\mu}{\sigma+\mathbf{\varepsilon}}.\]
By Thm. 6 (a), \(W^{(i+1)}=V^{(i+1)}A^{(i)}\). Since \(A^{(i)}\) is upper block-triangular with the top left block \(I_{n_{i+1}}\), then \(W^{(i+1)}_{i+1}=V^{(i+1)}_{i+1}\). By the above transformation of \(W^{(i+1)}_{i+1}\) under batchnorm, \(V^{(i+1)}_{i+1}\) also transforms only by the scalar factor \(C\), and hence it remains equivariant as in Thm. 6 (a).
All that remains is to show the bias \(\mathbf{b^{(i+1)}}\) satisfies the sufficient condition in Lemma 4 even after the batchnorm transformation, and this will establish \(\mathsf{G}\)-invariance of the network with batchnorm.
**Case 1:** Suppose \(\rho^{(i+1)}\) is type 1 irreducible. Then Lemma 4 implies \(b^{(i+1)}\) is parallel to \(\mathbb{1}\). Under batchnorm, \(b^{(i+1)}\) transforms by the addition of \(DV^{(i+1)}_{i+1}\mathbb{1}\) and thus remains parallel to \(\mathbb{1}\). The claim follows by Lemma 4.
**Case 2:** Suppose \(\rho^{(i+1)}\) is type 2 irreducible. Then Lemma 4 implies \(b^{(i+1)}=0\). Under batchnorm, the bias thus transforms to \(0+DV^{(i+1)}_{i+1}\mathbb{1}\). It turns out, however, that \(V^{(i+1)}_{i+1}\mathbb{1}=0\); to see this, by Thm. 6 (a), we have
\[\rho^{(i+1)}(g)V^{(i+1)}_{i+1}=V^{(i+1)}_{i+1}\pi^{(i)}(g)\quad\forall g\in G.\]
Since \(i\geq 1\) so that \(\pi^{(i)}\) is type 1, then without loss of generality, by selecting an appropriate basis, we assume \(\pi^{(i)}\) is an ordinary perm-irrep. Averaging both sides over all \(g\in G\), we obtain
\[P_{\rho^{(i+1)}}V_{i+1}^{(i+1)}=V_{i+1}^{(i+1)}P_{\pi^{(i)}}.\]
Since \(\rho^{(i+1)}\) is type 2, then its only fixed point is the zero vector, and hence the orthogonal projection operator \(P_{\rho^{(i+1)}}\) is itself zero. Moreover, since \(\pi^{(i)}\) is an ordinary perm-irrep and since \(\mathbb{1}\) is fixed under all permutations, then it is fixed under the orthogonal projection operator \(P_{\pi^{(i)}}\) as well; hence \(V_{i+1}^{(i+1)}\mathbb{1}=0\).
**Case 3:** Suppose \(\rho^{(i+1)}\) is reducible. Then decompose it into type 1 and type 2 irreps and apply Cases 1-2 separately to each irrep.
The extension of Prop. 10 to multiple channels is trivial. Batchnorm is typically applied independently to each channel. Thus, if the ReLU activations \(r^{(i)}(x)\) had \(c\) channels (which could be achieved by having \(c\) copies of every irrep in \(\rho^{(i)}\)), then the variables \(\mu\), \(\sigma\), \(\gamma\), and \(\beta\) would be \(c\)-dimensional vectors. The proof would then proceed by selecting a single arbitrary channel.
## Appendix C Examples
### Binary multiplication
**Proof** [Proof of Ex. 1] **(a)** For \(\mathbf{x_{1},x_{2}\in\{-1,1\}}\), it can be verified by hand that
\[x_{1}x_{2} =\begin{bmatrix}1&-1\end{bmatrix}\operatorname{ReLU}\left(\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}\right)-x_{2}\] \[=\begin{bmatrix}1&-1&0&-1\end{bmatrix}\begin{bmatrix}\operatorname{ReLU}\left(\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}\right)\\ x_{1}\\ x_{2}\end{bmatrix}.\quad(*)\]
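The identity (*) is easy to verify numerically; a minimal sketch (our own code, not from the paper):

```
import numpy as np

# Check x1 * x2 == [1, -1] @ ReLU([[1, 1], [1, -1]] @ [x1, x2]) - x2 over
# all four sign patterns.
relu = lambda v: np.maximum(v, 0.0)
W = np.array([[1.0, 1.0], [1.0, -1.0]])
for x1 in (-1.0, 1.0):
    for x2 in (-1.0, 1.0):
        out = np.array([1.0, -1.0]) @ relu(W @ np.array([x1, x2])) - x2
        assert out == x1 * x2
```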
We can extend this to the product of more than two elements as follows: For \(i\in\{1,\ldots,d\}\), define the block-diagonal matrices
\[A^{(i)} =I_{m/2^{i}}\otimes\begin{bmatrix}1&-1\end{bmatrix}\] \[B^{(i)} =I_{m/2^{i}}\otimes\begin{bmatrix}0&-1\end{bmatrix}\] \[C^{(i)} =I_{m/2^{i}}\otimes\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}\]
and define the functions \(p^{(i)}:\mathcal{X}\mapsto\{-1,1\}^{m/2^{i}}\) by
\[p^{(1)}(x) =A^{(1)}x\] \[p^{(i)}(x) =\begin{bmatrix}A^{(i)}&B^{(i)}\end{bmatrix}\begin{bmatrix}\operatorname{ReLU}(C^{(i)}p^{(i-1)}(x))\\ p^{(i-1)}(x)\end{bmatrix}\quad\forall i\in\{2,\ldots,d\}.\quad(**)\]
The function \(p^{(1)}\) maps each pair of elements \((x_{2i-1},x_{2i})\) in the input \(x\) to \(x_{2i-1}-x_{2i}\in\{-1,1\}\). Then \(p^{(i)}\) (1) partitions \(p^{(1)}(x)\) into blocks, each of two elements; (2) takes the product of each pair of elements using (*); (3) iterates this procedure \(i-1\) times. The output \(p^{(d)}(x)\) is then the product of the elements in \(p^{(1)}(x)\).
Now for \(i\in\{1,\ldots,d-1\}\), the function \(g^{(i)}\) (as in Thm. 6 (b)) as given by \(g^{(i)}(x)=C^{(i+1)}\rho^{(i)}(x)\). By (**), we thus have
\[g^{(i)}(x) =\begin{bmatrix}C^{(i+1)}A^{(i)}&C^{(i+1)}B^{(i)}\end{bmatrix}\begin{bmatrix}\operatorname{ReLU}(g^{(i-1)}(x))\\ p^{(i-1)}(x)\end{bmatrix}\] \[=\begin{bmatrix}C^{(i+1)}A^{(i)}&C^{(i+1)}B^{(i)}\end{bmatrix}\begin{bmatrix}\operatorname{ReLU}(g^{(i-1)}(x))-\frac{1}{2}g^{(i-1)}(x)\\ p^{(i-1)}(x)\end{bmatrix}+\frac{1}{2}C^{(i+1)}A^{(i)}g^{(i-1)}(x)\] \[=\begin{bmatrix}C^{(i+1)}A^{(i)}&C^{(i+1)}B^{(i)}\end{bmatrix}\begin{bmatrix}\operatorname{ReLU}(g^{(i-1)}(x))-\frac{1}{2}g^{(i-1)}(x)\\ p^{(i-1)}(x)\end{bmatrix}+\frac{1}{2}C^{(i+1)}A^{(i)}C^{(i)}p^{(i-1)}(x)\] \[=\begin{bmatrix}C^{(i+1)}A^{(i)}&C^{(i+1)}B^{(i)}+\frac{1}{2}C^{(i+1)}A^{(i)}C^{(i)}\end{bmatrix}\begin{bmatrix}\operatorname{ReLU}(g^{(i-1)}(x))-\frac{1}{2}g^{(i-1)}(x)\\ p^{(i-1)}(x)\end{bmatrix}\] \[=\begin{bmatrix}C^{(i+1)}A^{(i)}&0\end{bmatrix}\begin{bmatrix}\operatorname{ReLU}(g^{(i-1)}(x))-\frac{1}{2}g^{(i-1)}(x)\\ p^{(i-1)}(x)\end{bmatrix}\] \[=\begin{bmatrix}C^{(i+1)}A^{(i)}&0\end{bmatrix}\begin{bmatrix}\operatorname{ReLU}(g^{(i-1)}(x))-\frac{1}{2}g^{(i-1)}(x)\\ h^{(i-1)}(x)\end{bmatrix}.\]
Comparing this to the equations in Thm. 6 (b), we establish the claimed expression for \(V^{(i)}_{j}\) for \(i\in\{1,\ldots,d-1\}\).
For the case \(i=d\), we simply observe that the last weight matrix must be the outer weight vector in (*). The \(G\)-invariance of the constructed DNN is clear; the action of any \(g\in G\) on an input \(x\in\mathcal{X}\) corresponds to an even number of sign flips in \(p^{(1)}(x)\), which leaves the parity of the product \(p^{(d)}(x)\) invariant. Layerwise \(G\)-equivariance is established next.
**(b)** By part (a), the first weight matrix is
\[C^{(2)}A^{(1)}=I_{m/4}\otimes\begin{bmatrix}1&-1&1&-1\\ 1&-1&-1&1\end{bmatrix}.\]
The \(j\)th pair of rows in this weight matrix is the \(j\)th channel, whose first row is \(\nu_{j}\) and which transforms by \(\rho_{H_{j}^{(1)}K_{j}^{(1)}}\). The expressions for \(H_{j}^{(1)}\) and \(K_{j}^{(1)}\) are thus established by definition.
Now for \(i\in\{1,\ldots,d\}\), part (a) implies
\[\rho_{\mu_{j}^{(i+1)}}\kappa_{j}^{(i+1)}(g)\begin{bmatrix}1&-1&1&-1\\ 1&-1&-1&1\end{bmatrix}=\begin{bmatrix}1&-1&1&-1\\ 1&-1&-1&1\end{bmatrix}\begin{bmatrix}\pi_{\mu_{j-1}^{(0)}}(g)&0\\ 0&\pi_{\mu_{2j}^{(0)}}(g)\end{bmatrix}\forall g\in G,\]
where each \(\pi_{\mu_{k}^{(0)}}\) is the ordinary perm-rep part of \(\rho_{\mu_{k}^{(0)}}\kappa_{k}^{(0)}\). By definition, \(\kappa_{j}^{(i+1)}\) is the subgroup of all \(g\in G\) such that
\[\rho_{\mu_{j}^{(i+1)}}\kappa_{j}^{(i+1)}(g)e_{1}=e_{1}\]
\[\frac{1}{2}\rho_{\mu_{j}^{(i+1)}}\kappa_{j}^{(i+1)}(g)\begin{bmatrix}1&-1&1&- 1\\ 1&-1&-1&1\end{bmatrix}\begin{bmatrix}1\\ 0\\ 1\\ 0\end{bmatrix}=e_{1}\]
\[\frac{1}{2}\begin{bmatrix}1&-1&1&-1\\ 1&-1&-1&1\end{bmatrix}\begin{bmatrix}\pi_{\mu_{2j-1}^{(0)}}(g)&0\\ 0&\pi_{\mu_{2j}^{(0)}}(g)\end{bmatrix}\forall g\in G\begin{bmatrix}1\\ 0\\ 1\\ 0\end{bmatrix}=e_{1}.\]
Since ordinary perm-reps cannot flip signs, then the last equation is equivalent to
\[\pi_{2j-1}^{(i)}(g)e_{1}=e_{1}\text{ and }\pi_{2j}^{(i)}(g)e_{1}=e_{1}.\]
We thus have
\[\begin{split}K_{j}^{(i+1)}&=\{g\in G:\pi_{2j-1}^{(i)}(g)e_{1}=e_{1}\}\cap\{g\in G:\pi_{2j}^{(i)}(g)e_{1}=e_{1}\}\\ &=H_{2j-1}^{(i)}\cap H_{2j}^{(i)}.\end{split}\]
Similarly, by definition we have
\[H_{j}^{(i+1)}=\{g\in G:\rho_{H_{j}^{(i+1)}K_{j}^{(i+1)}}(g)e_{1}=\pm e_{1}\}.\]
Proceeding analogously as above, we find that \(g\in G\) is contained in \(H_{j}^{(i+1)}\) iff
\[(\pi_{2j-1}^{(i)}(g)e_{1}=e_{1}\text{ and }\pi_{2j}^{(i)}(g)e_{1}=e_{1})\text{ or }(\pi_{2j-1}^{(i)}(g)e_{1}=e_{2}\text{ and }\pi_{2j}^{(i)}(g)e_{1}=e_{2}).\]
The first term in the disjunction corresponds to \(H_{2j-1}^{(i)}\cap H_{2j}^{(i)}\), and the second term in the disjunction corresponds to the intersection of the complements, as claimed.
In the case of the \((d-1)\)st layer, the product output \(\rho^{(d-1)}(x)\) is 2D and thus can only transform by \(\pm 1\). The rep \(\rho^{(d-1)}\) thus decomposes into two copies of a scalar rep. Finally, that the final rep \(\rho^{(d)}\) is trivial follows from the \(G\)-invariance of the network, thereby completing the proof. |
2307.07832 | MixupExplainer: Generalizing Explanations for Graph Neural Networks with
Data Augmentation | Graph Neural Networks (GNNs) have received increasing attention due to their
ability to learn from graph-structured data. However, their predictions are
often not interpretable. Post-hoc instance-level explanation methods have been
proposed to understand GNN predictions. These methods seek to discover
substructures that explain the prediction behavior of a trained GNN. In this
paper, we shed light on the existence of the distribution shifting issue in
existing methods, which affects explanation quality, particularly in
applications on real-life datasets with tight decision boundaries. To address
this issue, we introduce a generalized Graph Information Bottleneck (GIB) form
that includes a label-independent graph variable, which is equivalent to the
vanilla GIB. Driven by the generalized GIB, we propose a graph mixup method,
MixupExplainer, with a theoretical guarantee to resolve the distribution
shifting issue. We conduct extensive experiments on both synthetic and
real-world datasets to validate the effectiveness of our proposed mixup
approach over existing approaches. We also provide a detailed analysis of how
our proposed approach alleviates the distribution shifting issue. | Jiaxing Zhang, Dongsheng Luo, Hua Wei | 2023-07-15T15:46:38Z | http://arxiv.org/abs/2307.07832v1 | # MixupExplainer: Generalizing Explanations for Graph Neural Networks with Data Augmentation
###### Abstract.
Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data. However, their predictions are often not interpretable. Post-hoc instance-level explanation methods have been proposed to understand GNN predictions. These methods seek to discover substructures that explain the prediction behavior of a trained GNN. In this paper, we shed light on the existence of the distribution shifting issue in existing methods, which affects explanation quality, particularly in applications on real-life datasets with tight decision boundaries. To address this issue, we introduce a generalized Graph Information Bottleneck (GIB) form that includes a label-independent graph variable, which is equivalent to the vanilla GIB. Driven by the generalized GIB, we propose a graph mixup method, MixupExplainer, with a theoretical guarantee to resolve the distribution shifting issue. We conduct extensive experiments on both synthetic and real-world datasets to validate the effectiveness of our proposed mixup approach over existing approaches. We also provide a detailed analysis of how our proposed approach alleviates the distribution shifting issue.
graph neural network, explainability, data augmentation

Footnote †: Corresponding author
and its explanation shows that the explanation embeddings are out of distribution with respect to the original graphs, which impairs the safe use of the approximation because of the inductive bias in \(f\). The negative impact of the distribution shifting problem on explanation quality is especially pronounced when applied to complex real-world datasets with tight decision boundaries.
While the distribution shifting issue in post-hoc explanations has gained growing attention in computer vision (Beng et al., 2017), this issue is less explored in the graph domain. In computer vision, (Beng et al., 2017) optimizes image classifier explanations to highlight contextual information relevant to the prediction and consistent with the training distribution. (Kang et al., 2018) addresses the distribution shifting issue in image explanation via a module that quantifies affinity between perturbed data and original dataset distribution. In the graph domain, while a recent work (Kang et al., 2019) attempts to address distribution shifting by annealing the size constraint coefficient at the start of the explanation process, the distribution shifting issue still persists throughout the explanation process.
To address the distribution shifting issue in post-hoc graph explanation, we introduce a general form of Graph Information Bottleneck (GIB) that includes another label-independent graph variable \(G^{\Delta}\). This new form of GIB is proven equivalent to vanilla GIB. By having \(G^{\Delta}\) in the objective, we can alleviate the distribution shifting problem with theoretical guarantees. To further improve the explanation method, we propose MixupExplainer using an improved Mixup approach. The MixupExplainer assumes that a non-explainable part of a graph is label-independent and mixes the explanation with a non-explainable structure from another randomly sampled graph. The explanation substructure is obtained by minimizing the difference between the predicted labels of the original graph and the mixup graph.
To this end, we summarize our contributions as follows.
* For the first time, we point out that the distribution shifting problem is prevalent in the most popular post-hoc explanation framework for graph neural networks.
* We derive a generalized framework with a solid theoretical foundation to alleviate the problem and propose a straightforward yet effective instantiation based on mixing up the explanation with a randomly sampled base structure by aligning the graph and mixing the graph masks.
* Comprehensive empirical studies on both synthetic and real-life datasets demonstrate that our method can dramatically and consistently improve the quality of the explanations, with improvements of up to 35.5% in AUC scores.
## 2. Related Work
### Graph Neural Networks
The use of graph neural networks (GNNs) is on the rise for analyzing graph structure data, as seen in recent research studies (Beng et al., 2017; Chen et al., 2018; Chen et al., 2018). There are two main types of GNNs: spectral-based approaches (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018) and spatial-based approaches (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). Despite the differences, message passing is a common framework for both, using pattern extraction and message interaction between layers to update node embeddings. However, GNNs are still considered a black box model with a hard-to-understand mechanism, particularly for graph data, which is harder to interpret compared to image data. To fully utilize GNNs, especially in high-risk applications, it is crucial to develop methods for understanding how they work.
### GNN Explanation
Many attempts have been made to interpret GNN models and explain their predictions (Kang et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). These methods can be grouped into two categories based on granularity: (1) instance-level explanation, which explains the prediction for each instance by identifying significant substructures (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018), and (2) model-level explanation, which seeks to understand the global decision rules captured by the GNN (Kang et al., 2018; Chen et al., 2018; Chen et al., 2018). From a methodological perspective, existing methods can be classified as (1) self-explainable GNNs (Kang et al., 2018; Chen et al., 2018), where the GNN can provide both predictions and explanations, and (2) post-hoc explanations (Kang et al., 2018; Chen et al., 2018; Chen et al., 2018), which use another model or strategy to explain the target GNN. In this work, we focus on post-hoc instance-level explanations, which involve identifying instance-wise critical substructures to explain the prediction. Various strategies have been explored, including gradient signals, perturbed predictions, and decomposition.
Perturbed prediction-based methods are the most widely used in post-hoc instance-level explanations. The idea is to learn a perturbation mask that filters out non-important connections and identifies dominant substructures while preserving the original predictions. For example, GNNExplainer (Chen et al., 2018) uses end-to-end learned soft masks on node attributes and graph structure, while PGExplainer (Kang et al., 2018) incorporates a graph generator to incorporate global information. RG-Explainer (Chen et al., 2018) uses reinforcement learning technology with starting point selection to find important substructures for the explanation.
However, most of these methods fail to consider the distribution shifting issue. The explanation should contain the same information that contributes to the prediction, but the GNN is trained on a data pattern that consists of an explanation subgraph relevant to labels, and a label-independent structure, leading to a distribution shifting problem when feeding the explanation directly into the GNN. Our
Figure 1. Visualization of original graphs \(G\), explanation subgraphs \(G^{*}\), and our generated graphs \(G^{(\text{mix})}\). There is a large distributional divergence between explanation subgraphs \(G^{*}\) and original graphs \(G\). \(G_{a}\) and \(G_{b}\) are two graphs in the original dataset. More experimental results on the existence of the distributional divergence can be found in Section 5.3.
method aims to capture the distribution information of the graph and build the explanation with a label-independent structure to help the explainer better minimize the objective function and retrieve a higher-quality explanation.
### Graph Data Augmentation with Mixup
Data augmentation addresses issues such as noise, scarcity, and out-of-distribution problems. One popular data augmentation approach is using Mixup (Wang et al., 2017) strategy to generate synthetic training examples based on feature mixing and label mixing. Specifically, (Wang et al., 2017; Wang et al., 2018) mix the graph representation learned from GNNs to avoid dealing with the arbitrary structure in the input space for mixing a node or graph pair. ifMixup (Wang et al., 2018) interpolates both the node features and the edges of the input pair based on feature mixing and graph generation. (Wang et al., 2018) and (Wang et al., 2018) generate interpolated graphs with the estimation of the properties in the graph data, like the graphon of each class or nearest neighbors of target nodes. All the previous methods (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) aim to generalize the mixup approach to improve the performance of classification models like GNNs. Unlike existing graph mixup approaches, this paper solves a different task, which is to generalize the explanations for GNN.
## 3. Preliminary
### Notations and Problem Definition
We denote a graph as \(G=(\mathcal{V},\mathcal{E};\mathbf{X},\mathbf{A})\), where \(\mathcal{V}=\{v_{1},v_{2},...,v_{n}\}\) represents a set of \(n\) nodes and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) represents the edge set. Each graph has a node feature matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\), whose \(i\)-th row \(\mathbf{x}_{i}\in\mathbb{R}^{1\times d}\) is the \(d\)-dimensional feature of node \(v_{i}\). \(\mathcal{E}\) is described by an adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\), where \(A_{ij}=1\) means that there is an edge between nodes \(v_{i}\) and \(v_{j}\); otherwise, \(A_{ij}=0\).
For the graph classification task, each graph \(G_{i}\) has a label \(Y_{i}\in\mathcal{C}\), with a GNN model \(f\) trained to classify \(G_{i}\) into its class, i.e., \(f:(\mathbf{X},\mathbf{A})\mapsto\{1,2,...,C\}\). For the node classification task, each graph \(G_{i}\) denotes a \(K\)-hop subgraph centered around node \(v_{i}\), with a GNN model \(f\) trained to predict the label of \(v_{i}\) based on the node representation of \(v_{i}\) learned from \(G_{i}\).
Problem 1 (Post-hoc Instance-level GNN Explanation).: _Given a trained GNN model \(f\), for an arbitrary input graph \(G=(\mathcal{V},\mathcal{E};X,\mathbf{A})\), the goal of post-hoc instance-level GNN explanation is to find a subgraph \(G^{*}\) that can explain the prediction of \(f\) on \(G\)._
Informative feature selection has been well studied for non-graph structured data (Wang et al., 2017), and traditional methods, such as the concrete autoencoder (Bengio et al., 2017), can be directly extended to explain features in GNNs. In this paper, we focus on discovering important topologies. Formally, the obtained explanation \(G^{*}\) is depicted by a binary mask \(\mathbf{M}\in\{0,1\}^{n\times n}\) on the adjacency matrix, i.e., \(G^{*}=(\mathcal{V},\mathcal{E};\mathbf{X},\mathbf{A}\odot\mathbf{M})\), where \(\odot\) denotes element-wise multiplication. The mask highlights the components of \(G\) that are essential for \(f\) to make the prediction.
### Graph Information Bottleneck
The Information Bottleneck (IB) (Wang et al., 2018; Wang et al., 2018) provides an intuitive principle for learning dense representations: an optimal representation should contain _minimal_ and _sufficient_ information for the downstream prediction task. Based on IB, a recent work unifies most existing post-hoc explanation methods for GNNs, such as GNNExplainer (Wang et al., 2018) and PGExplainer (Wang et al., 2018), with the graph information bottleneck (GIB) principle (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Formally, the objective of explaining the prediction of \(f\) on \(G\) can be represented by
\[\operatorname*{arg\,min}_{G^{*}}I(G,G^{*})-\alpha I(G^{*},Y), \tag{1}\]
where \(G^{*}\) is the explanation subgraph, \(Y\) is the original or ground truth label, and \(\alpha\) is a hyper-parameter that trades off the minimal and sufficient constraints. GIB minimizes the mutual information \(I(G,G^{*})\) to select the minimal explanation that inherits only the most indicative information from \(G\), while maximizing \(I(G^{*},Y)\) so that \(G^{*}\) predicts the label \(Y\); using \(I(G,G^{*})\) avoids imposing potentially biased constraints, such as the size or the connectivity of the selected subgraphs (Wang et al., 2018). Through the optimization of the subgraph, \(G^{*}\) provides the model interpretation. Further, from the definition of mutual information, we have \(I(G^{*},Y)=H(Y)-H(Y|G^{*})\), where the entropy \(H(Y)\) is static and independent of the explanation process. Thus, maximizing the mutual information between the explanation subgraph \(G^{*}\) and \(Y\) can be reformulated as minimizing the conditional entropy of \(Y\) given \(G^{*}\). Formally, we rewrite the GIB objective as follows:
\[\operatorname*{arg\,min}_{G^{*}}I(G,G^{*})+\alpha H(Y|G^{*}). \tag{2}\]
As shown in Figure 2(a), the objective function in Eq. (2) optimizes \(G^{*}\) to have minimal mutual information with the original graph \(G\), which can be expressed as a subgraph of \(G\) with a smaller size or with scattered components, while at the same time providing maximal mutual information with \(Y\), which is equivalent to having minimal conditional entropy \(H(Y|G^{*})\).
Due to the intractability of the entropy of the label conditioned on the explanation, a widely adopted approximation in previous methods (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) is:
\[\operatorname*{arg\,min}_{G^{*}}I(G,G^{*})+\alpha H(Y|G^{*})\approx \operatorname*{arg\,min}_{G^{*}}I(G,G^{*})+\alpha CE(Y,Y^{*}), \tag{3}\]
where \(Y^{*}=f(G^{*})\) is the predicted label of \(G^{*}\) made by the model to be explained, \(f\), and the cross-entropy \(\operatorname{CE}(Y,Y^{*})\) between the ground truth label \(Y\) and \(Y^{*}\) is used to approximate \(H(Y|G^{*})\).
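To make the approximation in Eq. (3) concrete, the sketch below shows how mask-based explainers such as GNNExplainer commonly instantiate it in practice: \(I(G,G^{*})\) is surrogated by a sparsity penalty plus a mask-entropy penalty on the soft edge mask, and \(H(Y|G^{*})\) by the cross-entropy term. All function and coefficient names here are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def gib_loss(edge_mask_logits, pred_on_masked_graph, y_true,
             alpha=1.0, size_coeff=0.01, ent_coeff=0.1):
    """Surrogate of Eq. (3): sparsity + mask entropy approximate I(G, G*);
    cross-entropy between Y and f(G*) approximates H(Y|G*)."""
    mask = torch.sigmoid(edge_mask_logits)           # soft edge mask in (0, 1)
    size_loss = size_coeff * mask.sum()              # favors a small subgraph
    ent = -mask * torch.log(mask + 1e-8) \
          - (1.0 - mask) * torch.log(1.0 - mask + 1e-8)
    ent_loss = ent_coeff * ent.mean()                # pushes the mask toward binary
    ce_loss = F.cross_entropy(pred_on_masked_graph, y_true)
    return size_loss + ent_loss + alpha * ce_loss
```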
## 4. Methodology
In this section, we first introduce an overlooked problem in the GIB objective. Then we propose a generalized GIB objective to address the problem, which directly inspires our method through a mixup approach.
### Generalized GIB
#### 4.1.1. Diverging Distributions in Eq. (3).
Although prevalent, the approximation with \(Y^{*}=f(G^{*})\) in Eq. (3) overlooks the distributional divergence between the original graph \(G\) and the dense subgraph \(G^{*}\) after the processing of the prediction model \(f\). An intuitive example from the MUTAG dataset (Chen et al., 2017) is shown in Figure 1. The prediction model \(f\), represented by a hypothesis line, performs well in classifying the positive and negative samples. Due to the distribution shifting problem naturally inherent in \(f(G)\) and \(f(G^{*})\) on explanation subgraphs, \(f\) maps some explanation subgraphs across the decision boundary to the negative region. As a result, the explanation subgraph achieved by Eq. (3) may be suboptimal and even
far away from the ground truth explanation due to the significant divergence between \(f(G^{*})\) and \(f(G)\). The existing GIB framework could work for simple synthetic datasets by relying on the implicit knowledge associated with the class and assuming a large decision margin between the two or more classes. However, in more practical scenarios like MUTAG, the existing approximation may be heavily affected by the distribution shifting problem (Kraus et al., 2019; Kraus et al., 2019).
#### 4.1.2. Addressing with Label-independent Subgraph
To address the above challenge in the previous GIB methods, we first generalize the existing GIB framework by taking a label-independent subgraph \(G^{\Delta}\) into consideration. The intuition is that for an original graph \(G_{a}\) with label \(Y_{a}\), the label-independent subgraph \(G^{\Delta}_{a}\) also contains useful information. For example, \(G^{\Delta}_{a}\) makes sure that connecting it with the label-preserving subgraph \(G^{*}_{a}\) will not lead to another label. Formally, given a graph variable \(G^{\Delta}\) that satisfies \(I(G^{\Delta},Y|G^{*})=0\), the GIB objective can be generalized as follows.
\[\operatorname*{arg\,min}_{G^{*}}I(G,G^{*})+\alpha H(Y|G^{*},G^{\Delta}), \quad\text{s.t.}\quad I(G^{\Delta},Y|G^{*})=0. \tag{4}\]
As shown below, our generalized GIB has the following property.
Property 1 ().: _The generalized GIB objective, Eq. (4) is equivalent to vanilla GIB, Eq. (2)._
This can be proved by the definition of conditional entropy. With the condition that \(I(G^{\Delta},Y|G^{*})=0\), we have \(H(Y|G^{*})=H(Y|G^{*},G^{\Delta})+I(G^{\Delta},Y|G^{*})=H(Y|G^{*},G^{\Delta})\). Thus, the optimal solutions of GIB and our generalized version are equivalent. In addition, the advantage of our objective is that by choosing a suitable \(G^{\Delta}\) that minimizes the distribution distance \(D(G^{*}+G^{\Delta},G)\), we can approximate the GIB without incurring the distribution shifting problem. An intuitive illustration is given in Figure 2(b).
Following existing work (Kraus et al., 2019; Kraus et al., 2019), we can further approximate \(H(Y|G^{*},G^{\Delta})\) with \(\operatorname{CE}(Y,Y^{m})\), where \(Y^{m}=f(G^{*}+G^{\Delta})\) is the predicted label of \(G^{*}+G^{\Delta}\) made by the model \(f\) to be explained. In particular, when \(G^{\Delta}\) is an empty graph, our objective degenerates to the vanilla approximation. Formally, we derive our new objective for GNN explanation as follows:
\[\operatorname*{arg\,min}_{G^{\Delta},G^{*}} I(G,G^{*})+\alpha\operatorname{CE}(Y,Y^{m})\] \[\text{s.t.}\operatorname{D}(G^{*}+G^{\Delta},G)=0,I(G^{\Delta},Y| G^{*})=0. \tag{5}\]
### MixupExplainer
Inspired by Eq. (5), in this section, we introduce a straightforward yet theoretically guaranteed instantiation, MixupExplainer, to resolve the distribution shifting issue. Figure 3 demonstrates the overall framework of the proposed MixupExplainer and the differences between MixupExplainer and previous GIB methods. MixupExplainer includes a graph generation phase after extracting the explanation of the graph with the explainer. Specifically, we instantiate \(G^{\Delta}\) from the distribution of label-independent subgraphs in the graph dataset, denoted as \(\mathbb{P}_{\mathcal{G}^{(i)}}\), and connect \(G^{*}\) and \(G^{\Delta}\) to generate a new graph \(G^{(\text{mix})}\). Formally,
\[G^{\Delta}\sim\mathbb{P}_{\mathcal{G}^{(i)}},\quad G^{(\text{mix})}=G^{*}+G^{ \Delta}. \tag{6}\]
To avoid the trivial case that \(G=G^{(\text{mix})}\), when sampling \(G^{\Delta}\), we exclude the original graph itself. In addition, since \(G^{\Delta}\) is sampled without considering the label information, we can make the safe assumption that \(I(G^{\Delta},Y|G^{*})=0\).
As stated in Problem 1, given a graph \(G_{a}=(\mathbf{A}_{a},\mathbf{X}_{a})\)1 and a to-be-explained model \(f\), an explanation model \(g\) aims to learn a subgraph \(G^{*}_{a}\), represented by the edge mask \(\mathbf{M}_{a}=g(G_{a})\) on the adjacency matrix \(\mathbf{A}_{a}\). To generate a graph distributed similarly to \(G_{a}\), we need to generate a label-independent subgraph, for which we randomly sample another graph instance from the dataset, denoted by \(G_{b}\), without considering the label information. With the explanation model \(g\), we obtain the corresponding edge mask \(\mathbf{M}_{b}\) for \(G_{b}\). Then, we mix these two graphs by connecting the informative part of \(G_{a}\) and the label-independent part of \(G_{b}\). We first assume that \(G_{a}\) and \(G_{b}\) share the same set of nodes; more general cases are discussed in the next section. Formally, the mask of the mixed graph, \(\mathbf{M}_{a}^{(\text{mix})}\), is calculated as follows.
Footnote 1: We omit \(\mathcal{V}\) and \(\mathcal{E}\) to simplify the notation.
\[\mathbf{M}_{a}^{(\text{mix})}=\lambda\mathbf{M}_{a}+(\mathbf{A}_{b}-\lambda\mathbf{M}_{b}), \tag{7}\]
where \(\mathbf{A}_{b}\) is the adjacency matrix of graph \(G_{b}\) and \(\lambda\) is a hyper-parameter to support flexible usage of mixup operation. Then, we have \(G_{a}^{(\text{mix})}=(\mathbf{X}_{a},\mathbf{M}_{a}^{(\text{mix})})\). The mask matrix \(\mathbf{M}_{a}\) and \(\mathbf{M}_{b}\) denote the weight of the edges in \(\mathbf{A}_{a}\) and \(\mathbf{A}_{b}\), respectively, with the same size of the matrix. By default, we mix up \(G^{*}_{a}\) with the rest part of the \(G_{b}\) by setting \(\lambda=1\) and above formula could be further simplified as:
\[\mathbf{M}_{a}^{(\text{mix})}=\mathbf{M}_{a}+(\mathbf{A}_{b}-\mathbf{M}_{b}). \tag{8}\]
Note that our proposed mixup approach is different from traditional mixup approaches (Kraus et al., 2019; Kraus et al., 2019; Kraus et al., 2019) in data augmentation, which usually follow a form similar to \(\mathbf{M}^{(\text{mix})}=\lambda\mathbf{M}_{a}+(1-\lambda)\mathbf{M}_{b}\). This form of mixup does not differentiate the label-dependent from the label-independent parts. On the contrary, our proposed mixup approach in Eq. (7) includes the label-dependent part of \(G_{a}\) via \(\lambda\mathbf{M}_{a}\) and excludes the label-dependent part of \(G_{b}\) by subtracting \(\lambda\mathbf{M}_{b}\) from \(\mathbf{A}_{b}\).
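As a minimal illustration of Eq. (7), assuming the two graphs share the same node set so that all matrices are \(n\times n\) (variable names are ours), the mixed mask is a one-line computation:

```python
import torch

def mixup_mask(M_a, M_b, A_b, lam=1.0):
    # Eq. (7): keep the label-dependent part of G_a (lam * M_a) and the
    # label-independent remainder of G_b (A_b - lam * M_b).
    return lam * M_a + (A_b - lam * M_b)
```

With `lam=1.0` this reduces to the default form in Eq. (8).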
Figure 2. Illustration of GIB and our proposed new objective. (a) Previous vanilla GIB objective aims to minimize \(I(G^{*},Y)\) and \(H(Y|G^{*})\), with a smaller overlap between \(G^{*}\) and \(G\). (b) Our generalized GIB objective has the same objective as vanilla GIB, with a larger lap between \(G\) and \(G^{*}+G^{\Delta}\), resulting in less distribution shifting issue.
#### 4.2.1. Implementation
In this section, we introduce the implementation details of the mixup function and provide the pseudo-code of graph mixup in Algorithm 1.
Given a graph \(G_{a}\) with \(n_{a}\) nodes and another graph \(G_{b}\) with \(n_{b}\) nodes, the addition in Eq. (7) between two matrices requires that \(\mathbf{M}_{a}\) and \(\mathbf{M}_{b}\) have the same dimensions, i.e., that \(G_{a}\) and \(G_{b}\) have the same number of nodes. However, in real-world graph datasets, this assumption may not hold, leading to a mismatch between the dimensions of \(\mathbf{M}_{a}\) and \(\mathbf{M}_{b}\). In order to merge two graphs with different node sets, we first extend the node sets of \(G_{a}\) and \(G_{b}\) to a single node set \(\mathcal{V}_{a}\cup\mathcal{V}_{b}\), and their adjacency matrices are calculated with the following functions:
\[\mathbf{A}_{a}^{\text{ext}}=\left[\begin{array}{cc}\mathbf{A}_{a}&0\\ 0&\mathbb{0}_{b}\end{array}\right],\quad\mathbf{A}_{b}^{\text{ext}}=\left[\begin{array}{cc}\mathbb{0}_{a}&0\\ 0&\mathbf{A}_{b}\end{array}\right], \tag{9}\]
where \(\mathbb{0}_{a}\) and \(\mathbb{0}_{b}\) are zero matrices with shapes \(n_{a}\times n_{a}\) and \(n_{b}\times n_{b}\), respectively.
After extending \(G_{a}\) and \(G_{b}\), we then merge them into \(G^{(\text{mix})}=(\mathbf{X}^{(\text{mix})},\mathbf{M}_{a}^{(\text{mix})}\odot\mathbf{A}^{(\text{mix})})\), where \(\mathbf{X}^{(\text{mix})}=[\mathbf{X}_{a};\mathbf{X}_{b}]\) is the concatenation of node features \(\mathbf{X}_{a}\) and \(\mathbf{X}_{b}\); \(\mathbf{A}^{(\text{mix})}\) is the merged adjacency matrix; \(\mathbf{M}_{a}^{(\text{mix})}\) is the edge mask indicating the edge weights for the explanation.
Specifically, the adjacency matrix of \(G^{(\text{mix})}\) is:
\[\mathbf{A}^{(\text{mix})}=\left[\begin{array}{cc}\mathbf{A}_{a}&\mathbf{A}_{c}\\ \mathbf{A}_{c}^{T}&\mathbf{A}_{b}\end{array}\right], \tag{10}\]
where \(\mathbf{A}_{c}\) is a matrix indicating the cross-graph connectivity between the nodes in \(G_{a}\) and \(G_{b}\). In practice, we randomly sample \(\eta\) cross-graph edges to connect \(G_{a}\) and \(G_{b}\) at each mixup step, ensuring that the mixed graph is connected so that the label-dependent and label-independent subgraphs are optimized together.
Similarly, the edge mask matrix is obtained from extended \(\mathbf{M}_{a}\) and \(\mathbf{M}_{b}\) and calculated with Eq. (7). Formally, we have
\[\mathbf{M}_{a}^{(\text{mix})}=\left[\begin{array}{cc}\lambda\mathbf{M}_{a}&\mathbf{M}_ {c}\\ \mathbf{M}_{c}^{T}&\mathbf{A}_{b}-\lambda\mathbf{M}_{b}\end{array}\right] \tag{11}\]
where the explainer \(g\) produces \(\mathbf{M}_{a}\) and \(\mathbf{M}_{b}\), and \(\mathbf{M}_{c}\) is the weight matrix on the randomly sampled cross-graph edges corresponding to \(\mathbf{A}_{c}\); its values are randomly sampled on the connected edges in \(\mathbf{A}_{c}\) at each mixup step and thus are not optimized by \(g\).
Finally, we can mix the extended edge weight matrices \(\mathbf{M}_{a}^{\text{ext}}\) and \(\mathbf{M}_{b}^{\text{ext}}\) together with Eq. (7). The mixed graph \(G_{a}^{(\text{mix})}\) is then fed into the GNN model \(f\) to calculate the predicted result \(Y^{(\text{mix})}\). The detailed implementation is shown in Algorithm 1.
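The padding-and-merging procedure of Eqs. (9)-(11) can be sketched as follows. This is an illustrative reading of Algorithm 1 under our own variable names, with the cross-edge sampling simplified to uniform random node pairs; the released implementation may differ in details.

```python
import torch

def mixup_graphs(A_a, M_a, X_a, A_b, M_b, X_b, lam=1.0, eta=1):
    """Pad both graphs onto the union node set (Eq. 9), add eta random
    cross-graph edges (Eq. 10), and assemble the mixed mask (Eq. 11)."""
    na, nb = A_a.size(0), A_b.size(0)
    n = na + nb
    A_mix = torch.zeros(n, n)
    A_mix[:na, :na] = A_a                    # block-diagonal extension
    A_mix[na:, na:] = A_b
    M_mix = torch.zeros(n, n)
    M_mix[:na, :na] = lam * M_a              # label-dependent part of G_a
    M_mix[na:, na:] = A_b - lam * M_b        # label-independent part of G_b
    # eta random cross-graph edges with random, non-trainable weights (M_c)
    src = torch.randint(0, na, (eta,))
    dst = torch.randint(na, n, (eta,))
    w = torch.rand(eta)
    A_mix[src, dst] = 1.0
    A_mix[dst, src] = 1.0
    M_mix[src, dst] = w
    M_mix[dst, src] = w
    X_mix = torch.cat([X_a, X_b], dim=0)     # concatenated node features
    return X_mix, A_mix, M_mix
```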
#### 4.2.2. Computational Complexity Analysis
Here, we analyze the computational complexity of our mixup approach. Given a graph \(G_{a}\) and a randomly sampled graph \(G_{b}\), the complexity of graph extension on adjacency matrices and edge masks is \(\mathcal{O}(|\mathcal{E}_{a}|+|\mathcal{E}_{b}|)\), where \(|\mathcal{E}_{a}|\) and \(|\mathcal{E}_{b}|\) denote the number of edges in \(G_{a}\) and \(G_{b}\), respectively. To generate \(\eta\) cross-graph edges, the computational complexity is \(\mathcal{O}(\eta)\). For mixup, the complexity is \(\mathcal{O}(|\mathcal{E}_{a}|+|\mathcal{E}_{b}|)\). By considering \(\eta\) as a small constant, the overall complexity of our mixup approach is \(\mathcal{O}(|\mathcal{E}_{a}|+|\mathcal{E}_{b}|)\).
#### 4.2.3. Theoretical Justification
In the following, we theoretically prove that: _the proposed mixup approach could reduce the distance between the explanation and original graphs_. Formally, we have the following theorem:
**Theorem 1**.: _Given an original graph \(G\), graph explanation \(G^{*}\) and \(G^{(\text{mix})}\) generated by Eq. (7), we have \(KL(G,G^{*})\geq KL(G,G^{(\text{mix})})\)._
Proof Sketch.: According to previous work (Zhu et al., 2019; Zhang et al., 2020), a graph \(G\) can be treated as \(G=G^{(e)}+G^{(i)}\), where \(G^{(e)}\) represents the underlying subgraph that makes important contributions to the GNN's predictions, i.e., the expected explanatory graph, and \(G^{(i)}\) consists of
Figure 3. Illustration of the GIB-based explanation and our proposed MixupExplainer. (a) Vanilla GIB directly minimizes \(\text{CE}(Y,Y^{*})\), which is the cross entropy between the original prediction \(Y\) and the prediction of explanation subgraph \(G^{*}\) made by the to-be-explained model \(f\). (b) Our MixupExplainer first generates an augmented graph \(G^{(\text{mix})}\) by mixing up the explanation subgraph \(G^{*}\) with the label-independent part from another randomly sampled graph. Then we minimize the cross entropy between \(Y\) and \(Y^{(\text{mix})}\), the prediction made by \(f\) on \(G^{(\text{mix})}\).
the remaining label-independent edges for predictions made by the GNN. Assuming that \(G^{(e)}\) and \(G^{(i)}\) independently follow the distributions \(\mathbb{P}_{\mathcal{G}^{(e)}}\) and \(\mathbb{P}_{\mathcal{G}^{(i)}}\), respectively, denoted as \(G^{(e)}\sim\mathbb{P}_{\mathcal{G}^{(e)}}\) and \(G^{(i)}\sim\mathbb{P}_{\mathcal{G}^{(i)}}\), we randomly sample \(G_{b}=G_{b}^{(e)}+G_{b}^{(i)}\) from the dataset. Both \(G\) and \(G_{b}\) follow the distribution \(\mathbb{P}_{\mathcal{G}}=\mathbb{P}_{\mathcal{G}^{(e)}}\ast\mathbb{P}_{\mathcal{G}^{(i)}}\). We can then obtain our mixup explanation:
\[G^{(\text{mix})}\coloneqq G^{(\mathbf{e})}+(G_{b}-G_{b}^{(\mathbf{e})})=G^{(\mathbf{e})}+G _{b}^{(i)}, \tag{12}\]
Then, we have \(\mathbb{P}_{\mathbf{\mathcal{G}}^{(\text{mix})}}=\mathbb{P}_{\mathbf{\mathcal{G}}^{(e )}}\ast\mathbb{P}_{\mathbf{\mathcal{G}}^{(i)}}=\mathbb{P}_{\mathbf{\mathcal{G}}}\). It is easy to show that \(KL(G,G^{(\text{mix})})=0\). Thus, we have
\[KL(G,G^{*})\geq KL(G,G^{(\text{mix})}) \tag{13}\]
This theoretical justification shows that our objective function can better estimate the explanation distribution and alleviate the distribution shifting issue compared with the previous approach. In addition, under the safe assumption that \(I(G^{\Delta},Y|G^{*})=0\), as discussed for Eq. (6), MixupExplainer satisfies the constraint in Eq. (5). Thus, we can simplify the objective for MixupExplainer as:
\[\operatorname*{arg\,min}_{G^{*}}\quad I(G,G^{*})+\alpha\text{CE}(Y,Y^{(\text{ mix})}) \tag{14}\]
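Compared with the vanilla surrogate of Eq. (3), the only change needed in an implementation is that the cross-entropy is computed on the prediction for the mixed graph \(G^{(\text{mix})}\) rather than on the bare explanation \(G^{*}\); a hedged sketch with our own names:

```python
import torch
import torch.nn.functional as F

def mixup_explainer_loss(edge_mask_logits, pred_on_mixed_graph, y_true,
                         alpha=1.0, size_coeff=0.01):
    """Sketch of Eq. (14): the usual sparsity surrogate for I(G, G*), plus
    CE(Y, Y^(mix)) computed on the prediction for the mixed graph."""
    mask = torch.sigmoid(edge_mask_logits)
    size_loss = size_coeff * mask.sum()
    ce_loss = F.cross_entropy(pred_on_mixed_graph, y_true)
    return size_loss + alpha * ce_loss
```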
## 5. Experimental Study
We conduct comprehensive experimental studies on benchmark datasets to empirically verify the effectiveness of the proposed MixupExplainer. Specifically, we aim to answer the following research questions:
* RQ1: Can the proposed framework outperform the GIB in identifying explanatory substructures for GNNs?
* RQ2: Is the distribution shifting issue severe in the existing GNN explanation methods? Could the proposed Mixup approach alleviate this issue?
* RQ3: How does the proposed approach perform under different hyperparameters?
### Experiment Settings
#### 5.1.1. Datasets
We focus on analyzing the effects of the distribution shifting problem between the ground truth explanation and the original graphs. Thus, we select six publicly available benchmark datasets with ground truth explanations in our empirical studies 2.
Footnote 2: All the dataset and codes can be found in [https://github.com/jr48/MixupExplainer](https://github.com/jr48/MixupExplainer)
* **BA-Shapes**(Zhang et al., 2019): This is a node classification dataset based on a 300-node Barabasi-Albert (BA) graph, to which 80 "house" motifs have been randomly attached. The nodes are labeled for use by GNN classifiers, while the edges within the corresponding motif serve as ground truth for explainers. There are four classes in the classification task, with one class indicating nodes in the base graph and the others indicating the relative location of nodes in the motif.
* **BA-Community**(Zhang et al., 2019): This extends the BA-Shapes dataset to more complex scenarios with eight classes. Two types of motifs are attached to the base graph, with nodes in different motifs having different labels.
* **Tree-Circles**(Zhang et al., 2019): This is a node classification dataset with two classes, with a binary tree serving as the base graph and a 6-node cycle structure as the motif. The labels only indicate if the nodes are in the motifs.
* **Tree-Grid**(Zhang et al., 2019): This is a node classification dataset created by attaching 80 grid motifs to a single 8-layer balanced binary tree. The labels only indicate if the nodes are in the motifs, and edges within the relative motif are used as ground-truth explanations.
* **BA-2motifs**(Zhang et al., 2019): This is a graph classification dataset where the label of the graph depends on the type of motif attached to the base graph, which is a BA random graph. The two types of motifs are a 5-node house structure and a 5-node circle structure.
* **MUTAG**(Chen et al., 2019): Unlike other synthetic datasets, MUTAG is a real-world molecular dataset commonly used for graph classification explanations. Each graph in MUTAG represents a molecule, with nodes representing atoms and edges representing bonds between atoms. The labels for the graphs are based on the chemical functionalities of the corresponding molecules.
#### 5.1.2. Baselines
To assess the effectiveness of the proposed framework, we use representative GIB-based explanation methods, GNNExplainer (Zhang et al., 2019) and PGExplainer (Zhang et al., 2019) as baselines. We include these two backbone explainers in our framework MixupExplainer and replace the GIB objective with the new proposed mixup objective. The methods are denoted by MixUp-GNNExplainer and MixUp-PGExplainer, respectively. We also include other types of post-hoc explanation methods for comparison, including GRAD (Zhang et al., 2019), ATT (Zhang et al., 2019), SubgraphX (Zhang et al., 2019), MetaGNN (Zhang et al., 2019), and RG-Explainer (Zhang et al., 2019).
* **GRAD**(Zhang et al., 2019): GRAD learns weight vectors of edges by computing gradients of GNN's objective function.
* **ATT**(Zhang et al., 2019): ATT distinguishes the edge attention weights in the input graph with the self-attention layers. Each edge's importance is obtained by averaging its attention weights across all attention layers.
* **SubgraphX**(Zhang et al., 2019): SubgraphX uses Monte Carlo Tree Search (MCTS) to find out the connected sub-graphs, which could preserve the predictions as explanations.
* **MetaGNN**(Zhang et al., 2019) MetaGNN proposes a meta-explainer for improving the level of explainability of a GNN directly at training time by training the GNNs and the explainer in turn.
* **RG-Explainer**(Zhang et al., 2019): RG-Explainer is an RL-enhanced explainer for GNN, which constructs the explanation subgraph by starting from a seed and sequentially adding nodes with an RL agent.
* **GNNExplainer**(Zhang et al., 2019): GNNExplainer is a post-hoc method that provides explanations for every single instance by learning an edge mask for the edges in the graph. The weight of each edge can be treated as its importance.
* **PGExplainer**(Zhang et al., 2019): PGExplainer extends GNNExplainer by adopting a deep neural network to parameterize the generation process of explanations, which enables PGExplainer to explain the graphs in a global view. It also generates the substructure graph explanation with the edge importance mask.
#### 5.1.3. Configurations
The experiment configurations are set following prior research (Kumar et al., 2017). A three-layer GCN model was trained on 80% of each dataset's instances as the target model. All explanation methods used the Adam optimizer with a weight decay of \(5e\)-4 (Kumar et al., 2017). The learning rate for GNNExplainer was initialized to 0.01, with 100 training epochs. For PGExplainer, the learning rate was set to 0.003, and the number of training epochs was 30. The weight of the mixup operation, controlled by \(\lambda\), was determined through grid search. Explanations are evaluated on all instances. When comparing MixUp-GNNExplainer and MixUp-PGExplainer with the original GNNExplainer and PGExplainer, we use the same configurations for each pair. Hyperparameters are kept at their default values for the other baselines.
#### 5.1.4. Evaluation Metrics
Due to the existence of gold standard explanations, we follow existing works (Kumar et al., 2017; Wang et al., 2018; Wang et al., 2019) and adopt AUC-ROC score on edge importance to evaluate the faithfulness of different methods. Other metrics, such as fidelity (Zhao et al., 2018), are not included because the metrics themselves are affected by the distribution shifting problem, making them unsuitable in our setting.
To quantitatively measure the distribution shifting between the original graph and the explanation graph, we use _Cosine score_ and _Euclidean distance_ to measure the distances between the graph embeddings learned by the GNN model. For the Cosine score, the range is \([-1,1]\), with 1 being the most similar and -1 being the least similar. For the Euclidean distance, the smaller, the better.
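Both measures operate on the last-layer GNN embeddings; a minimal sketch (function name ours):

```python
import torch
import torch.nn.functional as F

def embedding_distances(h, h_other):
    """Cosine score and Euclidean distance between batches of graph
    embeddings (rows), as reported in Table 2."""
    cos = F.cosine_similarity(h, h_other, dim=-1).mean()
    euc = torch.norm(h - h_other, dim=-1).mean()
    return cos.item(), euc.item()
```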
### Quantitative Evaluation (RQ1)
To answer RQ1, we compare MixupExplainer with other baseline methods in terms of the AUC-ROC score. Our approach is evaluated using the weighted vector of the graph generated by the explainers, which serves as the explanation and is compared against the ground truth to calculate the AUC-ROC score. Each experiment is conducted 10 times with random seeds. We summarize the average performances in Table 1.
As shown in Table 1, across all six datasets, with both GNNExplainer or PGExplainer as the backbone methods, MixupExplainer can consistently and significantly improve the quality of obtained explanations. Specifically, Mixup-GNNExplainer improves the AUC scores by 12.3%, on average, on the node classification datasets, and 22.6% on graph classification tasks. Similarly, MixUp-PGExplainer achieves average improvements of 5.41% and 19.4% for node/graph classification tasks, respectively. The comparisons between our MixupExplainer and the original counterparts indicate the advantage of the proposed explanation framework. In addition, MixUp-PGExplainer achieves competitive and even state-of-the-art performances compared with other sophisticated baselines, such as reinforcement learning-based RG-Explainer.
### Alleviating Distribution Shifts (RQ2)
In the previous section, we showed that our mixup approach outperforms existing explanation methods in terms of AUC-ROC. In this section, we demonstrate the existence of the distribution shifting issue and show that our proposed mixup approach alleviates it, improving explanation performance w.r.t. AUC.
**Visualizing Distribution Shifting.** In this section, we show the existence of the distribution shifting issue by visualizing the distribution vector (the output of the last layer in the well-trained GNN model \(f\)) for the original graph \(G\), the explanation from MixupExplainer \(G^{(\text{mix})}\), and the ground truth explanation \(G^{*}\) with t-Distributed Stochastic Neighbor Embedding (t-SNE) (Srivastava et al., 2017). To calculate distribution vectors, we use the output of the last GNN layer in \(f\) as the representation vector \(\mathbf{h}\) for the original graph. The ground truth explanations \(G^{*}\) and the mixup graphs from MixupExplainer, \(G^{(\text{mix})}\), are also fed into the model to obtain the corresponding representations, denoted by \(\mathbf{h}^{*}\) and \(\mathbf{h}^{(\text{mix})}\), respectively. The visualization results can be found in Figure 4. The red points represent the vectors from the original graphs \(G\); the blue points represent the vectors from the substructure explanations \(G^{*}\); and the green points represent the vectors from the mixup explanations \(G^{(\text{mix})}\). Note that for BA-2motifs, while there are multiple graphs in the dataset, with only two kinds of motifs as explanations, t-SNE shows only two blue points, which are actually multiple overlapping blue points. From Figure 4, we have the following observations:
\(\bullet\) The blue points shift away from the red points in most datasets, including both synthetic and real-world datasets. It means that the distribution shifting issue exists in most cases, where most existing work overlooked this issue.
\(\bullet\) The green points are inseparable from the red points in most
\begin{table}
\begin{tabular}{c|c c c c c c} \hline \hline & BA-Shapes & BA-Community & Tree-Circles & Tree-Grid & BA-2motifs & MUTAG \\ \hline GRAD & 0.882 & 0.750 & 0.905 & 0.612 & 0.717 & 0.783 \\ ATT & 0.815 & 0.739 & 0.824 & 0.667 & 0.667 & 0.765 \\ SubgraphX & 0.548 & 0.473 & 0.617 & 0.516 & 0.610 & 0.529 \\ MetaGNN & 0.851 & 0.688 & 0.523 & 0.628 & 0.500 & 0.680 \\ RG-Explainer & 0.985 & 0.919 & 0.787 & 0.927 & 0.657 & 0.873 \\ \hline GNNExplainer & 0.884\({}_{\pm 0.002}\) & 0.682\({}_{\pm 0.004}\) & 0.683\({}_{\pm 0.009}\) & 0.379\({}_{\pm 0.001}\) & 0.660\({}_{\pm 0.006}\) & 0.539\({}_{\pm 0.002}\) \\ + MixUp & 0.890\({}_{\pm 0.004}\) & 0.788\({}_{\pm 0.006}\) & 0.690\({}_{\pm 0.014}\) & 0.501\({}_{\pm 0.003}\) & 0.869\({}_{\pm 0.004}\) & 0.612\({}_{\pm 0.043}\) \\ (improvement) & 0.60\% & 15.5\% & 1.02\% & 32.2\% & 31.7\% & 13.5\% \\ \hline PGExplainer & 0.999\({}_{\pm 0.001}\) & 0.829\({}_{\pm 0.040}\) & 0.762\({}_{\pm 0.014}\) & 0.679\({}_{\pm 0.008}\) & 0.679\({}_{\pm 0.043}\) & 0.843\({}_{\pm 0.084}\) \\ + MixUp & 0.999\({}_{\pm 0.001}\) & 0.955\({}_{\pm 0.017}\) & 0.774\({}_{\pm 0.004}\) & 0.712\({}_{\pm 0.000}\) & 0.920\({}_{\pm 0.031}\) & 0.871\({}_{\pm 0.079}\) \\ (improvement) & 0.00\% & 15.2\% & 1.57\% & 4.86\% & 35.5\% & 3.32\% \\ \hline \hline \end{tabular}
\end{table}
Table 1. Explanation faithfulness in terms of AUC-ROC on edges under six datasets. The higher, the better. Our mixup approach achieves consistent improvements over backbone GIB-based explanation methods.
datasets. It means that the explanation from MixupExplainer aligns well with the original graph's distribution, which indicates our mixup approach's effectiveness in alleviating the distribution shifting issue.
\(\bullet\) The shifting between blue points and red points is more obvious in the MUTAG dataset, where the green points generated by MixupExplainer still align well with the red points. This shows that our mixup method works well not only on synthetic datasets but also on the real-world dataset.
**Measuring Distances.** In this section, we quantitatively assess the distribution shifting issue by measuring the distances between the distribution vector \(\mathbf{h}\) from the original graphs and the explanation subgraphs \(\mathbf{h}^{*}\) and \(\mathbf{h}^{(\text{mix})}\). We report the averaged Cosine score and the Euclidean distance between different types of representation vectors in Table 2. From the results, we can see that, on average, \(\mathbf{h}^{(\text{mix})}\) has a higher Cosine score and a smaller Euclidean distance with \(\mathbf{h}\) than \(\mathbf{h}^{*}\), indicating more similarity of distribution between \(G^{(\text{mix})}\) and \(G\) than that between \(G^{*}\) and \(G\). The smaller distances between representation vectors demonstrate that our Mixup approach can effectively alleviate the distribution shifting problem caused by the inductive bias in the prediction model \(f\). As \(G^{(\text{mix})}\) better estimates the distribution of the original graphs, MixupExplainer can consistently improve the performance of existing explainers.
**Correlation with Performance Improvements.** We quantitatively evaluate the correlation between the improvements in AUC-ROC scores of MixupExplainer over its basic counterparts and the improvements in distances achieved with our mixup approach. We calculate the improvements in AUC-ROC scores of MixUp-GNNExplainer and MixUp-PGExplainer over GNNExplainer and PGExplainer without mixup (denoted as \(\Delta_{\text{AUC}}^{\text{GNNExplainer}}\) and \(\Delta_{\text{AUC}}^{\text{PGExplainer}}\), respectively). The improvement of the average \(\text{Cosine}(\mathbf{h},\mathbf{h}^{(\text{mix})})\) over the average \(\text{Cosine}(\mathbf{h},\mathbf{h}^{*})\) is denoted by \(\Delta_{\text{Cosine}}\), and the improvement in Euclidean distance is
\begin{table}
\begin{tabular}{c|c c c c c c} \hline & BA-Shapes & BA-Community & Tree-Circles & Tree-Grid & BA-2motifs & MUTAG \\ \hline Avg. Cosine\((\mathbf{h},\mathbf{h}^{*})\) & 0.574 & 0.483 & 0.962 & 0.629 & 0.579 & 0.775 \\ Avg. Cosine\((\mathbf{h},\mathbf{h}^{(\text{mix})})\) & 0.940\(\pm\)0.005 & 0.644\(\pm\)0.006 & 0.953\(\pm\)0.006 & 0.810\(\pm\)0.004 & 0.901\(\pm\)0.000 & 0.852\(\pm\)0.006 \\ \hline Avg. Euclidean\((\mathbf{h},\mathbf{h}^{*})\) & 1.30 & 1.31 & 0.213 & 0.921 & 1.32 & 1.07 \\ Avg. Euclidean\((\mathbf{h},\mathbf{h}^{(\text{mix})})\) & 0.440\(\pm\)0.014 & 1.10\(\pm\)0.010 & 0.211\(\pm\)0.011 & 0.582\(\pm\)0.006 & 0.587\(\pm\)0.001 & 0.816\(\pm\)0.011 \\ \hline \end{tabular}
\end{table}
Table 2. The Cosine score and Euclidean distance between the distribution vectors of the original graph \(\mathbf{h}\), explanation subgraph \(\mathbf{h}^{*}\), and our mixup graph \(\mathbf{h}^{(\text{mix})}\) on different datasets. Large Cosine scores and small Euclidean distances indicate high similarities between representations. The standard deviations of Avg. \(\text{Cosine}(\mathbf{h},\mathbf{h}^{*})\) and Avg. \(\text{Euclidean}(\mathbf{h},\mathbf{h}^{*})\) are not included because they are static without random processes.
Figure 4. Visualizations of the distribution shifting issue with t-SNE on six datasets. The points are generated with the output before the last layer of the model to be explained \(f\), which is then plotted with t-SNE. The red points mean original graphs \(G\), the blue points mean substructure explanations \(G^{*}\), and the green points mean mixup explanations \(G^{(mix)}\). Green dots align well with red dots, while blue dots shift away from red dots.
\(\Delta_{\text{Euclidean}}\). Figure 5 shows the correlation between \(\Delta_{\text{AUC}}^{\text{GNNExplainer}}\), \(\Delta_{\text{AUC}}^{\text{PGExplainer}}\), \(\Delta_{\text{Cosine}}\), and \(\Delta_{\text{Euclidean}}\). We can see that all four improvements are strongly correlated with each other with statistical significance, indicating that the gains MixupExplainer achieves in explanation accuracy are due to the successful alleviation of the distribution shifting issue.
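For reference, the correlation analysis in Figure 5 can be reproduced directly from Tables 1 and 2; the vectors below are the per-dataset improvements we read off those tables for the GNNExplainer backbone (dataset order as in the tables), and the call uses SciPy's standard Pearson test.

```python
import numpy as np
from scipy.stats import pearsonr

# Per-dataset improvements computed from Tables 1 and 2 (GNNExplainer backbone):
# BA-Shapes, BA-Community, Tree-Circles, Tree-Grid, BA-2motifs, MUTAG
delta_auc = np.array([0.006, 0.106, 0.007, 0.122, 0.209, 0.073])
delta_cos = np.array([0.366, 0.161, -0.009, 0.181, 0.322, 0.077])
r, p = pearsonr(delta_auc, delta_cos)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```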
### Parameter Study (RQ3)
In this section, we investigate the hyperparameters of our approach, \(\lambda\) and \(\eta\), on the BA-2motifs dataset. The hyperparameter \(\lambda\) controls the weight on the original graph during the mixup process. We find the optimal value of \(\lambda\) by tuning it within the \([0,1]\) range. Note that with \(\lambda=0\), Eq. (7) reduces to \(\mathbf{A}_{b}\) alone, which carries no information for explaining \(G_{a}\). The experimental results can be found in Figure 6. We can see that the best performance is achieved with \(\lambda=0.1\) and that the approach consistently outperforms the best baseline with \(\lambda\in[0.05,1]\). The hyperparameter \(\eta\) is the number of cross-graph edges during mixup, indicating the connectivity between the label-dependent explanation and the label-independent subgraph. We tune it within the \([1,20]\) range on the BA-2motifs dataset. The results in Figure 7 show that the best performance is achieved with \(\eta=1\). Across different values of \(\eta\), our approach shows stable and consistently better performance than the best baseline.
## 6. Conclusion
In this work, we study the distribution shifting problem in obtaining robust explanations for GNNs, which is largely neglected by the existing GIB-based post-hoc instance-level explanation framework. With a close analysis of the explanation methods for GNNs, we highlight the distribution shifting issue induced by the existing framework. We propose a simple yet effective approach to address this issue by mixing up the explanation with a randomly sampled base graph structure. The designed algorithm can be incorporated into existing methods with little effort. Experiments validate its effectiveness, and further theoretical analysis shows that it is effective in alleviating the distribution shifting issue in graph explanation. In the future, we will seek more robust explanations. Increased robustness indicates stronger generality and could provide better class-level interpretation at the same time.
###### Acknowledgements.
The work was partially supported by NSF award #2153311. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.
Figure 5. Correlation between improvements of AUC-ROC scores in explanation performance and the improvements of distribution distances on different datasets. The value of \(r\) indicates the Pearson correlation coefficient, and the values with \(*\) indicate statistical significance for correlation, where \(***\) indicates the p-value for testing non-correlation \(p\leq 0.001\).
Figure 6. Hyperparameter analysis of \(\lambda\) on BA-2motifs with MixUp-PGExplainer. (a) The performance of explanation w.r.t. AUC. The blue line represents the mean AUC score with standard deviations over ten runs with different random seeds for each \(\lambda\) value. The red line represents the performance of the baseline PGExplainer. (b) The distances between \(h\) and \(h^{(\text{mix})}\) for different \(\lambda\). The blue and yellow lines represent the mean Cosine score and Euclidean distance with standard deviations, respectively.
Figure 7. Hyperparameter analysis of \(\eta\) on BA-2motifs with Mixup-PGExplainer. (a) The performance of explanation w.r.t AUC. (b) The distances between \(h\) and \(h^{(\text{mix})}\) for different \(\eta\). |
2307.00185 | Interpretable Neural Networks with Random Constructive Algorithm | This paper introduces an Interpretable Neural Network (INN) incorporating
spatial information to tackle the opaque parameterization process of random
weighted neural networks. The INN leverages spatial information to elucidate
the connection between parameters and network residuals. Furthermore, it
devises a geometric relationship strategy using a pool of candidate nodes and
established relationships to select node parameters conducive to network
convergence. Additionally, a lightweight version of INN tailored for
large-scale data modeling tasks is proposed. The paper also showcases the
infinite approximation property of INN. Experimental findings on various
benchmark datasets and real-world industrial cases demonstrate INN's
superiority over other neural networks of the same type in terms of modeling
speed, accuracy, and network structure. | Jing Nan, Wei Dai | 2023-07-01T01:07:20Z | http://arxiv.org/abs/2307.00185v3 | An Interpretable Constructive Algorithm for Incremental Random Weight Neural Networks and Its Application
###### Abstract
Incremental random weight neural networks (IRWNNs) have gained attention in view of their easy implementation and fast learning. However, a significant drawback of IRWNNs is that the relationship between the hidden parameters (nodes) and the residual error (model performance) is difficult to interpret. To address the above issue, this article proposes an interpretable constructive algorithm (ICA) with a geometric information constraint. First, based on the geometric relationship between the hidden parameters and the residual error, an interpretable geometric information constraint is proposed to randomly assign the hidden parameters. Meanwhile, a node pool strategy is employed to obtain hidden parameters that are more conducive to convergence from among the hidden parameters satisfying the proposed constraint. Furthermore, the universal approximation property of the ICA is proved. Finally, a lightweight version of ICA is presented for large-scale data modeling tasks. Experimental results on six benchmark datasets and a numerical simulation dataset demonstrate that the ICA outperforms other constructive algorithms in terms of modeling speed, model accuracy, and model network structure. Besides, two practical industrial application cases are used to validate the effectiveness of ICA in practical applications.
Incremental random weight neural networks, interpretable constructive algorithm, geometric information constraint, universal approximation property, large-scale data modeling.
## I Introduction
Neural networks (NNs), a promising computing paradigm that thoroughly differs from traditional model-based computing, can learn patterns from complex data amazingly well. Therefore, it should not be surprising that NNs are applied in a variety of research fields [1, 2, 3]. The most popular NNs are deep neural networks (DNNs) and flat neural networks (FNNs). DNNs realize end-to-end learning by organically combining unsupervised layer-by-layer pre-training with supervised fine-tuning [4]. This learning approach gives DNNs great potential in expressive power and generalization ability, but it also leads to a time-consuming training process. Recently, due to their universal approximation ability, RWNNs, as a typical representative of FNNs, have been receiving increasing interest [5, 6, 7, 8]. RWNNs are characterized by a two-step training paradigm: randomly assigning the hidden parameters, and evaluating the output weights by solving a system of linear equations.
Despite the aforementioned advantages, little is known about how the network structure of RWNNs fits a given modeling task. Too large a network structure will result in poor generalization, while too small a network structure will cause insufficient learning ability. Constructive algorithms always begin with a small network structure (usually one hidden node) and dynamically grow the network by adding new hidden nodes incrementally until the requirements are met [9]. This means that constructive algorithms are likely to offer a smaller network structure for modeling tasks [10]. Therefore, constructive versions of RWNNs (incremental RWNNs, IRWNNs) have been successfully applied to data modeling tasks [11, 12, 13].
From the perspective of probability theory, randomly generated hidden parameters are not necessarily suitable for IRWNNs. Therefore, a natural question arises: what hidden parameters are good for IRWNNs? Based on an algebraic study of multidimensional nonlinear functions, [14] proved that the relationship between input samples and hidden parameters can be expressed by nonlinear weight equations. [15] showed that there exists a supervisory mechanism between the hidden parameters and the input samples for better network performance. [16] proposed a constructive algorithm with a supervisory mechanism to randomly assign hidden parameters in a dynamically adjusted interval. [17] proposed a hidden parameter generation approach by analyzing the scope of the input samples and the activation function. Recently, [18] proposed RWNNs with compact incremental inequality constraints, i.e., CIRWN, to improve the quality of hidden parameters. Although these approaches further improve the potential of IRWNNs, very little is known about how the hidden parameters accomplish their goals. That is, it is difficult to visualize the influence of each hidden parameter on the residual error (network performance). At present, how to interpret the predicted behavior of NNs to improve interpretability is a meaningful and important topic [19, 20]. Thus, further research on interpretable constructive algorithms is important and necessary for RWNNs.
Motivated by the above analysis, this paper proposes an interpretable constructive algorithm (ICA) to help people understand the nature behind the predicted behavior of RWNNs. The main contributions are listed below:
1) The geometric relationship between the hidden parameters and the residual error is employed to build an interpretable geometric information constraint for assigning the randomized hidden parameters in the incremental construction process, and the theoretical analysis is fully discussed.
2) A node pool strategy is developed to further improve the quality of the hidden nodes by searching the hidden parameters that are more conducive to convergence.
3) Using different calculation methods of network output weights, two algorithm implementations, namely ICA and ICA+, are proposed.
The remainder of the article is organized as follows. Section 2 briefly reviews RWNNs and the constructive algorithms. Section 3 proposes an interpretable constructive algorithm and describes it in detail. In section 4, a numerical simulation dataset, six real-world datasets, an ore grinding semi-physical simulation platform, and a gesture recognition system are considered to evaluate the effectiveness and efficiency of the proposed ICA and ICA+. Finally, conclusions are drawn in section 5.
## II Preliminaries
### _Random Weight Neural Networks_
RWNNs can be regarded as flat networks in which all the hidden parameters (the input weights and biases) are randomly assigned from a fixed interval and remain fixed during the training process. The output weights are evaluated by solving a system of linear equations. The theory of RWNNs is described as follows.
For a target function \(f:R^{d}\to R^{m}\), the RWNNs with \(L\) hidden nodes can be written as \(f_{L}=H\beta\), where \(H=\left[g_{1}\left(\omega_{1}^{\mathrm{T}}\cdot x+b_{1}\right),\cdots,g_{L}\left(\omega_{L}^{\mathrm{T}}\cdot x+b_{L}\right)\right]\), \(\mathrm{T}\) denotes the matrix transpose, \(x\) is the input sample, and \(\omega_{j}\) and \(b_{j}\) are the input weights and bias of the \(j\)-th hidden node, respectively, \(j=1,\cdots,L\). \(g_{j}\) denotes the nonlinear activation function of the \(j\)-th hidden node. The output weights \(\beta\) are evaluated by \(\beta=H^{\dagger}f_{L}\), where \(\beta=\left[\beta_{1},\beta_{2},...,\beta_{L}\right]^{\mathrm{T}}\) and \(H^{\dagger}\) denotes the Moore-Penrose generalized inverse of \(H\).
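As an illustration of this two-step paradigm, the following minimal NumPy sketch randomly assigns the hidden parameters and then solves for the output weights with the Moore-Penrose generalized inverse. The function names and the toy target are ours for illustration only; the experiments in this paper were implemented in MATLAB.

```
# A minimal sketch of the two-step RWNN training paradigm (illustrative).
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def rwnn_fit(x, f, L, lam=1.0, seed=0):
    """x: (N, d) inputs; f: (N, m) targets; L: number of hidden nodes."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-lam, lam, size=(x.shape[1], L))  # random input weights
    b = rng.uniform(-lam, lam, size=L)                # random biases
    H = sigmoid(x @ W + b)                            # hidden output matrix
    beta = np.linalg.pinv(H) @ f                      # Moore-Penrose solution
    return W, b, beta

# Usage: approximate f(x) = sin(2*pi*x) with 50 hidden nodes.
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
f = np.sin(2.0 * np.pi * x)
W, b, beta = rwnn_fit(x, f, L=50)
pred = sigmoid(x @ W + b) @ beta
print("train RMSE:", np.sqrt(np.mean((f - pred) ** 2)))
```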
### _Constructive Algorithms_
Constructive algorithms are likely to find a minimal network structure due to their incremental construction nature. Therefore, constructive algorithms have been introduced into RWNNs, yielding IRWNNs. Specifically, assuming that an IRWNN with \(L-1\) hidden nodes does not reach the termination condition, a new hidden node is generated by the following two steps:
1) The input weights \(\omega_{L}\) and bias \(b_{L}\) are randomly generated from the fixed intervals \(\left[-\lambda,\lambda\right]^{d}\) and \(\left[-\lambda,\lambda\right]\), respectively. In particular, \(\lambda\) usually takes the value 1. Then, the output vector \(g_{L}\) of the \(L\)-th hidden node is determined by maximizing \(\Delta=\frac{\left\langle e_{L-1},g_{L}\right\rangle^{2}}{\left\|g_{L}\right\|^{2}}\), where \(e_{L-1}=f-f_{L-1}=\left[e_{L-1,1},e_{L-1,2},...,e_{L-1,m}\right]\) is the current network residual error and \(f_{L-1}\) is the output of the IRWNN with \(L-1\) hidden nodes.
2) The output weight \(\beta_{L}\) of the \(L\)-th hidden node is obtained by \(\beta_{L}=\frac{\left\langle e_{L-1},g_{L}\right\rangle}{\left\|g_{L}\right\|^{2}}\).
If the new network residual error \(e_{L}=f-f_{L}\) does not reach the predefined residual error, a new hidden node is added, until either the predefined residual error or the maximum number of hidden nodes is reached.
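The two steps above can be sketched as a simple loop; the code below assumes a scalar output (\(m=1\)) and uses names of our own choosing, so it is an illustration of the construction process rather than a reference implementation.

```
# A minimal sketch of the IRWNN incremental construction loop (m = 1).
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def irwnn_fit(x, f, L_max=100, tol=1e-2, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    e = f.copy()                        # current residual, e_0 = f
    nodes, betas = [], []
    for _ in range(L_max):
        w = rng.uniform(-lam, lam, size=x.shape[1])   # random input weights
        b = rng.uniform(-lam, lam)                    # random bias
        g = sigmoid(x @ w + b)          # output vector of the new node
        beta = (e @ g) / (g @ g)        # beta_L = <e_{L-1}, g_L> / ||g_L||^2
        e = e - beta * g                # e_L = e_{L-1} - beta_L * g_L
        nodes.append((w, b))
        betas.append(beta)
        if np.linalg.norm(e) < tol:     # predefined residual error reached
            break
    return nodes, betas, e
```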
## III Interpretable Constructive Algorithm
In this section, the interpretable geometric information constraint is constructed based on the geometric relationship between the residual error and the hidden parameters, and the universal approximation property under this constraint is guaranteed by combining it with the residual error. In addition, a node pool strategy is employed to obtain hidden parameters that are more conducive to convergence. Finally, two different algorithm implementations are proposed, namely ICA and ICA+.
### _Interpretable Geometric Information Constraint_
**Theorem 1**: _Suppose that span(\(\Gamma\)) is dense in \(L^{2}\) and \(\forall g\in\Gamma\), \(0<\left\|g\right\|<v\) for some \(v\in R\). Given \(0<\sigma<1\), \(\sigma=\sigma+rand\left(1-\sigma,1\right)\), \(\tau=\frac{1+\sigma L}{1+L}\) and \(\gamma_{L}\geq\left(1-\tau\right)\). If \(g_{L}\) is randomly generated under interpretable geometric information constraint_
\[\cos^{2}\theta_{L-1}\geq\gamma_{L}\left\langle e_{L-1},e_{L-1}\right\rangle \tag{1}\]
The output weights \(\beta_{L}\) are evaluated by \(\beta_{L}\)=\(\frac{\left(e_{L-1},g_{L}\right)}{\left\|g_{L}\right\|^{2}}\). Then, we have \(\lim_{L\rightarrow+\infty}\left\|e_{L}\right\|=0\).
**Proof:**
Based on the above analysis, we have that
\[\begin{array}{l}\left\|e_{L}\right\|^{2}-\left\|e_{L-1}\right\|^{2}\\ =\left\|e_{L-1}-\beta_{L}g_{L}\right\|^{2}-\left\|e_{L-1}\right\|^{2}\\ =-2\left\langle e_{L-1},\beta_{L}g_{L}\right\rangle+\left\langle\beta_{L}g_{L},\beta_{L}g_{L}\right\rangle\\ =-\frac{\left\langle e_{L-1},g_{L}\right\rangle^{2}}{\left\|g_{L}\right\|^{2}}\\ \leq 0\end{array} \tag{2}\]
This proves that the residual error \(\left\|e_{L}\right\|\) is monotonically decreasing as \(L\) grows.
It follows from Eq. (1) and Eq. (2) that
\[\begin{array}{l}\left\|e_{L}\right\|^{2}-\tau\left\|e_{L-1}\right\|^{2}\\ =\sum\limits_{q=1}^{m}\left\langle e_{L-1,q}-\beta_{L,q}g_{L},\,e_{L-1,q}-\beta_{L,q}g_{L}\right\rangle-\sum\limits_{q=1}^{m}\tau\left\langle e_{L-1,q},e_{L-1,q}\right\rangle\\ =(1-\tau)\sum\limits_{q=1}^{m}\left\|e_{L-1,q}\right\|^{2}-\sum\limits_{q=1}^{m}\frac{\left\langle e_{L-1,q},g_{L}\right\rangle^{2}}{\left\|g_{L}\right\|^{2}}\\ \leq\gamma_{L}\sum\limits_{q=1}^{m}\left\|e_{L-1,q}\right\|^{2}-\sum\limits_{q=1}^{m}\frac{\left\langle e_{L-1,q},g_{L}\right\rangle^{2}}{\left\|g_{L}\right\|^{2}}\\ =\gamma_{L}\left\|e_{L-1}\right\|^{2}-\frac{\left\langle e_{L-1},g_{L}\right\rangle^{2}}{\left\|g_{L}\right\|^{2}}\end{array} \tag{3}\]
According to \(e_{L}=e_{L-1}-\beta_{L}g_{L}\) and \(\beta_{L}=\frac{\left\langle e_{L-1},g_{L}\right\rangle}{\left\|g_{L}\right\|^{2}}\), we have
\[\begin{array}{l}\left\langle e_{L},g_{L}\right\rangle\\ =\left\langle e_{L-1}-\beta_{L}g_{L},g_{L}\right\rangle\\ =\left\langle e_{L-1},g_{L}\right\rangle-\beta_{L}\left\langle g_{L},g_{L} \right\rangle\\ =\left\langle e_{L-1},g_{L}\right\rangle-\frac{\left\langle e_{L-1},g_{L} \right\rangle}{\left\|g_{L}\right\|^{2}}\left\langle g_{L},g_{L}\right\rangle \\ =0\end{array} \tag{4}\]
Eq. (4) thus means that \(e_{L}\perp g_{L}\). It can easily be observed that \(e_{L}\), \(e_{L-1}\) and \(\beta_{L}g_{L}\) satisfy the geometric relationship shown in Fig. 1.
In addition, based on \(f_{L-1}=\sum\limits_{j=1}^{L-1}\beta_{j}g_{j}\), we have
\[\begin{array}{l}\sum\limits_{j=1}^{L-1}\beta_{j}\left\langle e_{L-1},g_{j} \right\rangle\\ =\left\langle e_{L-1},\sum\limits_{j=1}^{L-1}\beta_{j}g_{j}\right\rangle\\ =\left\langle e_{L-1},e_{L-1}+f_{L-1}\right\rangle\\ =\left\langle e_{L-1},e_{L-1}\right\rangle+\left\langle e_{L-1},f_{L-1}\right\rangle \\ =\left\langle e_{L-1},e_{L-1}\right\rangle\\ =\left\|e_{L-1}\right\|^{2}\end{array} \tag{5}\]
where \(f_{L-1}\) is orthogonal to \(e_{L-1}\). Thus, we have
\[\begin{array}{l}\exists\beta_{j}:\ \beta_{j}\left\langle e_{L-1},g_{j}\right\rangle\geq\frac{\left\|e_{L-1}\right\|^{2}}{L-1}\\ \Longleftrightarrow\beta_{j}\left\|g_{j}\right\|\frac{\left\langle e_{L-1},g_{j}\right\rangle}{\left\|g_{j}\right\|}\geq\frac{\left\|e_{L-1}\right\|^{2}}{L-1}\\ \Longrightarrow\frac{\left|\left\langle e_{L-1},g_{j}\right\rangle\right|}{\left\|g_{j}\right\|}\geq\frac{\left\|e_{L-1}\right\|^{2}}{\left(L-1\right)\beta_{j}\left\|g_{j}\right\|}\end{array} \tag{6}\]
where \(\frac{\left\|e_{L-1}\right\|^{2}}{L-1}\) is the average of \(\left\|e_{L-1}\right\|^{2}\) over the \(L-1\) terms; by the pigeonhole principle, at least one term of the sum in Eq. (5) must reach this average.
According to the geometric relationship (Fig. 1) and Eq. (6), the following equation is obtained
\[\frac{\left|\left\langle g_{j},e_{L-1}\right\rangle\right|}{\left\|g_{j}\right\|}=\left\|e_{L-1}\right\|\cos\theta_{L-1} \tag{7}\]
It follows from Eq. (6) and (7) that
\[\cos^{2}\!\theta_{L-1}\geq\varphi\!\left\|e_{L-1}\right\|^{2} \tag{8}\]
where \(0<\varphi=\frac{1}{\left(\left(L-1\right)\beta_{j}\left\|g_{j}\right\|\right)^ {2}}<1\). \(\varphi\) is sufficiently small when \(L-1\) is very large.
Based on Eq. (3) and Eq. (8), the parameter \(\gamma_{L}\) is directly related to whether the residual error converges. As the modeling process proceeds, the residual error becomes smaller, which makes the configuration task on \(\omega_{L}\) and \(b_{L}\) more challenging. Therefore, the parameter \(\gamma_{L}\) is designed as a dynamic value to ensure that Eq. (9) holds.
\[\varphi\geq\gamma_{L} \tag{9}\]
It follows from Eq. (1), (3), (8), and (9) that
\[\begin{array}{l}\left\|e_{L}\right\|^{2}-\tau\!\left\|e_{L-1}\right\|^{2}\\ \leq\gamma_{L}\!\left\|e_{L-1}\right\|^{2}-\frac{\left\langle e_{L-1},g_{L} \right\rangle^{2}}{\left\|g_{L}\right\|^{2}}\\ \leq\gamma_{L}\!\left\|e_{L-1}\right\|^{2}-\cos^{2}\!\theta_{L-1}\\ \leq 0\end{array} \tag{10}\]
Then, we have that \(\lim_{L\rightarrow+\infty}\left\|e_{L}\right\|=0\).
**Remark 1:** According to Fig. 1, the complex black-box relationship between \(e_{L-1}\) and \(g_{L}\) can be visualized using \(\theta_{L-1}\). In this sense, Eq. (1) possesses a degree of interpretability.
### _Node Pool Strategy_
Although the interpretable geometric information constraint ensures that the constructed network has the universal approximation property, the hidden parameters (\(g_{L}\)) are generated randomly in a single draw, which may not make the network residual decrease quickly. As a result, Eq. (1) is optimized using the node pool strategy as
\[\left(\cos^{2}\theta_{L-1}\right)_{\max}\geq\gamma_{L}\left\langle e_{L-1},e_{L-1}\right\rangle \tag{11}\]
**Remark 2:** Eq. (11) directly selects the hidden parameters that minimize the network residual error from many candidates (the node pool). However, the traditional output weight calculation method (\(\beta_{L}=\frac{\left\langle e_{L-1},g_{L}\right\rangle}{\left\|g_{L}\right\|^{2}}\)) may lead to slower convergence. Therefore, two effective methods are designed to evaluate the output weights, namely, 1) global optimization and 2) dynamic stepwise updating.
Fig. 3: Spatial geometry construction process of ICA.
Fig. 2: Network structure of ICA.
### _Algorithm Implementations_
In this section, two different algorithm implementations, termed ICA and ICA+, are reported. The network structure and spatial geometry construction process of ICA are shown in Fig. 2 and Fig. 3.
#### III-C1 ICA
For a target function \(f:R^{d}\to R^{m}\), assume that an ICA with \(L-1\) hidden nodes has been constructed, i.e., \(f_{L-1}=\sum_{j=1}^{L-1}\beta_{j}g_{j}\left(\omega_{j}^{T}\cdot x+b_{j}\right)\). If the generated \(g_{L}\) makes the interpretable geometric information constraint Eq. (11) hold, and the output weights are given by
\[\beta\!\!=\!\arg\min_{\beta}\left\|f-\sum_{j=1}^{L}\beta_{j}g_{j}\right\| \tag{12}\]
Then, we have that \(\lim_{L\rightarrow\infty}\left\|f-f_{L}\right\|\!=\!0\), where \(f_{L}=f_{L-1}+\beta_{L}g_{L}\).
Rearranging Eq. (12) yields the following matrix form:
\[\beta=H^{\dagger}f_{L} \tag{13}\]
where \(\beta=\left[\beta_{1},\beta_{2},...,\beta_{L}\right]^{\mathrm{T}}\), \(H=\left[g_{1},g_{2},...,g_{L}\right]\), \(H^{\dagger}\) denotes the Moore-Penrose generalized inverse of \(H\).
**Remark 3:** The calculation of the Moore-Penrose generalized inverse involves the SVD, which greatly increases the computational cost. The problem is even more acute when dealing with large-scale data modeling tasks. To solve this problem, using the Greville iteration theory [21, 22, 23], the ICA is extended to a lightweight version, called ICA+.
#### III-C2 ICA+
For a target function \(f:R^{d}\to R^{m}\), assume that an ICA network with \(L-1\) hidden nodes has been constructed, i.e., \(f_{L-1}=\sum_{j=1}^{L-1}\beta_{j}g_{j}\left(\omega_{j}^{T}\cdot x+b_{j}\right)\), and that the generated \(g_{L}\) makes Eq. (11) hold. Let \(H_{L-1}=\left[g_{1},g_{2},...,g_{L-1}\right]\) denote the output matrix of the hidden layer with \(L-1\) nodes, and let \(H_{L}=\left[H_{L-1},g_{L}\right]\) represent the output matrix of the hidden layer with \(L\) nodes. Based on the Greville iteration theory, \(H_{L}^{\dagger}\) can be obtained by
\[H_{L}^{\dagger}\!\!=\!\left[\begin{array}{c}H_{L-1}^{\dagger}-d_{L}b_{L}^{ \mathrm{T}}\\ b_{L}^{\mathrm{T}}\end{array}\right] \tag{14}\]
where \(d_{L}=H_{L-1}^{\dagger}g_{L}\), \(c_{L}=g_{L}-H_{L-1}d_{L}\), and \(b_{L}^{\mathrm{T}}=\left\{\begin{array}{ccl}\left(c_{L}\right)^{\dagger}&if&c_{L}\neq 0\\ \left(1+d_{L}^{\mathrm{T}}d_{L}\right)^{-1}d_{L}^{\mathrm{T}}H_{L-1}^{\dagger}&if&c_{L}=0\end{array}\right.\).
Then, the output weights can be derived as
\[\beta\!\!=\!\!\left[\begin{array}{c}\beta^{previous}-d_{L}b_{L}^{\mathrm{ T}}f\\ b_{L}^{\mathrm{T}}f\end{array}\right] \tag{15}\]
where \(\beta^{previous}\) denotes the output weights before the new hidden node is added. The specific implementation steps of ICA and ICA+ are shown in the pseudocode below.
```
Initialize: x = {x_1, x_2, ..., x_N}, x_i in R^d is the input; f = {f_1, f_2, ..., f_N}, f_i in R^m is the output;
  T_max: maximum times of random configuration; L_max: maximum number of iterations;
  l: expected error tolerance; zeta = {lambda_min : lambda : lambda_max}: a set of positive scalars;
  e_0 = f; 0 < sigma < 1; W = empty; L = 1.
While L <= L_max and ||e_0|| > l, do
  // Hidden Parameter Configuration
  For lambda in zeta, do
    For k = 1, 2, ..., T_max, do
      Randomly assign the input weights w_L and the bias b_L from [-lambda, lambda]^d and [-lambda, lambda], respectively;
      Calculate g_L = g(w_L^T x + b_L) and cos(theta_{L-1}) = <e_{L-1}, g_L> / (||e_{L-1}|| ||g_L||);
      Save w_L and b_L in W;
    End For
  End For
  Calculate tau and gamma_L;
  If (cos^2(theta_{L-1}))_max >= gamma_L <e_{L-1}, e_{L-1}>, do
    Find w_L*, b_L* that maximize cos^2(theta_{L-1}) in W;
    Set H_L = [g_1, g_2, ..., g_L];
  Else
    Renew sigma := sigma + rand(1 - sigma, 1);
    Return and regenerate the hidden parameters;
  End If
  // Output Weight Evaluation
  Calculate beta using Eq. (13) (ICA) or Eq. (15) (ICA+);
  Obtain e_L = f - H_L beta;
  Update e_0 = e_L and L = L + 1;
End While
```
**Algorithm** ICA(+)
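For the output weight evaluation step of ICA+, the following NumPy sketch illustrates the Greville-type rank-one pseudoinverse update in Eq. (14) and the corresponding output weight update in Eq. (15). The shapes and names are ours; the pseudoinverse of the first hidden node must be initialized with an ordinary pinv call before this update is applied.

```
# A sketch of the Greville update used by ICA+ (illustrative, not the
# authors' MATLAB implementation).
import numpy as np

def greville_update(H_prev, H_pinv_prev, g, beta_prev, f):
    """Append column g to H_{L-1} and update H^+ (Eq. (14)) and beta (Eq. (15)).

    H_prev: (N, L-1), H_pinv_prev: (L-1, N), g: (N,),
    beta_prev: (L-1, m), f: (N, m).
    """
    d = H_pinv_prev @ g                  # d_L = H_{L-1}^+ g_L
    c = g - H_prev @ d                   # c_L = g_L - H_{L-1} d_L
    if np.linalg.norm(c) > 1e-12:
        b_T = np.linalg.pinv(c.reshape(-1, 1))       # (c_L)^+, shape (1, N)
    else:
        b_T = (d @ H_pinv_prev).reshape(1, -1) / (1.0 + d @ d)
    H_pinv = np.vstack([H_pinv_prev - np.outer(d, b_T), b_T])
    beta = np.vstack([beta_prev - np.outer(d, b_T @ f), b_T @ f])
    return np.column_stack([H_prev, g]), H_pinv, beta
```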
## IV Experimental Results
In this section, we present the performance of the proposed ICA and ICA+ as well as that of IRWNNs and CIRWN on a function approximation dataset, six benchmark datasets, an ore grinding semi-physical simulation platform, and a gesture recognition system. The function approximation dataset was randomly generated by Eq. (16), defined on [0, 1]. The specifications of the seven datasets can be found in TABLE I. In addition, the experimental parameters of all algorithms are summarized in TABLE II. These experimental parameter settings are the best solutions obtained from multiple experiments.
\[f\left(x\right)=\frac{1}{\left(\left(x-0.3\right)^{2}+0.01\right)}+\frac{1}{ \left(\left(x-0.9\right)^{2}+0.04\right)}-6 \tag{16}\]
where \(x\in\left[0,1\right]\).
All comparison experiments were implemented in MATLAB 2020a running on a PC with a 3.00 GHz Core i7 CPU and 8 GB RAM. Each experiment was repeated 30 times, and the average over the 30 runs is reported as the final result. The sigmoid function \(g\left(u\right)=\frac{1}{1+\mathrm{exp}(-u)}\) is employed as the activation function of the four randomized algorithms. In addition, modeling accuracy, root mean square error (RMSE), and efficiency (the time spent on building the network) were employed to measure the performance of all randomized algorithms.
\[\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(f_{i}-\tilde{f}_{i}\right)^ {2}} \tag{17}\]
where \(f_{i}\) denotes the real value of the output, \(\tilde{f}_{i}\) is the prediction, and \(N\) is the number of samples.
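For reference, the snippet below reproduces the target function in Eq. (16) and the RMSE metric in Eq. (17); the sample size and the constant baseline are illustrative choices of ours.

```
# The function-approximation target, Eq. (16), and the RMSE metric, Eq. (17).
import numpy as np

def target(x):
    return 1.0 / ((x - 0.3) ** 2 + 0.01) + 1.0 / ((x - 0.9) ** 2 + 0.04) - 6.0

def rmse(f, f_hat):
    return np.sqrt(np.mean((f - f_hat) ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1000)       # randomly generated on [0, 1]
f = target(x)
print(rmse(f, np.full_like(f, f.mean())))  # RMSE of a constant baseline
```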
Fig. 4 shows the RMSE convergence curves of the four algorithms as functions of the number of hidden nodes. Both ICA and ICA+ require fewer hidden nodes to achieve RMSE convergence. These results show that the two proposed randomized algorithms have an obvious advantage in terms of structural compactness. Fig. 5 shows the kernel density function (KDF) of the estimated error when the models achieve the expected error tolerance. It can be seen from Fig. 5 that ICA and ICA+ perform better than IRWNNs and CIRWN because the KDFs of ICA and ICA+ approximate the real data distribution. This means that the proposed ICA and ICA+ have better prediction ability. Fig. 6 describes the influence of different values of the parameter \(\lambda\) on the KDF performance of ICA. It is evident that different \(\lambda\) values have favorable or unfavorable effects on the KDF performance of ICA, which shows that \(\lambda\) is an important parameter. Therefore, to achieve better KDF performance, \(\lambda\) should not remain fixed.
#### IV-A2 Benchmark Datasets
In this section, the performance of ICA, ICA+, IRWNNs, and CIRWN is measured on six benchmark datasets. These benchmark datasets are mainly from KEEL and UCI, and their details are given in TABLE I. The experimental parameter information of the four randomized algorithms is given in TABLE II. TABLE III shows the time, training RMSE, and testing RMSE results of these four randomized algorithms on the six benchmark datasets. As shown in TABLE III, the training RMSE of both ICA and ICA+ is lower than that of CIRWN on most datasets. This means that the interpretable geometric information constraint can help generate better quality hidden parameters. Meanwhile, compared with ICA, ICA+ is weaker in terms of training and testing RMSE performance. This is because ICA+ uses an iterative update method to obtain the output weights, which depends heavily on the quality of the output weights of the first hidden node. In contrast, ICA uses the Moore-Penrose generalized inverse method, which enables it to obtain the globally optimal output weights after each node is added.
When the same number of hidden nodes is added, the training time of the proposed ICA and ICA+ is lower than that of CIRWN, especially for ICA+. Compared with ICA, ICA+ reduced the training time by 14.29%, 50.56%, 67.58%, 43.31%, 67.02%, and 75.96% on Iris, Segment, HAR, Compactiv, Concrete, and Winequality, respectively. Therefore, ICA+ is superior to the other randomized algorithms in terms of being lightweight. Fig. 7 depicts the effect of the hidden node pool size \(T_{\max}\) on the RMSE of ICA. It shows that too large or too small a \(T_{\max}\) leads to an increase in RMSE. It is worth noting that we do not show the training time; in fact, \(T_{\max}\) is related to the efficiency of the network because it controls the number of hidden parameters obtained from the random interval. In our experiments, this parameter was set with careful trade-offs.

Fig. 4: Convergence curves of RMSE for ICA, ICA+, CIRWN and IRWNNs.

Fig. 5: KDF of ICA, ICA+, CIRWN and IRWNNs.

Fig. 6: Impact of different \(\lambda\) on KDF.
### _Hand Gesture Recognition Case_
Hand gesture recognition (HGR) is a hot topic in pattern recognition due to its wide range of applications, such as virtual reality, health monitoring, and smart homes [24]. In this section, we evaluate the performance of ICA and ICA+ on our own developed HGR system. The HGR system framework is shown in Fig. 8 and includes both a hardware module and a software module. The hardware module consists of gesture data acquisition and data transmission components, shown in Fig. 9. The software module includes feature extraction and modeling/recognition components, shown in Fig. 10. A gesture dataset with a total sample size of 5136, 64 features, and 24 categories was obtained through the feature extraction of the software module [25]. The dataset was divided into a training dataset and a testing dataset.
#### IV-B1 Parameter Configuration
For IRWNNs, the hidden parameters are randomly assigned from the fixed interval [-150, 150]. The random parameters of the other three randomized algorithms are selected from the variable interval \(\zeta\)=\(\{150:10:200\}\). For ICA, ICA+, and CIRWN, the maximum number of iterations is set to \(L_{\max}\) = 500, and the maximum times of random configuration is set to \(T_{\max}\) = 20.
#### IV-B2 Comparison and Discussion
The average RMSE of the four algorithms over thirty experiments on the HGR testing dataset is displayed in Fig. 11. It can be seen that ICA and ICA+ have good stability in terms of RMSE. As shown in Fig. 11, the difference between the maximum RMSE and the minimum RMSE for IRWNNs and CIRWN is 0.15 and 0.02, respectively. TABLE IV shows the experimental results of IRWNNs, ICA, ICA+, and CIRWN on the HGR system. It can be seen from TABLE IV that, compared with IRWNNs and CIRWN, ICA and ICA+ have a clear advantage in terms of training time and classification accuracy. Based on the comparisons and analyses of these results, we can conclude that the proposed ICA and ICA+ are more effective than IRWNNs and CIRWN for HGR tasks.
Fig. 8: Framework diagram of gesture recognition system.
Fig. 10: Software module.
Fig. 7: Impact of node pool on RMSE.
Fig. 9: Hardware module.
### _Ore Grinding Case_
## V Conclusion
In this paper, an interpretable constructive algorithm (ICA) is proposed to visualize the contribution of each hidden parameter to the residual error, thereby improving the interpretability of the predicted behavior of RWNNs. In ICA, the hidden parameters are randomly assigned under the interpretable geometric information constraint with a node pool strategy. Furthermore, ICA is extended to ICA+ in order to reduce the computational cost. In particular, the difference between ICA+ and ICA is that ICA+ uses a more lightweight and efficient iterative update method to evaluate the output weights, while ICA uses a globally optimal approach. Experimental results on seven benchmark datasets, a hand gesture recognition system, and an ore grinding semi-physical simulation platform show that ICA and ICA+ can effectively reduce computational consumption and achieve better network performance than other constructive algorithms.
|
2302.09019 | Tensor Networks Meet Neural Networks: A Survey and Future Perspectives | Tensor networks (TNs) and neural networks (NNs) are two fundamental data
modeling approaches. TNs were introduced to solve the curse of dimensionality
in large-scale tensors by converting an exponential number of dimensions to
polynomial complexity. As a result, they have attracted significant attention
in the fields of quantum physics and machine learning. Meanwhile, NNs have
displayed exceptional performance in various applications, e.g., computer
vision, natural language processing, and robotics research. Interestingly,
although these two types of networks originate from different observations,
they are inherently linked through the common multilinearity structure
underlying both TNs and NNs, thereby motivating a significant number of
intellectual developments regarding combinations of TNs and NNs. In this paper,
we refer to these combinations as tensorial neural networks (TNNs), and present
an introduction to TNNs in three primary aspects: network compression,
information fusion, and quantum circuit simulation. Furthermore, this survey
also explores methods for improving TNNs, examines flexible toolboxes for
implementing TNNs, and documents TNN development while highlighting potential
future directions. To the best of our knowledge, this is the first
comprehensive survey that bridges the connections among NNs, TNs, and quantum
circuits. We provide a curated list of TNNs at
\url{https://github.com/tnbar/awesome-tensorial-neural-networks}. | Maolin Wang, Yu Pan, Zenglin Xu, Xiangli Yang, Guangxi Li, Andrzej Cichocki | 2023-01-22T17:35:56Z | http://arxiv.org/abs/2302.09019v2 | # Tensor Networks Meet Neural Networks: A Survey and Future Perspectives
###### Abstract
Tensor networks (TNs) and neural networks (NNs) are two fundamental types of data modeling approaches. TNs have been proposed as a solution to the curse of dimensionality faced by large-scale tensors by converting an exponential number of dimensions to polynomial complexity. Thus, they have attracted many studies in the fields of quantum physics and machine learning. On the other hand, NNs are computing systems inspired by the biological NNs that constitute human brains. Recently, NNs and their variants have achieved outstanding performance in various applications, e.g., computer vision, natural language processing, and robotics research. Interestingly, although these two types of networks come from different observations, they are inextricably linked via the common intrinsic multilinearity structure underlying both TNs and NNs. Consequently, a significant number of intellectual sparks regarding combinations of TNs and NNs have burst out. The combinations described as "tensor networks meet neural networks" are termed tensorial neural networks (TNNs) in this paper. This survey introduces TNNs based on three aspects. 1) Network Compression. TNs can greatly reduce parameters in NNs and satisfy the idea of constructing effective NNs. 2) Information Fusion. This can naturally and effectively enhance NNs with their ability to model the interactions among multiple modalities, views, or sources of various data. 3) Quantum Circuit Simulation. TNs can assist in designing and simulating quantum neural networks (QNNs). This survey also investigates methods for improving TNNs, examines useful toolboxes for implementing TNNs, and attempts to document TNN development and highlight its potential future directions. To the best of our knowledge, this is the first comprehensive survey to bridge the connections among NNs, TNs, and quantum circuits. We provide a curated list of TNNs at [https://github.com/tnbar/awesome-tensorial-neural-networks](https://github.com/tnbar/awesome-tensorial-neural-networks).
Tensor Networks, Neural Networks, Network Compression, Information Fusion, Quantum Circuit Simulation
## 1 Introduction
Tensors are high-order arrays that represent the multiway interactions among multiple modal sources. In contrast, vectors (i.e., first-order tensors) and matrices (i.e., second-order tensors) are accessed in only one or two modes, respectively. As a common data type, tensors have been widely observed in several scenarios [1, 2, 3, 4]. For instance, functional magnetic resonance imaging (fMRI) samples are inherently fourth-order tensors that are composed of three-dimensional voxels that change over time [5, 6, 7]. In quantum physics, variational wave functions used to study many-body quantum systems are also high-order tensors [8, 9]. For spatiotemporal traffic analysis, road flow/speed information, which is collected from multiple roads over several weeks, can also be structured as a third-order tensor (road segment\(\times\)day\(\times\)time of day) [10]. However, for higher-order tensors, when the number of modes increases, the total number of elements in the tensors increases exponentially, resulting in a catastrophe when storing and processing tensors. Such a phenomenon is also recognized as the "curse of dimensionality" [11].
**Tensor Networks (TNs).** TNs [8, 11, 12] are generally countable collections of small-scale tensors that are interconnected by tensor contractions. These small-scale tensors are referred to as "components", "blocks", "factors", or "cores". Very large-scale tensors can be approximately represented in extremely compressed and distributed formats through TNs. Thus, it is feasible to implement distributed storage and efficient processing for high-order tensors that could not be dealt with before. By using TN methods, the curse of dimensionality can be alleviated or completely overcome [11]. Commonly used TN formats include CANDECOMP/PARAFAC (CP) [13, 14, 15], Tucker decomposition [16, 17], Block-term Tucker (BTT) decomposition [18, 19, 20], Matrix Product State (MPS)/Tensor Train (TT) decomposition [21, 22, 23, 24], Matrix Product Operators (MPO)/matrix Tensor Train (mTT) decomposition [21, 22, 23, 24], Tensor Ring (TR) decomposition [25], Tree TN/Hierarchical Tucker (HT) decomposition [26], Projected Entangled Pair State (PEPS)/Tensor Grid decomposition [8, 27, 28], Multiscale Entanglement Renormalization [29], etc. For the purpose of understanding the interconnected structures of TNs, the TN diagram was developed as a straightforward graphical representation (which is discussed in Section 2.2). A TN can provide a theoretical and computational framework for the analysis of some computationally unacceptable tasks. For example, based on the low-rank structures of TNs, Pan et al. [30] were able to solve the quantum random circuit sampling problem in 15 hours using 512 graphics processing units (GPUs); this problem was previously believed to require over 10,000 years on the most powerful classic electronic supercomputer, and its solution effectively challenged the quantum supremacy of Google's quantum computer called "Sycamore". Other applications include brain analysis [31], quantum chemistry calculation [32], human face clustering [33], dimensionality reduction [34], missing value estimation [35], latent factor analysis [36], subspace learning [37], etc.
**Neural Networks (NNs).** NNs are biologically inspired learning paradigms that enable a machine to learn knowledge from observational data through backpropagation [38, 39]. NNs that are stacked with multiple layers, i.e., deep NNs (DNNs) [40, 41], are widely used in the field of artificial intelligence due to their powerful ability to capture abundant information from deep structures. Typical types of DNNs include restricted Boltzmann machines (RBMs) [42], convolutional NNs (CNNs) [41, 43], recurrent NNs (RNNs) [44, 45], and Transformers [46, 47]. DNNs currently reach state-of-the-art performance in a wide range of applications in computer vision [48] and natural language processing [49]. For example, a number of CNN architectures such as AlexNet [50], VGGNet [51], GoogLeNet [52] and ResNet [53] won championships on the ImageNet dataset [54], demonstrating good potential for solving image classification tasks. Particularly, Alphafold [55, 56], which is a kind of Transformer architecture, can identify the structure of a protein in days, which previously took researchers years. Recently, Alphafold2 [55, 56] predicted the structures of nearly all known proteins with average atomic precision. Deep learning techniques are still pushing forward advances in a number of disciplines, including speech recognition [57], DNA mutations detection [58], structural biology [55, 56], drug discovery [59], food security [60], etc.
**Tensor Networks Meet Neural Networks.** TNs and NNs are two types of networks that come from different origins and have achieved success from different aspects, as mentioned above. Interestingly, they are closely bonded through their multilinear mathematical property rather than being orthogonal to each other [11]. Therefore, a promising approach is to integrate them via multilinearity to attain the objective that "the whole is greater than the sum of the parts." The main advantages of TNs are their compact structures, multiple entries, and close relationships with quantum mechanics, while NNs are well known for their wide applications [8, 12]. Based on these observations, it is feasible to combine TNs and NNs in three ways.
(1) Network Compression. NNs have achieved many successes in various tasks [40, 41]. However, NNs still suffer from excess linear product calculations with massive numbers of dimensions and the curse of dimensionality [78]. A promising solution for addressing this issue is to utilize the lightweight and multilinear characteristics of TNs [68, 78, 79]. In detail, TNs can decompose any tensor of NNs into smaller blocks, thereby reducing the dimensionality to linear complexity [61, 62]. For example, in comparison to utilizing a naive long short-term memory (LSTM) network for action recognition tasks, TR-LSTM [79] models, which leverage TN technology to decompose weight tensors, can compress the number of parameters by approximately 34,000 times while simultaneously outperforming naive LSTM.

TABLE I: An overview of TNNs and their utility. We first introduce the building of compact TNNs in different basic NN structures including CNNs, RNNs, Transformers, graph neural networks (GNNs) and RBMs in Section 3. Next, we explore the use of TNs in efficient information fusion methods based on tensor fusion and multimodal fusion in Section 4. Then, we discuss some applications involving TNs in quantum circuits and quantum TNNs in Section 5. Furthermore, we explain some training and implementation techniques for TNNs in Section 6. Finally, we introduce some general and powerful toolboxes for processing TNNs in Section 7.

| Category | Subcategory | Detailed Models/Techniques | Section |
| --- | --- | --- | --- |
| Network Compression | Convolutional Neural Networks | CP-CNN [61, 62, 63], Tucker-CNN [64, 65], TT-CNN [66, 67], TR-CNN [68], BTT-CNN [69], TC-CNN [70, 71], HT-TT-CNN [72], T-Net [73], TMTMT [74], PMT [75], CP-HOConv [76] | 3.1 |
| Network Compression | Recurrent Neural Networks | TT-RNN [77], BTT-RNN [69, 78], TR-RNN [79], HT-RNN [80], CP/Tucker-RNN [84], TC-RNN [70, 71], Kronecker-RNN [81], Tucker/CP-TT-RNN [82], HT-TT-RNN [72], KCP-RNN [83], Conv-TT-LSTM [84] | 3.2 |
| Network Compression | Transformer | BTT-Transformer [85], MPO-Transformer [86], Tuformer [87], Tucker-Bert [88], Hypoformer [89] | 3.3 |
| Network Compression | Graph Neural Networks | TGNNs [90], TGCN [91], DSTGNN [92] | 3.4 |
| Network Compression | Restricted Boltzmann Machines | TT-RBM [93], TR-RBM [94], Tv-RBM [95], Mv-RBM [96] | 3.5 |
| Information Fusion | Tensor Fusion Layers | TFL [90, 97, 98], LMF [99], PFN [100] | 4.1 |
| Information Fusion | Multimodal Pooling Layers | MUTAN [101], MCB [102], MLB [103], CIT [104] | 4.2 |
| Quantum Circuit | Quantum Embedding | Image-Emb [105], Language-Emb [106, 107, 108] | 5.1 |
| Quantum Circuit | Quantum Data Processing | Supervised MPS [105], Tree-TN [109], Uniform-MPS [108], LPS [110] | 5.2 |
| Quantum Circuit | Quantum TNNs | ConvAC [107, 111], TLSM [112] | 5.3 |
| Utility of TNNs | Stable Training | Mixed Precision [113], Yu Initialization [114] | 6.1 |
| Utility of TNNs | Rank Selection | PSTRN [115], TR-RL [116], CP-Bayes [117], TT-Bayes [118], TT-ADMM [119], BMF [120, 121] | 6.2 |
| Utility of TNNs | Hardware Speedup | TIE [122], LTNN [123], TT-Engine [124], Fast CP-CNN [125] | 6.3 |
| Utility of TNNs | Basic Tensor Operations | Tensorly [126], TensorTools [127], Tensor Toolbox [128], TenDeC++ [129], OSTD [130], TensorD [131], TT-Toolbox [132], Tntorch [133], TorchMPS [134], ITensor [135], T3F [136] | 7.1 |
| Utility of TNNs | Deep Model Implementations | Tensorly-Torch [126], TedNet [64] | 7.2 |
| Utility of TNNs | Quantum Tensor Simulations | Yao [139], TensorNetwork [137], lambeq [140], ITensor [135], TeD-Q [141] | 7.3 |
(2) Information Fusion. In real-world data analysis cases, it is important to model higher-order interactions in data from multimodal sources to achieve better performance [8]. However, NNs are typically used to handle the inputs of one-mode vectors, so they lack sufficient expressive power to model such higher-order interactions [101]. To solve this problem, a promising approach is to embed TNs into NNs as efficient fusion units to process multimodal data with the help of the multiple-entry property [97, 98, 100]. Taking a visual question answering (VQA) task [142] as an example, multimodal Tucker fusion (MUTAN) [101] can learn high-level interactions between textual representations and visual representations via a Tucker-format framework. As a result, MUTAN has achieved state-of-the-art performance with an efficiently parameterized low-rank structure.
(3) Quantum Circuit Simulation. TNs can act as simulators and serve as bridges between classic NNs and quantum circuits. First, many studies have suggested implementing NNs on quantum circuits to accelerate their running speeds through the ultra-parallelism of quantum computation schemes [143, 144]. However, current quantum computers are not sufficiently powerful for deploying NNs directly, which makes it difficult to verify the possible performance of quantum neural networks (QNNs) [143]. Fortunately, TNs can be effective quantum simulators on electronic computers because of the equivalences between TNs and quantum circuits [145, 8]. In detail, the input qubits and unitary operation gates in quantum circuits can be viewed as tensors. Gate connections can also be viewed as tensor contractions in TN schemes [145]. By utilizing TNs to simulate quantum circuits for NNs, a new era of QNN exploration can begin before realistically powerful quantum computers are manufactured.
We call this family of approaches that connect TNs with NNs **tensorial neural networks (TNNs)**. To the best of our knowledge, this is the first comprehensive survey to bridge the connections among NNs, TNs, and Quantum Circuits. An overview of TNNs and their utility is shown in Table I.
The remaining sections of this survey are organized as follows. Section 2 provides the fundamentals of tensor notations, tensor diagrams, and TN formats. Section 3 discusses the use of TNs for building compact TNNs. Section 4 explores efficient information fusion processes using TNNs. Section 5 discusses some basic applications of TNs in quantum circuits and TNNs. Section 6 explains some training and implementation techniques for TNNs. Section 7 introduces general and powerful toolboxes that can be used to process TNNs.
## 2 Tensor Basis
### _Tensor Notations_
A tensor [146, 147], also known as a multiway array, can be viewed as a higher-order extension of a vector (i.e., a first-order tensor) or a matrix (i.e., a second-order tensor). Like the rows and columns in a matrix, an \(N\)th-order tensor \(\boldsymbol{\mathcal{X}}\in\mathbb{R}^{I_{1}\times I_{2}\ldots\times I_{N}}\) has \(N\) modes (i.e., ways, orders, or indices) whose lengths (i.e., dimensions) are represented by \(I_{1}\) to \(I_{N}\), respectively. As shown in Table II, lowercase letters denote scalars, e.g., \(a\), boldface lowercase letters denote vectors, e.g., \(\mathbf{a}\), boldface capital letters denote matrices, e.g., \(\mathbf{A}\) and boldface Euler script letters denote higher-order tensors, e.g., \(\boldsymbol{\mathcal{A}}\). In this paper, we define a "tensor" with a wider range that includes scalars, vectors, and matrices.
### _Tensor Diagrams_
In this subsection, we introduce TN diagrams and their corresponding mathematical operations. TN diagrams were first developed by Roger Penrose [148] in the early 1970s and are now commonly used to describe quantum algorithms [8, 9] and machine learning algorithms [105, 61, 12]. In these diagrams, tensors are denoted graphically by nodes with edges [22]. TN diagrams are practical tools for the intuitive presentation and convenient manipulation of complex tensors and are therefore widely used in the tensor field. As the data and weights in the deep learning field are all tensors, tensor diagrams are also promising as general network analysis tools in this area. An overview of the basic symbols of tensors is shown in Fig. 1.
#### 2.2.1 Tensor Nodes
A tensor is denoted as a node with edges, as illustrated in Fig. 1. The number of edges denotes the modes of a tensor, and a value on an edge represents the dimension of the corresponding mode. For example, a one-edge node denotes a vector \(\mathbf{a}\in\mathbb{R}^{I}\), a two-edge node denotes a matrix \(\mathbf{A}\in\mathbb{R}^{I\times J}\) and a three-edge node denotes a tensor \(\boldsymbol{\mathcal{A}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\).
#### 2.2.2 Tensor Contraction
Tensor contraction means that two tensors are contracted into one tensor along their associated pairs of indices. As a result, the corresponding connected edges disappear while the dangling edges persist. Tensor contraction can be formulated as a tensor product:
\[\boldsymbol{\mathcal{C}}=\boldsymbol{\mathcal{A}}\times_{M+1,M+2, \ldots M+N}^{1,2,\ldots N}\boldsymbol{\mathcal{B}} \tag{1}\] \[=\sum_{i_{1},i_{2},\ldots i_{N}}\boldsymbol{\mathcal{A}}_{i_{1},i_{2},\ldots i_{N},\star}\quad\boldsymbol{\mathcal{B}}_{\star,i_{1},i_{2}, \ldots i_{N}}, \tag{2}\]
where \(\boldsymbol{\mathcal{A}}\in\mathbb{R}^{I_{1}\times\ldots I_{N}\times P_{1}\times\ldots P_{K}}\), \(\boldsymbol{\mathcal{B}}\in\mathbb{R}^{P_{K+1}\times\ldots P_{K+M}\times I_{1}\times\ldots I_{N}}\), and \(\boldsymbol{\mathcal{C}}\in\mathbb{R}^{P_{1}\times\ldots P_{K}\times P_{K+1}\cdots P_{K+M}}\).

TABLE II: Tensor notations.

| Symbol | Explanation |
| --- | --- |
| \(a\) | scalar |
| \(\mathbf{a}\) | vector |
| \(\mathbf{A}\) | matrix |
| \(\boldsymbol{\mathcal{A}}\) | tensor |
| \(A\) | dimensionality |
| \(\oplus\) | convolution operation |
| \(\circ\) | outer product operation |
| \(<\cdot,\cdot>\) | inner product of two tensors |
| \(|\cdot\rangle\) | quantum state ket vector (unit column complex vector) |
| \(\langle\cdot|\) | quantum state bra vector (unit row complex vector) |
| \(\langle\cdot|\cdot\rangle\) | inner product of two quantum state vectors |

Fig. 1: Basic symbols for TN diagrams. For more basic knowledge about TNs, refer to [8] and [11].

Fig. 1 also shows a diagram of the matrix multiplication operation, which is the most classic tensor contraction situation. The equation representation is:
\[\mathbf{C}=\mathbf{A}\times_{2}^{1}\mathbf{B}. \tag{3}\]
Tensor contractions among multiple tensors (e.g., TNs) can be computed by sequentially performing tensor contractions between each pair of tensors. It is worth mentioning that the contracting sequence must be determined to achieve better calculation efficiency [149].
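As a concrete illustration, the following NumPy snippet (ours) performs the contraction of Eqs. (1)-(2) with np.tensordot and recovers matrix multiplication, Eq. (3), as the special case with a single shared mode.

```
# Tensor contraction, Eqs. (1)-(3): shared I-modes are summed out.
import numpy as np

I1, I2, P1, P2 = 3, 4, 5, 6
A = np.random.rand(I1, I2, P1)     # A in R^{I1 x I2 x P1}
B = np.random.rand(P2, I1, I2)     # B in R^{P2 x I1 x I2}

# Contract modes (1, 2) of A with modes (2, 3) of B (1-indexed as in Eq. (1)).
C = np.tensordot(A, B, axes=([0, 1], [1, 2]))   # C in R^{P1 x P2}
assert C.shape == (P1, P2)

# Matrix multiplication, Eq. (3), is the one-shared-mode special case.
M, K, N = 2, 3, 4
X, Y = np.random.rand(M, K), np.random.rand(K, N)
print(np.allclose(np.tensordot(X, Y, axes=([1], [0])), X @ Y))  # True
```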
#### 2.2.3 Dummy Tensor
Recently, a newly designed dummy tensor was proposed by Hayashi et al. to represent convolution operations [61]. As depicted in Fig. 1, a node with star and arrow symbols denotes a dummy tensor. This operation is formulated as
\[\mathbf{y}_{j^{\prime}}=\sum_{j=0}^{\alpha-1}\sum_{k=0}^{\beta-1}\mathcal{P}_{ j,j^{\prime},k}\mathbf{a}_{j}\mathbf{b}_{k}, \tag{4}\]
where \(\mathbf{a}\in\mathbb{R}^{\alpha}\) denotes a vector that will be processed by a convolutional weight \(\mathbf{b}\in\mathbb{R}^{\beta}\), and \(\mathbf{y}\in\mathbb{R}^{\alpha^{\prime}}\) is an output. \(\mathcal{P}\in\left\{0,1\right\}^{\alpha\times\alpha^{\prime}\times\beta}\) is a binary tensor with elements defined as \(\mathcal{P}_{j,j^{\prime},k}=1\) if \(j=sj^{\prime}+k-p\) and \(0\) otherwise, where \(s\) and \(p\) represent the stride and padding size, respectively. Thus, \(\mathcal{P}\) can be applied to any two tensors to form a convolutional relationship.
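The sketch below builds the binary tensor \(\mathcal{P}\) of Eq. (4) explicitly (which is inefficient and purely illustrative) and checks that the contraction reproduces a 1-D convolution with stride \(s=1\) and padding \(p=0\).

```
# The dummy tensor P of Eq. (4): a contraction that encodes 1-D convolution.
import numpy as np

def dummy_tensor(alpha, alpha_out, beta, s=1, p=0):
    P = np.zeros((alpha, alpha_out, beta))
    for j in range(alpha):
        for j_out in range(alpha_out):
            for k in range(beta):
                if j == s * j_out + k - p:
                    P[j, j_out, k] = 1.0
    return P

a = np.arange(5.0)            # input vector, alpha = 5
b = np.array([1.0, -1.0])     # kernel, beta = 2
P = dummy_tensor(alpha=5, alpha_out=4, beta=2)   # alpha_out = alpha - beta + 1
y = np.einsum('j,jik,k->i', a, P, b)             # Eq. (4)
print(np.allclose(y, np.convolve(a, b[::-1], mode='valid')))  # True
```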
#### 2.2.4 Hyperedge
As shown in Fig. 1, we illustrate the hyperedge that was also introduced by Hayashi et al. [61]. An example of a hyperedge with a size of \(R\) can be formulated as
\[\mathbf{y}_{ijk}=\sum_{l=1}^{R}\mathbf{A}_{il}\mathbf{B}_{jl}\mathbf{C}_{kl}, \tag{5}\]
where \(\mathbf{A}\in\mathbb{R}^{I\times R},\mathbf{B}\in\mathbb{R}^{J\times R}\) and \(\mathbf{C}\in\mathbb{R}^{K\times R}\) are three matrices. \(\mathbf{y}\in\mathbb{R}^{I\times J\times K}\) denotes the results of applying a hyperedge on \(\mathbf{A}\), \(\mathbf{B}\), and \(\mathbf{C}\). A hyperedge node is simply equal to a tensor whose diagonal elements are 1. This tensor indicates the addition operation performed over several substructures (e.g., the matrices in Fig. 1). Hayashi et al. [61] showed that a tensor diagram can represent an arbitrary tensorial CNN (TCNN) by introducing dummy tensors and hyperedges.
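The snippet below (ours) evaluates the size-\(R\) hyperedge of Eq. (5) and verifies the equivalent view stated above, namely a contraction with a superdiagonal tensor whose diagonal elements are 1.

```
# The hyperedge of Eq. (5) as a shared summation index l.
import numpy as np

I, J, K, R = 3, 4, 5, 2
A, B, C = np.random.rand(I, R), np.random.rand(J, R), np.random.rand(K, R)

Y = np.einsum('il,jl,kl->ijk', A, B, C)           # hyperedge contraction

# Equivalent view: contraction with a superdiagonal tensor of ones.
G = np.zeros((R, R, R))
G[np.arange(R), np.arange(R), np.arange(R)] = 1.0
Y2 = np.einsum('ia,jb,kc,abc->ijk', A, B, C, G)
print(np.allclose(Y, Y2))                          # True
```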
#### 2.2.5 Tensor Unfolding
Tensor unfolding is an operation that virtually flattens a tensor into a high-dimensional but low-order tensor. Matricization is a special case of tensor unfolding. To be more specific, given an \(N\)th-order tensor \(\boldsymbol{\mathcal{A}}\in\mathbb{R}^{I_{1}\times I_{2}\ldots\times I_{N}}\), its mode-\(n\) unfolding process yields a matrix \(\boldsymbol{A}_{(n)}\in\mathbb{R}^{I_{n}\times I_{1}I_{2}\ldots I_{n-1}I_{n+1} \ldots I_{N}}\). Such an operation can also be regarded as performing tensor contraction with a specifically designed tensor. A fourth-order tensor unfolding diagram is illustrated in Fig. 1.
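One common index ordering for mode-\(n\) unfolding (conventions differ across the literature) takes two lines of NumPy:

```
# Mode-n unfolding: move mode n to the front and flatten the rest.
import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

X = np.random.rand(2, 3, 4, 5)     # a 4th-order tensor
print(unfold(X, 2).shape)          # (4, 30) = (I_3, I_1 * I_2 * I_4)
```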
### _Tensor Decomposition Formats_
The commonly used terminology "tensor decomposition" (TD) is, to some extent, equivalent to "tensor network". Previously, TD was employed primarily in signal processing [150, 151], while TNs were originally utilized largely in the physics and quantum circuit fields [8, 148]. Traditional TD models, such as CP [13, 14, 15] and Tucker decomposition [16, 17], can be viewed as basic kinds of TNs. Several powerful TN architectures for quantum analysis have also been introduced into signal processing. For instance, MPS decomposition [152] was redefined as TT decomposition [21] and has achieved tremendous success in several applications [12]. After years of collaboration and progress across different research fields, there is no longer a significant distinction between these two terminologies. Therefore, TD and TNs are treated in a unified way in this paper. We briefly introduce some basic TDs by employing TN diagrams.
#### 2.3.1 CANDECOMP/PARAFAC
Fig. 2: Implementation of CP on a 5th-order tensor \(\mathcal{T}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}\times I_{4}\times I_{5}}\). We show the higher-order tensor in a matrix representation of size \(I_{1}\times I_{2}\), whose elements are 3rd-order tensors of size \(I_{3}\times I_{4}\times I_{5}\). The main idea of CP is to decompose \(\mathcal{T}\) into five factor vectors that encode the modes of \(\mathcal{T}\). Then, it is feasible to apply the outer product to these factor vectors to recover \(\mathcal{T}\). Moreover, by regarding such an outer product as a component, CP with a rank of \(R\) sums over \(R\) components to further improve the expressive power of the representation.

CP [13, 14, 15] factorizes a higher-order tensor into a sum of several rank-1 tensor components. For instance, given an \(N\)th-order tensor \(\boldsymbol{\mathcal{X}}\in\mathbb{R}^{I_{1}\times I_{2}\ldots\times I_{N}}\), each of its elements in the CP format can be formulated as
\[\mathbf{\mathscr{X}}_{i_{1},i_{2},\ldots,i_{N}}\approx\sum_{r=1}^{R}\mathbf{\mathscr{S}}_ {r}\prod_{n=1}^{N}\mathbf{\mathsf{A}}_{i_{n},r}^{(n)}, \tag{6}\]
where \(R\) denotes the CP rank (defined as the smallest possible number of rank-1 tensors [150]), \(\mathbf{\mathscr{S}}\) denotes the diagonal core tensor (only the \(R\) nonzero elements on the superdiagonal) and \(\mathbf{\mathsf{A}}^{(n)}\in\mathbb{R}^{I_{n}\times R}\) denotes a series of factor matrices. The TN diagram for CP is illustrated in Fig. 3 (a). We also provide a detailed visualization of CP in Fig. 2 as an illustrative case of a TN.
When calculating a CP format, the first issue that arises is how to determine the number of rank-1 tensor components, i.e., the CP rank \(R\). This is actually an NP-hard problem [153]. Hence, in practice, a numerical value is usually assumed in advance (i.e., as a hyperparameter) to fit various CP-based models [150]. After that, the diagonal core tensor \(\boldsymbol{\mathcal{S}}\) and the factor matrices \(\mathbf{A}^{(n)}\) can be solved directly by algorithmic iteration, usually the alternating least-squares (ALS) method originally proposed in [13, 14].
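A minimal, unoptimized ALS sketch for a 3rd-order CP model is given below; the scaling \(\boldsymbol{\mathcal{S}}\) of Eq. (6) is absorbed into the factor matrices, and the code is meant only to illustrate the alternating updates.

```
# A minimal CP-ALS sketch for a 3rd-order tensor (illustrative only).
import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def cp_als(X, R, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    A = [rng.standard_normal((dim, R)) for dim in X.shape]
    for _ in range(n_iter):
        for n in range(3):
            idx = [i for i in range(3) if i != n]
            # Rowwise Khatri-Rao product of the other two factors, ordered
            # to match the column ordering produced by unfold above.
            KR = np.einsum('ir,jr->ijr', A[idx[0]], A[idx[1]]).reshape(-1, R)
            A[n] = unfold(X, n) @ KR @ np.linalg.pinv(KR.T @ KR)
    return A

X = np.random.rand(4, 5, 6)
A = cp_als(X, R=3)
X_hat = np.einsum('ir,jr,kr->ijk', A[0], A[1], A[2])
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```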
#### 2.3.2 Tucker Decomposition
Tucker decomposition [16, 17] factorizes a higher-order tensor into a core tensor multiplied by a corresponding factor matrix along each mode. To be more specific, given an \(N\)th-order tensor \(\mathbf{\mathscr{X}}\in\mathbb{R}^{I_{1}\times I_{2}\ldots I_{N}}\), the Tucker decomposition can be formulated in an elementwise manner as
\[\mathbf{\mathscr{X}}_{i_{1},i_{2},\ldots,i_{N}}\approx\sum_{r_{1},\ldots,r_{N}=1}^ {R_{1},\ldots,R_{N}}\mathbf{\mathscr{S}}_{r_{1},r_{2},\ldots,r_{N}}\prod_{n=1}^{N }\mathbf{\mathsf{A}}_{i_{n},r_{n}}^{(n)}, \tag{7}\]
where \(\{R_{1},R_{2},\ldots,R_{N}\}\) denotes a series of Tucker ranks, \(\mathbf{\mathscr{S}}\in\mathbb{R}^{R_{1}\times R_{2}\ldots R_{N}}\) denotes the core tensor and \(\mathbf{\mathsf{A}}^{(n)}\in\mathbb{R}^{I_{n}\times R_{n}}\) denotes a factor matrix. The TN diagram for Tucker decomposition is illustrated in Fig. 3 (b). Here, please note that compared with the CP rank, \(R_{1},R_{2},\ldots,R_{N}\) can take different numerical values.
Tucker decomposition is commonly used and degrades to CP when the core tensor \(\boldsymbol{\mathcal{G}}\) is set to a superdiagonal tensor whose diagonal elements are 1. In addition, Tucker decomposition lacks constraints on its factors, leading to non-unique decomposition results, which is typically undesirable for practical applications due to the lack of explainability. Consequently, orthogonality constraints are usually imposed on the factor matrices, yielding the well-known classical higher-order singular value decomposition (HOSVD) algorithm [154].
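A compact sketch of truncated HOSVD for a 3rd-order tensor follows Eq. (7) directly: each factor matrix is formed from the leading left singular vectors of the corresponding mode-\(n\) unfolding, and the core is obtained by projection (our own illustrative code).

```
# A minimal truncated HOSVD sketch for Tucker decomposition, Eq. (7).
import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def hosvd(X, ranks):
    A = []
    for n, R in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, n), full_matrices=False)
        A.append(U[:, :R])              # orthogonal factor matrix
    # Core tensor: project X onto the factor bases along every mode.
    G = np.einsum('ijk,ia,jb,kc->abc', X, A[0], A[1], A[2])
    return G, A

X = np.random.rand(6, 7, 8)
G, A = hosvd(X, ranks=(3, 3, 3))
X_hat = np.einsum('abc,ia,jb,kc->ijk', G, A[0], A[1], A[2])
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```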
#### 2.3.3 BTT Decomposition
CP and Tucker decomposition both decompose a tensor into a core tensor multiplied by a matrix along each mode, while CP imposes an additional superdiagonal constraint on the core tensor for the sake of simplifying its structural information. A more generalized decomposition method called BTT decomposition [18] has been proposed to make a tradeoff between the CP and Tucker methods by imposing a block diagonal constraint on Tucker's core tensor. The TN diagram for BTT decomposition is illustrated in Fig. 3 (c).

Fig. 3: TN diagrams of some popular decompositions. (a) Diagrams of the CP format. It decomposes a tensor \(\boldsymbol{\mathcal{X}}\) into a sum of several rank-1 tensors \(\mathbf{a}_{r}^{(1)}\circ\mathbf{a}_{r}^{(2)}\circ\cdots\circ\mathbf{a}_{r}^{(N)}\). (b) Diagrams of Tucker decomposition. It decomposes a tensor \(\boldsymbol{\mathcal{X}}\) into a core tensor \(\boldsymbol{\mathcal{G}}\) multiplied by a matrix \(\mathbf{A}^{(n)}\) along the \(n\)th mode. (c) Diagram of block term decomposition. It decomposes a tensor \(\boldsymbol{\mathcal{X}}\) into a sum of several Tucker decompositions (on the right) with low Tucker ranks. (d) Diagram of TT decomposition. It decomposes a tensor \(\boldsymbol{\mathcal{X}}\) into a linear multiplication of a set of 3rd-order core tensors \(\boldsymbol{\mathcal{G}}^{(2)}\cdots\boldsymbol{\mathcal{G}}^{(N-1)}\) and two matrices \(\boldsymbol{\mathcal{G}}^{(1)},\ \boldsymbol{\mathcal{G}}^{(N)}\). (e) Diagram of TR decomposition. It decomposes a tensor \(\boldsymbol{\mathcal{X}}\) into a set of 3rd-order core tensors and contracts them into a ring structure. (f) Diagram of HT Decomposition. It represents a tensor \(\boldsymbol{\mathcal{X}}\) as a tree-like diagram. For more basic knowledge about TNs, refer to [8] and [11].
BTT decomposition aims to decompose a tensor into a sum of several Tucker decompositions with low Tucker ranks. Specifically, the BTT decomposition of a 4th-order tensor \(\mathbf{\mathfrak{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}\times I_{4}}\) can be represented by 6 nodes with special contractions. Here, \(\mathbf{\mathfrak{S}}\in\mathbb{R}^{R_{C}\times R_{T}\times R_{T}\times R_{T} \times R_{T}}\) denotes the \(R_{C}\) core tensors of the Tucker decompositions, and each \(\mathbf{\mathcal{A}}^{(n)}\in\mathbb{R}^{R_{C}\times I_{u}\times R_{T}}\) denotes the \(R_{C}\) corresponding factor matrices of the Tucker decompositions. Moreover, each element of \(\mathbf{\mathfrak{X}}\) is computed as
\[\mathbf{\mathfrak{X}}_{i_{1},i_{2},i_{3},i_{4}}\approx\sum_{r_{C}=1} ^{R_{C}}\sum_{r_{1},r_{2},r_{3},r_{4}=1}^{R_{T},R_{T},R_{T},R_{T}}\] \[\mathbf{\mathfrak{S}}_{r_{C},r_{1},r_{2},r_{3},r_{4}}\mathbf{\mathcal{A}} ^{(1)}_{r_{C},i_{1},r_{1}}\mathbf{\mathcal{A}}^{(2)}_{r_{C},i_{2},r_{2}}\mathbf{ \mathcal{A}}^{(3)}_{r_{C},i_{3},r_{3}}\mathbf{\mathcal{A}}^{(4)}_{r_{C},i_{4},r_{4}}, \tag{8}\]
where \(R_{T}\) denotes the Tucker rank (which means that the Tucker rank equals \(\{R_{T},R_{T},R_{T},R_{T}\}\)) and \(R_{C}\) represents the CP rank. Together, they are called BT ranks.
The advantages of BTT decomposition depend mainly on its compatibility with the benefits of both the CP and Tucker methods. The reason is that when the Tucker rank equals 1, BTT decomposition degenerates to CP; when the CP rank equals 1, it degenerates to Tucker decomposition.
#### 2.3.4 TT Decomposition
TT decomposition [21, 22], also called MPS decomposition in quantum physics [152, 155], is derived purely from TNs. TT decomposition factorizes a higher-order tensor into a linear multiplication of a series of 3rd-order core tensors. For example, given an \(N\)th-order tensor \(\mathbf{\mathfrak{X}}\in\mathbb{R}^{I_{1}\times I_{2}\ldots I_{N}}\), the TT decomposition can be formulated in an elementwise manner as
\[\mathbf{\mathfrak{X}}_{i_{1},i_{2},\ldots,i_{N}}\approx\sum_{r_{1},r_ {2},\ldots,r_{N-1}=1}^{R_{1},R_{2},\ldots,R_{N-1}}\] \[\mathbf{\mathfrak{S}}^{(1)}_{1,i_{1},r_{1}}\mathbf{\mathfrak{S}}^{(2)}_{ r_{1},i_{2},r_{2}}\mathbf{\mathfrak{S}}^{(3)}_{r_{2},i_{3},r_{3}}\cdots\mathbf{ \mathfrak{S}}^{(N)}_{r_{N-1},i_{N},1}, \tag{9}\]
where \(\{R_{1},R_{2},\ldots,R_{N-1}\}\) denote the TT ranks, \(\mathbf{\mathfrak{S}}^{(n)}\in\mathbb{R}^{R_{n-1}\times I_{n}\times R_{n}}\) denotes a 3rd-order core tensor and \(R_{0}=R_{N}=1\), which means that \(\mathbf{\mathfrak{S}}^{(1)}\) and \(\mathbf{\mathfrak{S}}^{(N)}\) are actually two matrices. The TN diagram for TT decomposition is illustrated in Fig. 3 (d).
TT decomposition can be computed easily by employing SVD recursively. In addition, as the simplest model among the available TN, TT decomposition is widely applied in the theory and practice of TNs [9]. Notably, Eq. (9) and Fig. 3 (d) have an MPS format. Some papers [77, 89, 93] have also used TT decomposition with an MPO [156] format. Given a \(2N\)-order tensor \(\mathbf{\mathfrak{X}}\in\mathbb{R}^{I_{1}\times J_{1}\times I_{2}\times J_{2} \ldots I_{N}\times J_{N}}\), its MPO decomposition can be mathematically expressed as
\[\mathbf{\mathfrak{X}}_{i_{1},j_{1},i_{2},j_{2},\ldots,i_{N},j_{N}} \approx\sum_{r_{1},r_{2},\ldots,r_{N-1}=1}^{R_{1},R_{2},\ldots,R_{N-1}}\] \[\mathbf{\mathfrak{S}}^{(1)}_{1,i_{1},j_{1},r_{1}}\mathbf{\mathfrak{S}}^{(2 )}_{r_{1},i_{2},j_{2},r_{2}}\mathbf{\mathfrak{S}}^{(3)}_{r_{2},i_{3},j_{3},r_{3}} \cdots\mathbf{\mathfrak{S}}^{(N)}_{r_{N-1},i_{N},j_{N},1}, \tag{10}\]
where \(\{R_{1},R_{2},\ldots,R_{N-1}\}\) denote the ranks, \(\mathbf{\mathfrak{S}}^{(n)}\in\mathbb{R}^{R_{n-1}\times I_{n}\times J_{n}\times R_{n}}\) denotes a 4th-order core tensor and \(R_{0}=R_{N}=1\), which means that \(\mathbf{\mathfrak{S}}^{(1)}\) and \(\mathbf{\mathfrak{S}}^{(N)}\) are actually two 3rd-order core tensors.
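Reconstructing a tensor from given TT cores, as in Eq. (9), is a single chain of contractions. The sketch below uses random cores purely to show the index pattern; the boundary modes of size \(R_{0}=R_{N}=1\) are kept explicit.

```
# Reconstruction from TT cores, Eq. (9) (random cores, illustrative).
import numpy as np

I, R = (4, 5, 6), (3, 2)          # dimensions and TT ranks; R_0 = R_3 = 1
G1 = np.random.rand(1, I[0], R[0])
G2 = np.random.rand(R[0], I[1], R[1])
G3 = np.random.rand(R[1], I[2], 1)

# X_{i1,i2,i3} = sum_{r1,r2} G1[1,i1,r1] G2[r1,i2,r2] G3[r2,i3,1]
X = np.einsum('aib,bjc,ckd->ijk', G1, G2, G3)  # a, d have size 1
print(X.shape)                                  # (4, 5, 6)
```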
#### 2.3.5 TR Decomposition
TT benefits from fast convergence. However, it suffers from its two endpoints, which hinder the representation ability and flexibility of TT-based models. Thus, to release the power of a linear architecture, researchers link its endpoints to produce a ring format named a TR [25]. The TR decomposition of a tensor \(\mathbf{\mathfrak{X}}\in\mathbb{R}^{I_{1}\times I_{2}\ldots I_{N}}\) can be formulated as
\[\mathbf{\mathfrak{X}}_{i_{1},i_{2},\ldots,i_{N}}\approx\sum_{r_{0},r_ {1},\ldots,r_{N-1}}^{R_{0},R_{1},\ldots,R_{N-1}}\] \[\mathbf{\mathfrak{S}}^{(1)}_{r_{0},i_{1},r_{1}}\mathbf{\mathfrak{S}}^{(2 )}_{r_{1},i_{2},r_{2}}\mathbf{\mathfrak{S}}^{(3)}_{r_{2},i_{3},r_{3}}\cdots\mathbf{ \mathfrak{S}}^{(N)}_{r_{N-1},i_{N},r_{0}}, \tag{11}\]
where \(\{R_{0},R_{1},\ldots,R_{N}\}\) denote the TR ranks, each node \(\mathbf{\mathfrak{S}}^{(n)}\in\mathbb{R}^{R_{n-1}\times I_{n}\times R_{n}}\) is a 3rd-order tensor and \(R_{0}=R_{N}\). Compared with TT decomposition, it is not necessary for TR decomposition to follow a strict order when multiplying its nodes. The TN diagram for TR decomposition is illustrated in Fig. 3 (e).
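The TR contraction of Eq. (11) differs from the TT case only in that the boundary rank index \(r_{0}\) is shared by the first and last cores, which closes the chain into a ring; a minimal illustrative sketch follows.

```
# Reconstruction from TR cores, Eq. (11): a trace over the shared rank R_0.
import numpy as np

I, R = (4, 5, 6), (2, 3, 2)       # R = (R_0, R_1, R_2), with R_3 = R_0
G1 = np.random.rand(R[0], I[0], R[1])
G2 = np.random.rand(R[1], I[1], R[2])
G3 = np.random.rand(R[2], I[2], R[0])

# X_{i1,i2,i3} = sum_{r0,r1,r2} G1[r0,i1,r1] G2[r1,i2,r2] G3[r2,i3,r0]
X = np.einsum('aib,bjc,cka->ijk', G1, G2, G3)
print(X.shape)                     # (4, 5, 6)
```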
#### 2.3.6 HT Decomposition
HT decomposition [26] possesses a tree-like structure. In general, it is feasible to transfer a tensor \(\mathbf{\mathfrak{X}}\in\mathbb{R}^{I_{1}\times\cdots\times I_{N}}\) to a binary tree with a root node associated with \(S_{set}=\{1,2,\cdots,N\}\) and \(\mathbf{\mathfrak{X}}=\mathbf{\mathfrak{U}}_{S_{set}}\) as the root frame. \(S_{set1},S_{set2}\subseteq S_{set}\) are defined as the index sets associated with the left child node \(\mathbf{\mathfrak{U}}_{S_{set1}}\) and the right child node \(\mathbf{\mathfrak{U}}_{S_{set2}}\). \(\mathbf{\mathfrak{U}}_{S_{set1}}\in\mathbb{R}^{R_{1}\times I_{\min(S_{set1})}\times\cdots\times I_{\max(S_{set1})}}\) can itself be recursively decomposed into its left child node \(\mathbf{\mathfrak{U}}_{D_{set1}}\) and right child node \(\mathbf{\mathfrak{U}}_{D_{set2}}\). The first three steps are as follows:
\[\mathbf{\mathfrak{U}}_{S_{set}}\approx\mathbf{\mathfrak{S}}_{s}\times_{1}^{2} \mathbf{\mathfrak{U}}_{S_{set1}}\times_{1}^{2}\mathbf{\mathfrak{U}}_{S_{set2}}, \tag{12}\] \[\mathbf{\mathfrak{U}}_{S_{set1}}\approx\mathbf{\mathfrak{S}}_{s1}\times_{1}^ {2}\mathbf{\mathfrak{U}}_{D_{set1}}\times_{1}^{2}\mathbf{\mathfrak{U}}_{D_{set2}},\] (13) \[\mathbf{\mathfrak{U}}_{S_{set2}}\approx\mathbf{\mathfrak{S}}_{s2}\times_{1}^ {2}\mathbf{\mathfrak{U}}_{D_{set3}}\times_{1}^{2}\mathbf{\mathfrak{U}}_{D_{set4}}, \tag{14}\]
where \(\mathbf{\mathfrak{S}}_{s}\in\mathbb{R}^{R_{1}\times R_{2}}\), \(\mathbf{\mathfrak{S}}_{s1}\in\mathbb{R}^{R_{1}\times R_{3}\times R_{4}}\) and \(\mathbf{\mathfrak{S}}_{s2}\in\mathbb{R}^{R_{2}\times R_{5}\times R_{6}}\). This procedure can be performed recursively to obtain a tree-like structure. The TN diagram for HT decomposition is illustrated in Fig. 3 (f).
#### 2.3.7 PEPS Decomposition
A TN structure with a different topology and higher-dimensional connections can also be considered. PEPS decomposition [8, 27, 28], also known as tensor grid decomposition [157], is a high-dimensional TN that generalizes a TT. PEPS decomposition provides a natural structure that can capture more high-dimensional information. PEPS cores can be characterized as \(\mathbf{\mathfrak{S}}^{(m,n)}\in\mathbb{R}^{I_{mn}\times R_{l_{mn}}\times R_{r_{mn}}\times R_{u_{mn}}\times R_{d_{mn}}}\), where the subscripts \(l\), \(r\), \(u\) and \(d\) denote the left, right, up and down bond (rank) indices of the core at grid position \((m,n)\). The mathematical formula [75] is
\[\mathbf{\mathfrak{X}}_{i_{11},i_{12},\ldots,i_{MN}}\approx\sum_{\{r\}}\prod_{m=1}^{M}\prod_{n=1}^{N}\mathbf{\mathfrak{S}}^{(m,n)}_{i_{mn},r_{l_{mn}},r_{r_{mn}},r_{u_{mn}},r_{d_{mn}}}, \tag{15}\]
where \(\{r\}\) denotes the set of all rank indices shared between adjacent cores on the grid. The TN diagram for PEPS
decomposition is illustrated in Fig. 3 (g). PEPS decomposition has a polynomial correlation decay with the separation distance. In contrast, MPS decomposition has an exponential correlation decay. This indicates that PEPS decomposition has a more powerful representation ability [8] because it strengthens the interactions between different tensor modes.
## 3 Network Compression with TNNs
DNNs have extraordinarily high spatial and temporal complexity levels, as deeply stacked layers contain large-scale matrix multiplications. As a result, DNNs usually require several days for training while occupying a large amount of memory for inference purposes. In addition, large weight redundancy has been proven to exist in DNNs [158], indicating the possibility of compressing DNNs while maintaining performance. Motivated by this, a wide range of compression techniques have been developed, including pruning [159, 160], quantization [161, 162], distillation [163, 164] and low-rank decomposition [79, 165, 166]. Among them, applying TNs to DNNs to construct TNNs can be a good choice, since TNNs have excellent abilities to approximate the original weights with many fewer parameters [113]. In this direction, researchers have completed many studies, especially concerning the reconstruction of convolutional and fully connected layers through a variety of TD formats [61, 67, 69, 79]. With compact architectures, these TNNs can achieve improved performance with less redundancy. In this section, we introduce five common kinds of TNNs, i.e., TCNNs in Section 3.1, tensorial RNNs (TRNNs) in Section 3.2, tensorial Transformers in Section 3.3, tensorial GNNs (TGNNs) in Section 3.4, and tensorial RBMs in Section 3.5.
### _TCNNs_
CNNs have recently achieved much success. However, CNNs' enormous sizes cause weight redundancy and superfluous computations, affecting both their performance and efficiency. TD methods can be effective solutions to this problem. Commonly, CNNs represented with tensor formats are called TCNNs. Prior to introducing TCNNs, we formulate a vanilla CNN, shown in Fig. 4 (a), as
\[\mathfrak{Y}=\mathfrak{X}\otimes\mathfrak{C}+\mathbf{b}, \tag{17}\]
where \(\mathfrak{C}\in\mathbb{R}^{K\times K\times I\times O}\) denotes a convolutional weight, \(\mathfrak{X}\in\mathbb{R}^{I\times H\times W}\) denotes an input, \(\mathfrak{Y}\in\mathbb{R}^{O\times H^{\prime}\times W^{\prime}}\) denotes an output, \(\mathbf{b}\in\mathbb{R}^{O}\) represents a bias, and \(\otimes\) denotes the convolutional operator. \(K\) represents the kernel window size, \(I\) is the number of input channels, \(H\) and \(W\) denote the height and width of \(\mathfrak{X}\), \(O\) is the number of output channels, and \(H^{\prime}\) and \(W^{\prime}\) denote the height and width of \(\mathfrak{Y}\), respectively. TCNNs mainly focus on decomposing the channels \(I\) and \(O\). In detail, the weight \(\mathfrak{C}\) is first reshaped to \(\tilde{\mathfrak{C}}\in\mathbb{R}^{K\times K\times I_{1}\times I_{2}\times\cdots\times I_{M}\times O_{1}\times O_{2}\times\cdots\times O_{N}}\), where \(\prod_{k=1}^{M}I_{k}=I\) and \(\prod_{k=1}^{N}O_{k}=O\). Then, TCNNs can be derived by tensorizing the reshaped convolutional kernel \(\tilde{\mathfrak{C}}\).
To accelerate the CNN training and inference process, CP-CNN [61, 62, 63] is constructed by decomposing the convolutional weight into the CP format, as shown in Fig. 4 (d). CP-CNN only contains vectors as subcomponents, leading to an extremely compact structure and the highest compression ratio. As with CP-CNN, it is possible to implement additional TCNNs by applying other tensor formats (as seen in the examples in Fig. 3) to the convolutional weight. Tucker decomposition, a widely used tensor format, is often applied to CNNs to form Tucker-CNNs [64, 65]. Different from simple Tucker formats, BTT-CNNs [69] have a hyperedge \(R_{c}\) that denotes the summation of \(R_{c}\) Tucker decompositions. Compared to Tucker-CNNs, BTT-CNNs are much more powerful and usually derive better results [69]. Highly compact TT formats have also been introduced to CNNs to implement TT-CNNs [66]. Compared to TTs, TR formats are usually much more compact [68], and TR-CNNs [68] are much more powerful than TT-CNNs.
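To illustrate how a CP-format convolution can be realized, the sketch below chains four small convolutions (pointwise, vertical depthwise, horizontal depthwise, pointwise), in the spirit of CP-CNN [61]; the channel sizes, the rank, and the class name are illustrative assumptions, not the exact implementation of any cited work.

```python
import torch
import torch.nn as nn

class CPConv2d(nn.Module):
    """Sketch of a CP-format convolution: the K x K x I x O kernel is replaced
    by four small rank-R factors applied as a chain of convolutions."""
    def __init__(self, in_ch, out_ch, kernel_size, rank, padding=0):
        super().__init__()
        self.head = nn.Conv2d(in_ch, rank, 1, bias=False)             # I -> R
        self.vert = nn.Conv2d(rank, rank, (kernel_size, 1),
                              padding=(padding, 0), groups=rank, bias=False)
        self.horz = nn.Conv2d(rank, rank, (1, kernel_size),
                              padding=(0, padding), groups=rank, bias=False)
        self.tail = nn.Conv2d(rank, out_ch, 1, bias=True)             # R -> O

    def forward(self, x):
        return self.tail(self.horz(self.vert(self.head(x))))

x = torch.randn(1, 64, 32, 32)
layer = CPConv2d(64, 128, kernel_size=3, rank=16, padding=1)
print(layer(x).shape)  # torch.Size([1, 128, 32, 32])
```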
There are also some tensorial convolutional neural networks that decompose more than just the convolution cores. The tensorized network (T-Net) [73] treats the whole network as a one-layer architecture and then decomposes it. As a result, the T-Net achieves better results with a lighter structure. CP-higher-order convolution (CP-HOConv) [76] utilizes the CP format to handle tasks with higher-order data, e.g., spatiotemporal emotion estimation. For multitask missions, Yang et al. [74] proposed the Tensor Train multitask (TTMT) and Tucker multitask (TMT) models using TT and Tucker formats, respectively, to alleviate the negative transfer problem in a hard sharing architecture and reduce the parameter volume in a soft structure. A PEPS-like concatenated TN layer [75] for multitask missions was also proposed. Unlike the TTMT and TMT models, which suffer from the negative transfer problem due to their hard sharing architectures, the PEPS structure only contains a soft sharing layer, thereby achieving better performance.
Fig. 4: Correspondence between TN diagrams and convolutional procedures. In each subfigure, the left part is a TN diagram, and the right part is the associated commonly used feature representation.
### _TRNNs_
RNNs, such as the vanilla RNN and LSTM, have achieved promising performance on sequential data. However, when dealing with high-dimensional input data (e.g., video and text data), the input-to-hidden and hidden-to-hidden transformations in RNNs will result in high memory usage rates and computational costs. To solve this problem, low-rank TD is efficient for compressing the transformation process in practice. First, we formulate an RNN as
\[\mathbf{h}^{(t+1)}=\phi(\mathbf{W}\mathbf{x}^{(t)}+\mathbf{U}\mathbf{h}^{(t)}+ \mathbf{b}), \tag{18}\]
where \(\mathbf{h}^{(t)}\in\mathbb{R}^{O}\) and \(\mathbf{x}^{(t)}\in\mathbb{R}^{I}\) denote the hidden state and input feature at time \(t\), respectively, \(\mathbf{W}\in\mathbb{R}^{O\times I}\) is the input-to-hidden matrix, \(\mathbf{U}\in\mathbb{R}^{O\times O}\) represents the hidden-to-hidden matrix, and \(\mathbf{b}\in\mathbb{R}^{O}\) is a bias. \(\phi(\cdot)\) indicates a series of operations that form RNN variants, including the vanilla RNN and LSTM [167]. Eq. (18) can also be reformulated in a concatenation form that is widely used in TD:
\[\mathbf{h}^{(t+1)}=\phi([\mathbf{W},\mathbf{U}][\mathbf{x}^{(t)},\mathbf{h}^ {(t)}]+\mathbf{b}), \tag{19}\]
where \([\mathbf{W},\mathbf{U}]\in\mathbb{R}^{O\times(I+O)}\) and \([\mathbf{x}^{(t)},\mathbf{h}^{(t)}]\in\mathbb{R}^{(I+O)}\) denote the concatenations of \(\mathbf{W},\mathbf{U}\) and \(\mathbf{x}^{(t)},\mathbf{h}^{(t)}\), respectively. As shown in Fig. 5, there are usually two ways to decompose RNNs: (a) tensorizing only \(\mathbf{W}\), which is often the largest component in an RNN, and (b) tensorizing \([\mathbf{W},\mathbf{U}]\) for extreme compression. Note that since \(\mathbf{U}\) is usually smaller than \(\mathbf{W}\), no existing work decomposes \(\mathbf{U}\) alone. The process of implementing a TRNN is the same as that used to implement a TCNN, namely, reshaping the weights into higher-order formulations and replacing them with tensor formats.
The most direct and simple compression method is to solely decompose the enormous input-to-hidden matrix \(\mathbf{W}\). The CP-RNN and Tucker-RNN [64] can be directly constructed with the CP and Tucker formats, respectively. With an extremely compact low-rank structure, the CP-RNN can always derive the smallest size in comparison with other tensor formats. The TT-RNN [77] implements the TT format on an RNN to obtain a high parameter compression ratio. However, the TT-RNN suffers from a linear structure with two smaller endpoints, which hinders the representation ability and flexibility of TT-based models. To release the power of a linear architecture, TRs were proposed to link the endpoints to create a ring format [25]. An RNN with a TR, i.e., the TR-RNN [79], was thus formed to achieve a much more compact network. BTT-RNN [69, 78] was constructed on the generalized TD approach: BTT decomposition [19]. BTT-RNN can automatically learn interparameter correlations to implicitly prune redundant dense connections and simultaneously achieve better performance.
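The following sketch makes the compression effect tangible by replacing a dense input-to-hidden product \(\mathbf{W}\mathbf{x}^{(t)}\) with a two-core TT (MPO) contraction; the factorizations \(I=I_{1}I_{2}\) and \(O=O_{1}O_{2}\), the rank, and the random cores are illustrative assumptions.

```python
import torch

# Illustrative sizes: I = I1 * I2 = 256 inputs, O = O1 * O2 = 64 hidden units.
I1, I2, O1, O2, R = 16, 16, 8, 8, 4
G1 = torch.randn(1, O1, I1, R) * 0.1          # first TT core
G2 = torch.randn(R, O2, I2, 1) * 0.1          # second TT core
# 1,024 TT parameters replace the 16,384 entries of the dense 64 x 256 matrix.

def tt_matvec(x):
    """Compute W @ x where W is represented by the two TT cores."""
    X = x.reshape(I1, I2)
    # contract input mode i1 with G1, then i2 and the shared rank r with G2
    T = torch.einsum('aoir,ij->aojr', G1, X)      # (1, O1, I2, R)
    y = torch.einsum('aojr,rpjb->aopb', T, G2)    # (1, O1, O2, 1)
    return y.reshape(O1 * O2)

h = tt_matvec(torch.randn(I1 * I2))
print(h.shape)  # torch.Size([64])
```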
Moreover, studies are utilizing TD to compress an RNN's two transformation layers, and some have even developed decomposition methods that are suitable for both RNNs and CNNs. TT-GRU [82] and the HT-RNN [80] decompose \([\mathbf{W},\mathbf{U}]\) to attain a higher compression ratio. Specifically, TT-GRU [82] applies a TT for decomposition, and the HT-RNN [80] adopts HT decomposition. Unlike prior works that decomposed hidden matrices, Conv-TT-LSTM [84] utilizes the idea of a TT to represent convolutional operations. As shown in Fig. 5, through a TT-like convolution, Conv-TT-LSTM can replace convolutional LSTM with fewer parameters while achieving good results on action benchmarks. For the adaptation of both CNNs and RNNs, a hybrid TD (termed HT-TT) method that combines HT and TT decomposition [72] was adopted to compress both the CNN and RNN \([\mathbf{W},\mathbf{U}]\) matrices. In addition, the tensor contraction layer (TC-Layer) [71] was designed to replace the fully connected layer and therefore can be utilized as the last layer of a CNN and the hidden layers in RNNs. Interestingly, TC-Layer is a special case of a TT-based layer obtained by setting the ranks to 1.
### _Tensorial Transformers_
Transformers [168, 46] are well known for processing sequence data. Compared with CNNs and RNNs, Transformers can be stacked into large-scale sizes to achieve significant performance [47]. However, Transformers, like classic DNNs, are still redundant and can be made smaller and more efficient [88]. Therefore, TD, as a flexible compression tool, can be explored to reduce the numbers of parameters in Transformers [85, 86, 87].
Classic Transformers mainly consist of self-attention (SA) modules and feedforward networks (FFNs). SA processes the given query matrix \(\mathbf{Q}\), key matrix \(\mathbf{K}\) and value matrix \(\mathbf{V}\) with parameters \(\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V},\mathbf{W}^{O}\). More generally, SA is separated into \(n\) heads: \(\{\mathbf{W}_{i}^{Q}\}^{n},\{\mathbf{W}_{i}^{K}\}^{n},\{\mathbf{W}_{i}^{V}\}^{n}, \{\mathbf{W}_{i}^{O}\}^{n}\). Each head can be calculated as
\[\mathrm{Att}_{i}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathrm{softmax}\left( \frac{\mathbf{Q}\mathbf{W}_{i}^{Q}\mathbf{W}_{i}^{K}{}^{T}\mathbf{K}^{T}}{ \sqrt{d}}\right)\mathbf{V}\mathbf{W}_{i}^{V}{\mathbf{W}_{i}^{O}}^{T}. \tag{20}\]
Then, \(\mathrm{SA}(\mathbf{Q},\mathbf{K},\mathbf{V})=\sum_{i=1}^{n}\mathrm{Att}_{i}(\mathbf{Q},\mathbf{K},\mathbf{V})\). Another important component, the FFN, is formulated as
\[\mathrm{FFN}(\mathbf{X})=\mathrm{ReLU}(\mathbf{X}\mathbf{W}^{in}+\mathbf{b}^{ in})\mathbf{W}^{out}+\mathbf{b}^{out}, \tag{21}\]
where \(\mathbf{X}\) is the input, \(\mathbf{b}^{in}\) and \(\mathbf{b}^{out}\) are biases, and \(\mathbf{W}^{in}\) and \(\mathbf{W}^{out}\) are weights. Apparently, the number of parameters in a Transformer mainly resides in its linear transformation matrices, i.e., \(\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V},\mathbf{W}^{O}\), \(\mathbf{W}^{in}\) and \(\mathbf{W}^{out}\).
Therefore, most compression studies focus on eliminating the parameters of these matrices. For instance, the MPO structure was proposed to decompose each matrix in a Transformer [86], generating central tensors (containing the core information) and small auxiliary tensors. A tuning strategy was further adopted to continue training the auxiliary tensors to achieve a performance
Fig. 5: TR LSTM. It is effective at reducing the parameters of an LSTM model by replacing the input-to-hidden transformation weights with TR decomposition.
improvement while freezing the weight of the central tensor to retain the main information of the original matrix. Moreover, observing that a low-rank MPO structure can cause a severe performance drop, Hypoformer [89] was proposed based on hybrid TT decomposition; this approach concatenates a dense matrix part with a low-rank MPO part. Hypoformer retains the full-rank property while reducing the required numbers of operations and parameters to compress and accelerate the base Transformer. In addition, by concatenating all matrices into one larger tensor, Tucker-Bert [88] decomposes the concatenated tensor with Tucker decomposition to greatly reduce the number of parameters, leading to extreme compression and maintaining comparably good results. Interestingly, Tuformer [87] generalizes MHSA into the Tucker form, thus containing more expressive power and achieving better results, as shown in Fig. 6.
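As a sketch of the MPO idea applied to a Transformer weight, the snippet below factorizes a random stand-in for \(\mathbf{W}^{in}\) into two MPO cores with one truncated SVD; the dimension factorizations, the bond rank, and the random matrix are illustrative assumptions (a pretrained weight with a decaying spectrum would compress with far lower error than this random one).

```python
import numpy as np

# Illustrative FFN weight of shape (d, 4d) with d = 64 = 8*8 and 4d = 256 = 16*16.
d = 64
W = np.random.randn(d, 4 * d) / np.sqrt(d)
T = W.reshape(8, 8, 16, 16).transpose(0, 2, 1, 3)    # regroup as (i1, j1, i2, j2)
M = T.reshape(8 * 16, 8 * 16)                        # unfold for a single SVD
U, s, Vt = np.linalg.svd(M, full_matrices=False)
R = 20                                               # MPO bond dimension (rank)
core1 = (U[:, :R] * s[:R]).reshape(8, 16, R)         # core 1: (i1, j1, r1)
core2 = Vt[:R].reshape(R, 8, 16)                     # core 2: (r1, i2, j2)
approx = np.einsum('abr,rcd->abcd', core1, core2)
err = np.linalg.norm(approx - T) / np.linalg.norm(T)
dense, mpo = W.size, core1.size + core2.size
print(f"relative error {err:.3f}, params {dense} -> {mpo}")
```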
### _TGNNs_
GNNs have achieved groundbreaking performances across a range of applications and domains [169]. One classic GNN layer consists of an aggregation function for aggregating the neighbor node information and an update function for updating the current node information. For example, the processing step for node \(v\) in the \(k\)-th layer of a GNN can be formulated as
\[\begin{split}\mathbf{a}_{v}^{(k)}&\leftarrow \mathrm{Aggregate}_{(k)}\left(\left\{\mathbf{h}_{u}^{(k-1)},\forall u\in \mathcal{N}(v)\right\}\right),\\ \mathbf{h}_{v}^{(k)}&\leftarrow\mathrm{Update}_{(k) }\left(\mathbf{h}_{v}^{(k-1)},\mathbf{a}_{v}^{(k)}\right),\end{split} \tag{22}\]
where \(\mathbf{a}_{v}^{(k)}\) is an aggregated embedding vector, \(\mathbf{h}_{v}^{(k-1)}\) is a node embedding vector, and \(\mathcal{N}(v)\) is a neighbor node set. A typical choice for the update function is a single-layer perceptron, and simple summation/maximization is often chosen as the aggregation function. Classic GNNs suffer from low model expressivity since high-order nonlinear information among nodes is missed [90]. Because TGNNs offer a good tradeoff between expressivity and computational efficiency, using them for graph data processing is quite beneficial.
To efficiently parameterize permutation-invariant multilinear maps for modeling the interactions among neighbors in an undirected graph structure, a TGNN [90] makes use of a symmetric CP layer as its node aggregation function. It has been demonstrated that a TGNN has a strong capacity to represent any multilinear polynomial that is permutation-invariant, including the sum and mean pooling functions. Compared to undirected graph processing, TGNNs are even more naturally suited to high-order graph structures, such as knowledge graphs. Traditional relational graph convolutional networks neglect the trilinear interaction relations in knowledge graphs and additively combine the information possessed by entities. The TGCN [91] was proposed by using a low-rank Tucker layer as the aggregation function to improve the efficiency and reduce the memory footprint of multilinear modeling. TGNNs are also appropriate for high-order correlation modeling in dynamic spatial-temporal graph processing situations. For example, the DSTGNN [92] applies learnable TTG and STG modules to find dynamic temporal relations and spatial relations, respectively. Then, the DSTGNN explores the dynamic entangled correlations between the STG and TTG modules via a PEPS layer, which reduces the number of DSTGNN parameters.
### _Tensorial RBMs_
RBMs [42] are generative stochastic NNs that can learn a probability distribution from an input set. A standard RBM consists of one visible factor \(\mathbf{v}\in\mathbb{R}^{M}\) and one hidden factor \(\mathbf{h}\in\mathbb{R}^{N}\) and assigns the following energy function to a joint configuration \(\{\mathbf{v},\mathbf{h}\}\):
\[E(\mathbf{v},\mathbf{h})=-\mathbf{v}^{T}\mathbf{W}\mathbf{h}-\mathbf{v}^{T}\mathbf{b}-\mathbf{c}^{T}\mathbf{h}, \tag{23}\]
where \(\mathbf{b}\in\mathbb{R}^{M}\) and \(\mathbf{c}\in\mathbb{R}^{N}\) are the biases of the visible layer and hidden layer, respectively, and \(\mathbf{W}\in\mathbb{R}^{M\times N}\) is the mapping weight matrix. The probability distribution of the joint configuration \(\{\mathbf{v},\mathbf{h}\}\) can be defined as
\[P(\mathbf{v},\mathbf{h})=\frac{1}{Z}e^{-E(\mathbf{v},\mathbf{h})}, \tag{24}\]
where \(Z\) is a partition function. A loss function can then be defined on this learnable distribution to optimize the parameters. The RBM parameter count is dominated by the mapping weight matrix.
As a result, the majority of compression research concentrates on reducing the number of weight matrix variables. For instance, Tv-RBM [95] explores the use of an RBM for higher-order inputs. In this model, the weight matrix is transformed into a CP layer structure, where each visible layer is represented as a tensor while each hidden layer is still a vector. In another higher-order RBM, namely, Mv-RBM [96], its visible and hidden layers are all represented as matrices, and its weights are represented as a TC layer [71]. MPO-RBM [170] and TT-RBM [93] represent the weight matrix with a TT layer to greatly compress the number of required parameters. Moreover, TR-RBM [94] performs TR decomposition on its RBM, where the visible and hidden layers of TR-RBM are all generalized to tensors.
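A minimal sketch of the idea follows: the energy of Eq. (23) is evaluated with a rank-\(R\) factorized weight \(\mathbf{W}\approx\mathbf{A}\mathbf{B}^{T}\), the simplest (matrix) analogue of the CP/TT layers used by the models above; all sizes and the rank are illustrative assumptions.

```python
import numpy as np

# Rank-R factorization W ~ A @ B.T shrinks the dominant RBM parameter block.
M, N, R = 784, 256, 16
A = np.random.randn(M, R) * 0.01
B = np.random.randn(N, R) * 0.01
b, c = np.zeros(M), np.zeros(N)

def energy(v, h):
    # -v^T W h - v^T b - c^T h, with v^T W h = (A^T v) . (B^T h)
    return -(v @ A) @ (B.T @ h) - v @ b - c @ h

v = (np.random.rand(M) < 0.5).astype(float)
h = (np.random.rand(N) < 0.5).astype(float)
print(energy(v, h), "params:", A.size + B.size, "vs dense", M * N)
```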
**Remark.** Compact TNNs have demonstrated the potential to achieve extremely high compression ratios while preserving their
Fig. 6: Tensor diagrams for SA modules [86]. (a) It is feasible to represent a classic multihead SA (MHSA) mechanism in a tensor diagram. MHSA can be treated as a special case of tunable-head self-attention (THSA) by setting \(\mathbf{C}=\mathbf{I}_{H}\otimes(\mathbf{1}_{D}^{\top}\mathbf{1}_{D})\). (b) The THSA of the Tuformer can be a more generalized version of SA through a trainable matrix \(\mathbf{C}\). (c) THSA has a design space formulated as \(\mathbf{C}=\mathbf{C}_{1}\otimes(\mathbf{C}_{2}^{\top}\mathbf{C}_{3})\), which is the direct generalized form of MHSA.
model performance. However, their computational acceleration rates are not very significant compared with their compression ratios, which is mainly due to the contraction operations. This therefore calls for further research to improve the employed contraction strategies, since unoptimized contraction strategies can result in unsatisfactory running memory consumption.
## 4 Information Fusion via TNNs
In real-world data analyses, the collected data can be derived from multiple sources; e.g., vision, sound, and text sources can be contained in video data [142]. For example, in the VQA task, the key point lies in effectively modeling the interactions between the two modalities, i.e., text and image information. When processing such data, it is infeasible to treat diverse sources in the same form. Therefore, it is desirable to mix this information through multiple entrances to address multimodal sources in special building structures. Methods with such entrances are called information fusion approaches. Feature-level fusion [171] and decision-level fusion [172] are popular methods that were used in the early stage. However, these methods are simple linear methods that cannot efficiently model intramodality dynamics. To solve this problem, TNNs are utilized in fusion tasks for modeling intramodality dynamics based on their natural multilinear property. In addition, TNNs are capable of processing higher-order data, which further widens their applicability. In conclusion, TNs provide effective frameworks for tensor operations, and it is natural and meaningful to express and generalize the information fusion modules (such as attention modules and vector concatenation modules) encountered in deep learning through TNs. Therefore, many studies adopt TNNs to capture the higher-order interactions among data or parameters. In this section, we introduce two main series of TNN structures for information fusion: the tensor fusion layer in Section 4.1 and multimodal pooling in Section 4.2.
### _Tensor Fusion Layer-Based Methods_
Multimodal sentiment analysis is a task containing three communicative modalities, i.e., the textual modality, visual modality, and acoustic modality [97]. Addressing multimodal sentiment analysis, Zadeh et al. [97] proposed novel TNNs with deep information fusion layers named tensor fusion layers (TFLs), which can easily learn intramodality dynamics and intermodality dynamics and are able to aggregate multimodal interactions, thereby efficiently fusing the three communicative modalities. Specifically, a TFL first takes embedded feature vectors \(\mathbf{z_{t}}\), \(\mathbf{z_{v}}\) and \(\mathbf{z_{a}}\) derived by embedding networks rather than the original three data types. Then, the TFL concatenates a scalar \(1\) with each embedded feature vector:
\[\mathbf{z}_{t}^{{}^{\prime}}=\left[\begin{array}{c}\mathbf{z_{t}}\\ 1\end{array}\right]\mathbf{z}_{v}^{{}^{\prime}}=\left[\begin{array}{c} \mathbf{z_{v}}\\ 1\end{array}\right]\mathbf{z}_{a}^{{}^{\prime}}=\left[\begin{array}{c} \mathbf{z_{a}}\\ 1\end{array}\right]. \tag{25}\]
Then, as shown in Fig. 7, the TFL obtains a feature tensor \(\mathsf{\Sigma}\) by calculating the outer product among the three concatenated vectors:
\[\mathsf{\Sigma}=\mathbf{z}_{t}^{{}^{\prime}}\circ\mathbf{z}_{v}^{{}^{\prime}} \circ\mathbf{z}_{a}^{{}^{\prime}}=\left[\begin{array}{c}\mathbf{z_{t}}\\ 1\end{array}\right]\circ\left[\begin{array}{c}\mathbf{z_{v}}\\ 1\end{array}\right]\circ\left[\begin{array}{c}\mathbf{z_{a}}\\ 1\end{array}\right]. \tag{26}\]
Finally, the TFL processes the feature tensor \(\mathsf{\Sigma}\) to obtain a prediction \(\mathbf{y}\) via a two-layer fully connected NN. Compared to direct concatenation-based fusion, which only considers unimodal interactions [97], the TFL benefits from capturing both unimodal interactions and multimodal interactions.
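A minimal PyTorch sketch of Eqs. (25)-(26) follows; the embedding sizes and the two-layer prediction head are illustrative assumptions.

```python
import torch

# Illustrative modality embeddings (text, visual, acoustic).
z_t, z_v, z_a = torch.randn(32), torch.randn(16), torch.randn(8)
zt1 = torch.cat([z_t, torch.ones(1)])           # append the scalar 1, Eq. (25)
zv1 = torch.cat([z_v, torch.ones(1)])
za1 = torch.cat([z_a, torch.ones(1)])
Z = torch.einsum('i,j,k->ijk', zt1, zv1, za1)   # triple outer product, Eq. (26)
head = torch.nn.Sequential(
    torch.nn.Linear(Z.numel(), 128), torch.nn.ReLU(), torch.nn.Linear(128, 1))
y = head(Z.flatten())
print(Z.shape, y.shape)  # torch.Size([33, 17, 9]) torch.Size([1])
```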
Despite its success, the TFL suffers from exponential increases in its computational complexity and number of parameters when the number of modalities increases. For example, in a multimodal sentiment analysis case [97], the feature tensor \(\mathsf{\Sigma}\in\mathbb{R}^{129\times 33\times 33}\) and the hidden vector \(\mathbf{h}\in\mathbb{R}^{128}\) result in \(17,981,568\) parameters to be optimized. To avoid these excessive parameters, low-rank multimodal fusion (LMF) [99] adopts a special BTT layer to overcome the massive computational cost and overfitting risks of the TFL. For a general situation with \(M\) modalities, the feature tensor \(\mathsf{\Sigma}=\circ_{m=1}^{M}\mathbf{z}_{m}^{{}^{\prime}}\) can be processed, and the hidden vector \(\mathbf{h}\) can be computed as follows:
\[\mathbf{h}=\mathrm{ReLU}\left(\left(\mathbf{z}_{1}^{{}^{\prime}\top}\mathbf{W}_{1}\right)\odot\left(\mathbf{z}_{2}^{{}^{\prime}\top}\mathbf{W}_{2}\right)\odot\cdots\odot\left(\mathbf{z}_{M}^{{}^{\prime}\top}\mathbf{W}_{M}\right)+\mathbf{b}\right),\]
where \(\mathbf{W}_{i}\in\mathbb{R}^{d_{i}\times d_{h}}\) is the modality-specific factor matrix and \(\odot\) denotes the elementwise (Hadamard) product, which plays the role of a CP contraction with an identity core \(I\in\mathbb{R}^{d_{h}\times d_{h}}\). LMF reduces the computational complexity of the TFL from \(O\left(\prod_{m=1}^{M}d_{m}\right)\) to \(O\left(d_{h}\times\sum_{m=1}^{M}d_{m}\right)\).
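The contrast with the TFL can be sketched directly: LMF never materializes the outer-product tensor and instead merges per-modality projections with a Hadamard product; all dimensions below are illustrative assumptions.

```python
import torch

# Illustrative modality sizes and hidden size.
d_t, d_v, d_a, d_h = 32, 16, 8, 64
W_t = torch.randn(d_t + 1, d_h) * 0.1
W_v = torch.randn(d_v + 1, d_h) * 0.1
W_a = torch.randn(d_a + 1, d_h) * 0.1
b = torch.zeros(d_h)

def lmf(z_t, z_v, z_a):
    one = torch.ones(1)
    zt1, zv1, za1 = (torch.cat([z, one]) for z in (z_t, z_v, z_a))
    # project each 1-appended vector, then merge with an elementwise product
    return torch.relu((zt1 @ W_t) * (zv1 @ W_v) * (za1 @ W_a) + b)

h = lmf(torch.randn(d_t), torch.randn(d_v), torch.randn(d_a))
print(h.shape)  # torch.Size([64])
```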
Although LMF and the TFL achieve better fusion results than other methods, they restrict the order of interactions, discarding information carried by higher-order interactions. A polynomial tensor pooling (PTP) [100] block has been proposed to tackle this problem. The whole procedure and TN diagram of PTP are shown in Fig. 8 and Fig. 9, respectively.
PTP first merges all feature vectors \(\left\{\mathbf{z}_{m}\right\}_{m=1}^{M}\) into a long feature vector
\[\mathbf{z}_{12\cdots M}^{\top}=\left[1,\mathbf{z}_{1}^{\top},\mathbf{z}_{2}^ {\top},\cdots,\mathbf{z}_{M}^{\top}\right]. \tag{27}\]
The polynomial feature tensor of degree \(P\) is represented as
\[\mathsf{\Sigma}^{P}=\mathbf{z}_{12\ldots M}\circ\mathbf{z}_{12\ldots M}\circ \cdots\circ\mathbf{z}_{12\cdots M}. \tag{28}\]
PTP [100] then adopts a tensorial layer (e.g., a CP layer) to process the polynomial feature tensor \(\mathsf{\Sigma}^{P}\). The CP layer is represented as
\[\mathbf{h}=\left(\left(\mathbf{z}_{12\ldots M}^{\top}\mathbf{W}_{1}\right)\odot\cdots\odot\left(\mathbf{z}_{12\ldots M}^{\top}\mathbf{W}_{P}\right)\right)\mathbf{\Lambda}, \tag{29}\]
where \(\mathbf{W}_{i}\in\mathbb{R}^{d_{i}\times d_{h}}\) is the weight matrix and \(\mathbf{\Lambda}\in\mathbb{R}^{d_{h}\times d_{h}}\) is a learnable diagonal matrix. The structure of PTP is also equivalent to that of a deep polynomial NN [173]. PTP models all nonlinear high-order interactions. For multimodal time series data, one approach uses a "window" to characterize local correlations and stacks the PTP blocks in multiple layers. Such a model is called a hierarchical polynomial fusion network (HPFN) [100]. The HPFN can recursively process local temporal-modality patterns to achieve a better information fusion effect.
The structure of a single-layer PTP block is similar to that of a shallow convolutional arithmetic circuit (ConvAC) network [107] (see Section 5.3). The only difference between ConvAC and PTP is that the standard ConvAC network processes quantum location features, whereas PTP processes the temporal-modality patterns and polynomial concatenated multimodal features. The
Fig. 7: Illustration of the tensor fusion process in Eq. (26). Different from a TN diagram, each circle corresponds to a value.
HPFN is nearly equivalent to a deeper ConvAC network, and its great expressive power might be implied by their connection. The recursive relationships in deep polynomial NNs have also been found and implemented so that polynomial inputs can be efficiently computed via a hierarchical NN [100]. Chrysos et al. [173] also discovered similar results.
### _Multimodal Pooling-Based Methods_
Another group of information fusion methods originated from VQA tasks [142]. In VQA tasks, the most important aspect is to parameterize the bilinear interactions between visual and textual representations. To address this aspect, several tensor fusion methods have been developed in this area. Multimodal compact bilinear pooling (MCB) [102] is a well-known fusion method for VQA tasks and can be regarded as a special Tucker decomposition-based NN. MCB tries to optimize the simple bilinear fusion operation
\[\mathbf{z}=\mathbf{W}[\mathbf{v}\circ\mathbf{q}], \tag{30}\]
where \(\mathbf{v}\) and \(\mathbf{q}\) are input vectors with different modalities and \(\mathbf{W}\) is a learnable weight matrix. Moreover, MCB optimizes the computational cost of the outer product operation based on the property of the count sketch projection function.
Multimodal low-rank bilinear pooling (MLB) [103] adopts a CP layer in a data fusion step that can be formulated as follows:
\[\mathbf{z}=\mathbf{1}^{T}\left(\mathbf{W}_{v}\mathbf{v}\circ\mathbf{W}_{q} \mathbf{q}\right), \tag{31}\]
where \(\mathbf{W}_{q}\) and \(\mathbf{W}_{v}\) are preprocessing weight matrices for the inputs \(\mathbf{q}\) and \(\mathbf{v}\), respectively, and \(\mathbf{1}\) is a vector in which all values are 1. The structure of the MLB method is a special case of LMF (see Sec. 4.1). MLB fusion methods can also be regarded as simple product pooling when the number of modalities is equal to two.
MUTAN [101] is a generalization of MCB and MLB. MUTAN adopts a Tucker layer to learn the bilinear interactions between visual and textual features:
\[\mathbf{z} =\left(\left(\boldsymbol{\mathcal{T}}_{c}\times_{1}^{1}\left( \mathbf{q}^{\top}\mathbf{W}_{q}\right)\right)\times_{2}^{1}\left(\mathbf{v}^{ \top}\mathbf{W}_{v}\right)\right)\times_{3}^{1}\mathbf{W}_{o},\] \[\mathbf{z} =\left(\boldsymbol{\mathcal{T}}_{c}\times_{1}^{1}\tilde{\mathbf{ q}}\right)\times_{2}^{1}\tilde{\mathbf{v}}, \tag{32}\]
where \(\tilde{\mathbf{q}}=\tanh\left(\mathbf{q}^{\top}\mathbf{W}_{q}\right)\) and \(\tilde{\mathbf{v}}=\tanh\left(\mathbf{v}^{\top}\mathbf{W}_{v}\right)\), \(\mathcal{T}_{c}\) is the fusion weight tensor, and \(\mathbf{W}_{o}\) is the output processing weight matrix. Moreover, MUTAN [101] adopts a low rank for the fusion weight tensor \(\boldsymbol{\mathcal{T}}_{c}\), as follows:
\[\boldsymbol{\mathcal{T}}_{c}[:,:,k]=\sum_{r=1}^{R}\mathbf{m}_{r}^{k}\circ \mathbf{n}_{r}^{k\top}, \tag{33}\]
where \(\mathbf{m}_{r}^{k}\) and \(\mathbf{n}_{r}^{k\top}\) are weight vectors and \(R\) is the number of ranks. MUTAN can represent comprehensive bilinear interactions while maintaining a reasonable model size by factorizing the interaction tensors into interpretable elements.
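A sketch of the Tucker fusion in Eq. (32), without the low-rank constraint of Eq. (33), is given below; all dimensions are illustrative assumptions.

```python
import torch

# Illustrative sizes: input dims, projected dims, core output dim, final dim.
d_q, d_v, t_q, t_v, t_o, d_o = 32, 24, 10, 10, 12, 16
W_q = torch.randn(d_q, t_q) * 0.1
W_v = torch.randn(d_v, t_v) * 0.1
T_c = torch.randn(t_q, t_v, t_o) * 0.1        # Tucker fusion core
W_o = torch.randn(t_o, d_o) * 0.1

q, v = torch.randn(d_q), torch.randn(d_v)
q_, v_ = torch.tanh(q @ W_q), torch.tanh(v @ W_v)
# contract the projected features with the core, then map to the output space
z = torch.einsum('i,ijk,j->k', q_, T_c, v_) @ W_o
print(z.shape)  # torch.Size([16])
```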
Furthermore, compact trilinear interaction (CTI) [104] was proposed to use an attention-like structure. Instead of presenting the given data as a single vector, this method represents every modality as a matrix \(A\in\mathbb{R}^{n_{1}\times d_{a}}\), where \(d_{a}\) corresponds to the feature dimension and \(n_{1}\) denotes the number of states. CTI simultaneously learns high-level trilinear joint representations in VQA tasks and overcomes both the computational complexity and memory issues in trilinear interaction learning [104].
**Remark.** When fusing information in many multimodal tasks, TNNs can achieve promising results with natural multilinear and compact frameworks. However, the high-order outer product operation used in TNNs may cause unexpected computational complexity increases and even unstable numerical properties. Therefore, it is important to consider an efficient algorithm for reducing memory consumption and apply a feasible initialization algorithm (e.g., [114]) to achieve good stability.
## 5 Quantum Circuit Simulation with TNNs
In the past few years, the development of quantum computing theory has attracted much attention [174, 175, 176]. Quantum systems have superiority in terms of parallelism over classic electronic computers [177], so they can achieve algorithms with lower time complexity. For example, Shor's algorithm [178] based on quantum systems is theoretically exponentially faster than the classic prime number decomposition algorithm. Quantum circuits are computational hardware implementations of quantum systems, and they theoretically correspond to TNs and TNNs [179, 180]. Quantum states are mathematical entities of quantum systems and are consistent with higher-order tensors with some constraints [8]. Therefore, TNNs can be used as simulators in classic computers to model realistic quantum circuits [8, 145]. Taking advantage of the ultrahigh parallelism of quantum computing, some special TNNs can be implemented on small, near-term quantum devices [180]. Quantum circuit simulation with TNNs mainly concerns the role of TNs as bridges between classic NNs and QNNs rather than the more general TN-based quantum circuit simulation paradigm. Please refer to other papers [8, 30, 145] if readers are interested in general circuit simulation via TNs. In this section, we use the term "classic data" to denote the data in classic electronic computers. We introduce methods for mapping classic data to quantum states through TNs in Section 5.1, then introduce basic supervised and unsupervised processing methods for the mapped
quantum states in Section 5.2, and finally introduce the famous quantum TNN model, i.e., ConvAC in Section 5.3.
### _Quantum State Embedding for Classic Data_
To process machine learning tasks in a quantum system, the input data should be converted into a linear combination of some quantum states as an orthogonal basis:
\[\ket{\psi}=\sum_{d_{1},\ldots,d_{N}=1}^{M}\mathcal{A}_{d_{1}\ldots d_{N}} \ket{\psi_{d_{1}}}\circ\cdots\circ\ket{\psi_{d_{N}}},\] \[s.t.\quad\sum_{d_{1}\ldots d_{N}=1}^{M}\mathcal{A}_{d_{1}\ldots d_{N}}^{2}=1,\quad\mathcal{A}_{d_{1}\ldots d_{N}}\geq 0, \tag{34}\]
where \(\ket{\cdot}\) is the Dirac notation of a vector with complex values [181], and \(\circ\) denotes the outer product operation. The tensor \(\mathcal{A}\) is the combination coefficient tensor and is always represented and analyzed via a low-rank TN [8]. To embed classic data into a quantum state for adapting quantum systems, Stoudenmire and Schwab [105] proposed a quantum state mapping function \(\phi^{i}(x_{i})\) for the \(i\)-th pixel \(x_{i}\) in a grayscale image as
\[\phi^{i}(x_{i})=[\cos(\frac{\pi}{2}x_{i}),\sin(\frac{\pi}{2}x_{i})]. \tag{35}\]
Pixel values are first normalized to the range \([0.0,1.0]\) before being mapped. Furthermore, a full grayscale image \(\mathbf{x}\) can be represented as outer products of the mapped quantum states of each pixel:
\[\Phi^{1,2,\ldots N}(\mathbf{x})=\phi^{1}(x_{1})\circ\phi^{2}(x_{2})\circ \cdots\phi^{N}(x_{N}), \tag{36}\]
where \(\Phi^{1,2,\ldots N}(\mathbf{x})\in\mathbb{R}^{\overbrace{2\times 2\cdots \times 2}^{N}}\). Through Eq. (36), it is feasible to associate realistic images with real quantum systems.
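The feature map of Eqs. (35)-(36) is straightforward to reproduce; the sketch below embeds a toy four-pixel "image", where the pixel intensities are illustrative assumptions.

```python
import numpy as np

# Four normalized pixel intensities standing in for a tiny grayscale image.
pixels = np.array([0.1, 0.4, 0.7, 0.9])

def phi(x):
    # Eq. (35): each pixel becomes a 2-dim unit vector.
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

state = phi(pixels[0])
for x in pixels[1:]:
    state = np.multiply.outer(state, phi(x))    # outer product, Eq. (36)
print(state.shape, np.linalg.norm(state))       # (2, 2, 2, 2) 1.0
```

Since every factor has unit norm, the embedded state is itself a unit-norm tensor, consistent with its interpretation as a quantum state.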
For a natural language document, the \(i\)-th word \(\ket{x_{i}}\) can also be represented as the sum of orthogonal quantum state bases \(\ket{\phi_{h_{i}}}(h_{i}=1,\ldots,M)\)[106, 107, 108, 182], each corresponding to one of \(M\) specific semantic meanings:
\[\ket{x_{i}}=\sum_{h_{i}=1}^{M}\alpha_{i,h_{i}}\ket{\phi_{h_{i}}},\] \[s.t\quad\sum_{h_{i}=1}^{M}\alpha_{i,h_{i}}^{2}=1,\quad\alpha_{i, h_{i}}\geq 0, \tag{37}\]
where \(\alpha_{i,h_{i}}\) is the associated combination coefficient for each semantic meaning, subject to the normalization constraint in Eq. (37). After completing data mapping, the embedded quantum data can be processed by TNNs on a realistic quantum circuit, as shown in Fig. 10. The loss functions of TNNs can also be defined through the properties of quantum circuits. Such a procedure can be simulated on classic electronic computers via TNs and can be theoretically efficiently implemented on realistic quantum systems.
### _Embedded Quantum Data Processing_
Two series of learning methods are important and have potential for designing and optimizing TNNs, which can be implemented on realistic quantum circuits. One involves applying the density matrix renormalization group (DMRG) algorithm [8, 184] to train supervised models. The other adopts the ideas of the Born machine [185] to learn data distributions via an unsupervised procedure. We introduce these methods in the next part.
#### 5.2.1 Supervised TN Models
Supervised models are used to model the conditional probability distributions of labels (output) given input features based on example input-output pairs. Taking embedded quantum data as inputs, Stoudenmire and Schwab [105] proposed supervised MPS-like tensorial multilinear models and adopted DMRG-like algorithms to optimize the model weights. Prior to introducing a specific implementation, their models must first be formulated as procedures that optimize a set of functions indexed by different labels \(\ell\):
\[f^{\boldsymbol{\ell}}(\mathbf{x})=\boldsymbol{\mathcal{W}}^{ \boldsymbol{\ell}}\times_{1,2\cdots N}^{1,2\cdots N}\Phi(\mathbf{x}), \tag{38}\]
where \(\Phi(\cdot)\) is the feature map function in Eq. (36) and \(\boldsymbol{\mathcal{W}}^{\boldsymbol{\ell}}\in\mathbb{R}^{\overbrace{2\times 2 \cdots\times 2}^{N}}\) is the weight tensor. Then, the aforementioned tensorial models can be derived by replacing \(\boldsymbol{\mathcal{W}}^{\boldsymbol{\ell}}\) with an MPS TN:
\[\boldsymbol{\mathcal{W}}^{\boldsymbol{\ell}}_{s_{1}s_{2}\cdots s_ {N}}=\sum_{\{\alpha\}}\mathcal{A}^{(1)}_{s_{1},\alpha_{1}}\cdots\mathcal{A}^{( m)}_{\alpha_{m-1},s_{m},\alpha_{m}}\cdots\mathcal{A}^{(N)}_{\alpha_{N-1},s_{N}}, \tag{39}\]
where \(\mathcal{A}^{(1)}\cdots\mathcal{A}^{(N)}\) are core tensors. Furthermore, the authors proposed an optimization algorithm named Sweeping that was motivated by the DMRG algorithm [8] in quantum mechanics. The optimization algorithm sweeps along an MPS to optimize the quadratic cost function:
\[c=\frac{1}{2}\sum_{n=1}^{N_{T}}\sum_{\ell}\left(f^{\ell}\left( \mathbf{x}_{n}\right)-\mathbf{y}_{n\ell}\right)^{2}, \tag{40}\]
Fig. 10: The processing procedure employed for quantum embedded data [183]. Quantum circuits can be simulated via TNNs on classic electronic computers, and some special TNNs (such as ConvAC) can also be theoretically implemented on a realistic quantum circuit.
Fig. 9: TN diagrams of PTP. CP and TR structures can be adopted in such a strategy.
where \(N_{T}\) denotes the number of training samples, and \(\mathbf{y}_{n}\) denotes the true one-hot label vector of \(\mathbf{x}_{n}\). The optimization process is carried out to minimize this cost function in stages with stochastic gradient descent. A single stage is shown in Fig. 11. In each stage, two MPS tensors \(\mathbf{\mathcal{A}}^{(2)}\) and \(\mathbf{\mathcal{A}}^{(3)}\) are combined into a single bond tensor \(\mathbf{\mathcal{V}}\) via tensor contraction. Then, the tensor \(\mathbf{\mathcal{V}}\) is updated with gradients. Finally, \(\mathbf{\mathcal{V}}\) is decomposed back into separate tensors with the SVD algorithm. The Sweeping method is efficient in optimizing models whose inputs are embedded quantum data, and it can also be adopted to train TNNs. In addition, other TNs, including the PEPS structure [186], can be processed with a Sweeping-like method.
#### 5.2.2 Unsupervised TN Models
The goal of unsupervised generative modeling is to model the joint probability distribution of the given data. Generative adversarial networks (GANs) [187] and variational autoencoders (VAEs) [188] are successful NN models for modeling classic data distributions. Embedded quantum data are the key to training and designing quantum TNNs that generate probability distributions via TNs. Inspired by the probabilistic interpretation of quantum states in quantum mechanics [189], an MPS-based generative model called the Born machine [185] was proposed. The Born machine is an energy-based model [190] derived from quantum mechanics. The distribution function of the Born machine is as follows:
\[P(\mathbf{x})=\frac{|\Psi(\mathbf{x})|^{2}}{Z}, \tag{41}\]
where \(Z=\sum_{x}|\Psi(x)|^{2}\) is the normalization factor, \(\Psi(\cdot)\) is the quantum state embedding function, and the energy function of \(\mathbf{x}\) can be represented as \(|\Psi(\mathbf{x})|^{2}\) in view of quantum mechanics. \(\Psi(\mathbf{x})\) can be parameterized via a TN format [185]. The learning procedure can also be conducted via a DMRG-like algorithm and a gauge transformation strategy [191]. The distribution definitions in Eq. (41) are also useful for designing the loss functions of quantum TNNs. Furthermore, a series of Born machine structures have been proposed, including tree TNs [109], uniform MPSs for language generation [108], and locally purified states [110]. In the future, it is expected that representing and processing data via a particular quantum view will pave the way for the further implementation of TNNs on realistic quantum computers [109].
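A toy sketch of Eq. (41) with an MPS-parameterized \(\Psi\) follows; the number of variables, the bond dimension, and the brute-force normalization (feasible only at this tiny scale) are illustrative assumptions.

```python
import itertools
import numpy as np

# An MPS Born machine over N binary variables with bond dimension R.
N, R = 6, 3
rng = np.random.default_rng(0)
cores = [rng.standard_normal((1 if n == 0 else R, 2, 1 if n == N - 1 else R))
         for n in range(N)]

def psi(x):
    """Amplitude Psi(x): contract the MPS cores selected by the bits of x."""
    m = cores[0][:, x[0], :]
    for n in range(1, N):
        m = m @ cores[n][:, x[n], :]
    return m[0, 0]

# brute-force partition function Z over all 2^N configurations
Z = sum(psi(x) ** 2 for x in itertools.product([0, 1], repeat=N))
x = (0, 1, 1, 0, 1, 0)
print(psi(x) ** 2 / Z)   # Born probability of one configuration, Eq. (41)
```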
### _ConvAC Network_
The expressive power of previously developed quantum data processing models, e.g., the MPS models in Section 5.2.1 and the Born machine in Section 5.2.2, suffers from a lack of nonlinearity. Classic nonlinear operators, e.g., activation functions (such as the rectified linear unit (ReLU) function) and average/max pooling, can significantly benefit model performance. However, classic nonlinearity cannot be directly implemented in a quantum circuit. To solve this problem, the ConvAC network [111, 192] was proposed to adopt quantum deployable product pooling as a nonlinear operator, proving that ConvAC can be transformed into ConvNets with ReLU activations and average/max pooling.
The whole structure of ConvAC can be represented by an HT format and has been proven to be theoretically deployable in realistic quantum systems. A tensor diagram example of ConvAC is shown in Fig. 12, and one hidden layer of ConvAC is in a CP format. ConvAC can also handle language data [107] by mapping natural language sentences into quantum states via Eq. (37). ConvAC is a milestone in that deep convolutional networks, along with nonlinear modules, are implemented on quantum circuits. It serves as an inspiration for the integration of more NNs into quantum systems. For instance, Zhang et al. [112] introduced the tensor space language model (TSLM), which has been shown to be a generalization of the n-gram language model.
**Remark.** The implementation of quantum TNNs is a symbolic milestone that forms a bridge between QNNs and classic NNs via TNs. However, strict mapping algorithms between simulated quantum TNNs and realistic physical systems still need to be explored. Moreover, as real high-performance quantum computers are still a long way from being developed, verifying the performance of quantum TNNs on real hardware is infeasible in the near future. Despite these issues, it is still important and meaningful to focus on mapping algorithms and performance verification for QNNs.
## 6 Training Strategies for TNNs
While the aforementioned TNNs can perform well on various tasks and machines, it is also worth exploring training strategies with more stability, better performance and higher efficiency. In this section, we introduce such strategies in three groups: (1) strategies for stabilizing the training processes of TNNs are presented in Section 6.1, (2) strategies for selecting and searching the ranks of TNNs are provided in Section 6.2, and (3) strategies for applying hardware speedup are shown in Section 6.3.
### _Stable Training Approaches_
Despite a variety of successes, TNNs still suffer from training problems due to their multilinear characteristics. Compared to
Fig. 11: A single stage of the Sweeping method [105]. In each stage, Sweeping only updates the nodes in the sweeping window, which shifts along a zigzag trajectory.
Fig. 12: ConvAC is equivalent to an HT-like TN [180]. \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\in\mathbb{R}^{s}\), where \(\mathbf{x}_{j}\) corresponds to a local patch from the input image or the feature map, and \(v^{(0,j)}\) is the linear transformation of \(\mathbf{x}_{j}\). The width is 2 for a single block. Notably, a single layer is equivalent to a CP format.
simple linear operations such as matrix multiplication, tensor contraction yields data flows with exponential scales, i.e., features in forward propagation and gradients in backward propagation, when the number of modes increases linearly [114]. One solution is to utilize the full-precision float64 format to represent large weights, which can alleviate these numerical issues to some extent. However, a full-precision format results in more calculations and higher time consumption compared to a lower-precision format, e.g., float16. Nevertheless, low precision may cause numerical stability issues. To solve these problems, Panagakis et al. [113] proposed a mixed-precision strategy to form a tradeoff. This dynamic precision strategy is efficient in reducing memory occupation and promotes training stability.
Another feasible way to solve the training problem lies in developing a suitable initialization. Commonly used adaptive initialization methods include Xavier [195] initialization and Kaiming [196] initialization, which regulate the variances of the data flows in layers. However, these two initializations cannot calculate the correct scales for TNNs because they neglect the interactions introduced by tensor contractions. Furthermore, tensor formats differ from each other, making it difficult to develop a general initialization that fits diverse tensorial layers. To solve these two problems, Yu initialization [114] proposed a unified initialization paradigm based on Xavier to adaptively initialize arbitrary TCNNs. Specifically, Pan et al. extracted a backbone graph (BG) from a tensorial convolution hypergraph [61] and then encoded an arbitrary TCNN into an adjacency matrix with this BG. Finally, a suitable initial variance for a TCNN can be directly calculated through the adjacency matrix. We illustrate three cases of using unified initializations in Fig. 13. Although Yu initialization was proposed for TCNNs, it can also be widely used in other NNs, including CNNs, RNNs, and Transformers, since these models' basic layers, i.e., convolutional and linear layers, belong to the scope of TCNNs.
### _Rank Selection and Search_
Prior studies [67, 69, 79] focused on finding efficient TN formats (e.g., TTs and TRs) for compressing NNs, achieving significant efficiency thanks to their naturally compact structures. However, despite these remarkable successes, efficient algorithms for adjusting or selecting suitable ranks for a TN are lacking since rank selection is an NP-hard problem [153]. As a result, many approaches [64, 68, 77, 69] can only set values for all ranks manually, which severely affects the resulting models' training procedures. Fortunately, the rank selection problem can still be optimized through heuristic strategies, such as Bayesian optimization [118], reinforcement learning (RL) [116] and evolutionary algorithms (EAs) [115]. Here, we introduce some rank selection methods for TNNs.
DNNs utilize neural architecture search (NAS) [197] to search for the optimal network hyperparameters, achieving significant success. As ranks can be treated as architecture hyperparameters, NAS is applicable to searching for optimal tensorial layers with better rank settings. Following this idea, the progressive searching TR network (PSTRN) [115] employs NAS with an EA to select suitable ranks for a TR network (TRN). In detail, the PSTRN employs a heuristic hypothesis for searching: "when a shape-fixed TRN performs well, part or all of its rank elements are sensitive, and each of them tends to aggregate in a narrow region, which is called an interest region". Instructed by the interest region hypothesis, the PSTRN can reach the optimal point with a higher probability than a plain EA method. The PSTRN consists of an evolutionary phase and a progressive phase. During the evolutionary phase, this method validates the ranks in the search space on benchmarks and picks the rank that yields the best performance. Then, in the progressive phase, the PSTRN samples new ranks around the previously picked rank and inserts them into a new search space. After several rounds, the heuristic EA can find a high-performance solution. With such an efficient design, the PSTRN successfully achieves better performance than hand-setting, which demonstrates that its hypothesis is practical.
In addition to NAS, some other efficient methods are also available for rank selection. Zhao et al. [117] inferred a CP rank by implementing a reduction process on a large rank value via a variational Bayesian optimization procedure. Hawkins and Zhang [118] extended this CP procedure [117] to TT-based TNNs and adopted the Stein variational gradient descent method, which combines the flexibility of the Markov chain Monte Carlo (MCMC) approach with the speed of variational Bayesian inference, to construct a Bayesian optimization method. For pretrained networks, Kim et al. [120] and Gusak et al. [121] derive approximate ranks by applying Bayesian matrix factorization (BMF) [198] to unfolded weight tensors. Unlike Bayesian methods, Cheng et al. [116] treated the rank searching task as a game process whose search space is irregular, thus applying RL to find comparably suitable ranks for a trained CNN. However, this algorithm is TD-dependent, which indicates that its performance may be influenced by the selected TD method. Yin et al. [119] leveraged the alternating direction
Fig. 13: Three cases of unified TCNN initialization [114]. \(\sigma^{2}\) denotes the initial variance of each weight vertex. \(\mathbf{G}_{f}\) denotes a forward procedure, and \(\mathbf{G}_{b}\) denotes a backward procedure. (i) Standard convolution. The method in [114] degenerates to Xavier/Kaiming initialization on the standard convolution for the same weight variance formulation. (ii) Hyper Tucker-2 (HTK2) convolution. Tucker-2 (TK2) is a common TD that is utilized in ResNet as the bottleneck module [193]. HTK2 is formed by applying a hyperedge to the weight vertices of TK2. (iii) Odd convolution. The odd TD was originally proposed by [194]. The connections among the vertices are irregular, making weight initialization a complex problem. These three successful initialization cases can better demonstrate the potential adaptability of unified initialization to diverse TCNNs.
method of multipliers (ADMM) to gradually transfer the original weight to a low-rank representation (i.e., a TT).
### _Hardware Speedup_
Accelerating the training and inference procedures of TNNs can reduce resource consumption and shorten experimental cycles, thereby yielding economic gains and greener research. A direct and effective approach is to optimize the speed of tensor operations in TNNs to realize hardware acceleration. As inferring TT-format TNNs inevitably results in enormous quantities of redundant calculations, the TIE scheme [122] was proposed to accelerate TT layers by splitting the working SRAM into numerous groups with a well-designed data selection mechanism. Huang et al. [123] designed a parallel computation scheme with higher I/O bandwidth, improving the speed of tensor contractions. Later, they proposed the LTNN [123] to map TT-format TNNs onto a 3D accelerator based on CMOS-RRAM, leading to significantly increased bandwidth via vertical I/O connections. As a result, they simultaneously attained high throughput and low power consumption for TNNs. Recently, Qu et al. [124] proposed a spatial 2D processing element (PE) array architecture and built a hardware TT engine consisting of off-chip DRAM. Kao et al. [125] proposed an energy-efficient hardware accelerator for CP convolution with a mixing method that combines the Walsh-Hadamard transform and the discrete cosine transform.
Many more fascinating methods have been developed for the acceleration of generic tensor operations, which are correlated with TNNs. For instance, Huang et al. [199] observed that the tensor matricization operation is usually resource-consuming since its DRAM access is built on a random reading address; thus, they proposed a tensor storage scheme with a sequential address design for better DRAM accessibility. Both T2s-tensor [200] and Tensaurus [201] mainly focus on designing general computation kernels for dense and sparse tensor data. Xie et al. [149] and Liang et al. [202] accelerated search procedures for obtaining an optimal sequence of tensor contractions. Xie et al. [149] solved the massive computational complexity problem of double-layer TN contraction in quantum analysis and mapped such a double-layer TN onto an intersected single-layer TN. Liang et al. [202] implemented multithread optimization to improve the parallelism of contractions. Fawzi et al. [203] also illustrated the potential of RL to build efficient universal tensor operations. In the future, it is expected that more general hardware acceleration schemes based on tensor operations will be developed to implement TNNs with smaller storage and time consumption levels.
**Remark.** The comments are divided into three parts. (1) To achieve training stability, it is possible to borrow ideas concerning identity transition maintenance to construct more stable initializations. In addition, it is also feasible to add adversarial examples to enhance network robustness. (2) Rank search is important for further improving the performance of TNNs. However, as it is an NP-hard problem, rank search has not been sufficiently explored. In the future, suitable ranks could be searched under the guidance of gradient sizes, and EAs could be used to search for TNN architectures. (3) Lastly, research on hardware has achieved some success in terms of speed acceleration and memory reduction. However, these methods are mostly ad hoc designs for specific TD formats, so they lack applicability to other TNN structures.
## 7 TNN Toolboxes
In 1973, Pereyra and Scherer [204], as pioneers in this field, developed a programming technique for basic tensor operations. Recently, with the development of modern computers, many more basic tensor operation toolboxes have been developed, and a series of powerful TNN toolboxes have also been proposed for both network compression and quantum circuit simulation, which are the two main applications of TNNs. In this section, toolboxes for TNNs are presented in three categories according to their design purposes: (1) toolboxes for basic tensor operations contain important and fundamental operations (e.g., tensor contraction and permutation) in TNNs (Section 7.1); (2) toolboxes for network compression are high-level TNN architecture toolboxes based on other basic operation tools (Section 7.2); and (3) toolboxes for quantum circuit simulation are software packages for the quantum circuit simulation or quantum machine learning processes that use TNs from a quantum perspective (Section 7.3).
### _Toolboxes for Basic Tensor Operations_
Toolboxes for basic tensor operations aim to implement some specific TD algorithms. Many basic tensor toolboxes based on different programming languages and backends have been designed for this purpose. For example, The online stochastic framework for TD (OSTD) [130] and Tensor Toolbox [128] were constructed for low-rank decomposition and implemented with MATLAB. Regarding Python-based toolboxes, TensorTools based on NumPy [205] implements CP only, while T3F [136] was explicitly designed for TT decomposition on TensorFlow [206]. Similarly, based on TensorFlow, TensorD [131] supports CP and Tucker decomposition. Tntorch [133] is a PyTorch-based library for tensor modeling in the CP, Tucker and TT formats. TorchMPS [134], TT-Toolbox [132] and Scikit-TT [138] are all powerful Python-based specific TT solvers that efficiently implement the DMRG algorithm. Tensorly is a powerful general TD library that supports many decomposition formats and various Python backends including CuPy, Pytorch, TensorFlow and MXNet [207]. TensorNetwork [137] is a powerful general-purpose TN library that supports a variety of Python backends, including JAX, TensorFlow, PyTorch and NumPy. In addition, some toolboxes based on C++ are also available. TenDec++ [208] leverages a unique pointer technology called PointerDeformer in C++ to support the efficient computation of TD functions. ITensor [135] is an efficient and flexible C++ library for general TN calculations.
### _Toolboxes for Network Compression_
Specific TNN toolboxes are used to assist with the development of tensorial layers. Although some general tensor toolboxes such as Tensorly [126] are powerful for TD processing and can use their TD operations to help initialize TNN modules to a certain extent, they still lack support for application programming interfaces (APIs) for building TNNs directly. Therefore, a TNN library (Tensorly-Torch) based on Tensorly was developed to build some tensor layers within any PyTorch network. Pan et al. also developed a powerful TNN library called TedNet [64]. TedNet can quickly set up TNN layers by directly calling the API. In addition, TedNet supports the construction of TCNNs and TRNNs in single lines of code.
### _Toolboxes for Quantum Circuit Simulation_
A number of quantum circuit simulation toolboxes have been designed. For example, some TT toolboxes such as Scikit-TT and TorchMPS can partially simulate quantum circuits to some extent, although they were not specifically designed for quantum circuit
simulation. In contrast, general TN toolboxes, e.g., TensorNetwork and ITensor, can simulate any quantum circuit. In addition, with optimized tensor contraction, TeD-Q [141], a TN-enhanced open-source software framework for quantum machine learning, enables the simulation of large quantum circuits. Furthermore, Yao [139], an extensible and efficient library for designing quantum algorithms, can provide support for dumping a quantum circuit into a TN. Although no practical implementations of quantum TNNs are available, these quantum circuit simulations are potentially useful for the simulation of quantum TNNs.
**Remark.** Despite the success of current toolboxes, some areas for improvement remain. (1) Existing basic tensor operation toolboxes are built on high-level software frameworks, limiting their ability to fully exploit the inherent capability of tensor computations. (2) Existing deep model implementation toolboxes for TNNs contain only a limited number of predefined TNN structures and do not allow users to design new structures freely. (3) Existing quantum simulation toolboxes focus more on the simulation of quantum circuits using TNs and do not facilitate the processing of embedded quantum data via TNNs.
## 8 Conclusion and Future Prospects
This survey embraces the connection between TNs and NNs, summarizing the techniques for compressing NN parameters, modeling the interactions between different dimensions of data, and bridging QNNs with classic NNs. As previously noted, TNNs have considerable strengths and immense potential, making them applicable in many areas. In the future, research on TNNs can benefit from the optimization and implementation of tensor-friendly hardware; we believe that TNNs will have large impacts on studies involving quantum physics and quantum computers.
**Acceleration based on hardware design.** Although many TNNs have low calculation complexity levels in theory, realistic hardware deployments usually fall short of this objective due to their numerous permutation operations [64] and the absence of sufficient parallelism [123]. Efficient hardware and matching software for universal tensor acceleration or TNN acceleration can be developed. As mentioned in Section 6.3, the existing tensor acceleration software and hardware structures are all aimed at certain TN structures or are based on parallel matrix acceleration. The acceleration of general TN operations is necessary and urgent for the implementation of TNNs.
**Applications in quantum physics.** In specific physical applications that need to deal with large-scale tensors, such as wave function simulation [209], specifically designed TNNs can be studied to efficiently solve problems involving higher-order interactions. Inspired by the universal approximation theorem, some simple NNs have been adopted in wave function simulation tasks, such as free boson and fermion systems [210]. However, because of the curse of dimensionality, simple NNs are difficult to apply to extremely large-scale wave function simulation tasks, whereas TNNs can easily handle tasks involving large-scale tensors thanks to the compact nature of TNs.
**Implementations in quantum mechanics.** The existing TNNs mainly adopt the mathematical forms of TNs and seldom consider the physical properties of the quantum systems described by these TNs [113, 112]. Since TNNs are highly related to quantum circuit structures, it is possible to obtain more efficient and effective TNNs by applying the concepts and theory of TNs in quantum mechanics. For example, optimizing and interpreting TNNs from the perspective of entanglement entropy theory [211] could be a meaningful direction for producing interpretable and efficient NNs.
## Acknowledgments
This paper was partially supported by National Key Research and Development Program of China (No. 2018AAA0100204), a key program of fundamental research from Shenzhen Science and Technology Innovation Commission (No. JCYJ20200109113403826), and the Major Key Project of PCL (No. PCL2021A06).
|
2303.08760 | Deep Calibration With Artificial Neural Network: A Performance
Comparison on Option Pricing Models | This paper explores Artificial Neural Network (ANN) as a model-free solution
for a calibration algorithm of option pricing models. We construct ANNs to
calibrate parameters for two well-known GARCH-type option pricing models:
Duan's GARCH and the classical tempered stable GARCH that significantly improve
upon the limitation of the Black-Scholes model but have suffered from
computation complexity. To mitigate this technical difficulty, we train ANNs
with a dataset generated by Monte Carlo Simulation (MCS) method and apply them
to calibrate optimal parameters. The performance results indicate that the ANN
approach consistently outperforms MCS and takes advantage of faster computation
times once trained. The Greeks of options are also discussed. | Young Shin Kim, Hyangju Kim, Jaehyung Choi | 2023-03-15T16:57:10Z | http://arxiv.org/abs/2303.08760v1 | # Deep Calibration With Artificial Neural Network: A Performance Comparison on Option Pricing Models
###### Abstract
This paper explores Artificial Neural Network (ANN) as a model-free solution for a calibration algorithm of option pricing models. We construct ANNs to calibrate parameters for two well-known GARCH-type option pricing models: Duan's GARCH and the classical tempered stable GARCH that significantly improve upon the limitation of the Black-Scholes model but have suffered from computation complexity. To mitigate this technical difficulty, we train ANNs with a dataset generated by Monte Carlo Simulation (MCS) method and apply them to calibrate optimal parameters. The performance results indicate that the ANN approach consistently outperforms MCS and takes advantage of faster computation times once trained. The Greeks of options are also discussed.
_JEL classification:_ C15, C63, C65, G130
keywords: Deep calibration, Artificial Neural Network (ANN), Feedforward Neural Network (FNN), Option pricing models, GARCH model, Duan's GARCH model, Tempered stable model +
## 1 Introduction
Option pricing has been a central topic in quantitative finance for the past few decades, with remarkable growth in the option market. Since the seminal work of Black and Scholes (1973) and Merton (1973), the Black-Scholes model has remained the most fundamental model for option pricing. However, its restrictive assumptions, such as constant volatility or Geometric Brownian Motion (GBM), have been criticized for not reflecting the empirical characteristics of financial markets.
Many subsequent models have since been proposed to relax the assumptions of the Black-Scholes model. One successful approach employs stochastic volatility under the generalized autoregressive conditional heteroskedastic (GARCH) framework. An early attempt was made by Engle and Mustafa (1992), focusing on implied conditional volatilities. Subsequently, Duan (1995) developed a more rigorous framework for the GARCH option pricing model using the locally risk-neutral valuation relationship, under which the one-period-ahead conditional variance remains the same under both the risk-neutral measure and the physical measure. The model was later extended to the jump-diffusion setting in Duan _et al._ (2004) and Duan _et al._ (2006). Duan's GARCH model assumes that the residuals follow a normal distribution; however, empirical evidence shows that normally distributed residuals do not properly describe asset return dynamics (Duan (1999) and Menn and Rachev (2009)).
Another approach, based on the Levy distribution, has been developed to allow for jumps, skewness, kurtosis, and heavy tails in the underlying distribution and thereby overcome the flaws of the GBM assumption. It is also referred to as a stable distribution, and one well-known subclass is the classical tempered stable (CTS) distribution.1 The CTS distribution has been researched in several directions, such as portfolio management (Tsuchida _et al._ (2012); Beck _et al._ (2013); Georgiev _et al._
(2015); Anand _et al._ (2016); Choi _et al._ (2021)) and momentum strategy (Choi _et al._ (2015)).
As a consolidation of these two approaches in the context of option pricing, Kim _et al._ (2008a), Kim _et al._ (2010a) and Kim _et al._ (2022) enhanced Duan's GARCH model by incorporating the classical tempered stable (CTS) distribution 2 and referred to it as the CTS-GARCH model. The CTS-GARCH model, which takes into account the non-normality in its innovation process, is considered one of the most advanced option pricing models as it addresses two key limitations of the Black-Scholes model simultaneously (Kim _et al._ (2010b)).
Footnote 2: The CTS distribution has been studied under different names including the _truncated Lévy flight_ by Koponen (1995), the _tempered stable_ by Barndorff-Nielsen and Levendorskii (2001), Barndorff-Nielsen and Shephard (2001) and Cont and Tankov (2004), the _KoBoL_ distribution by Boyarchenko and Levendorskii (2000), and the _CGMY_ by Carr _et al._ (2002). The KR distribution of Kim _et al._ (2008b) is an extension of the CTS distribution. Rosinski (2007) generalized CTS distribution, referred to as the tempered stable distribution.
However, these models are mostly high-dimensional, and this becomes problematic at the parameter calibration stage, where parameters need to be tuned so that the model output is as close as possible to the market price. Hence, the Black-Scholes model has remained practically useful even though multidimensional models perform better.
Recently, machine learning has shed a different light on the curse of dimensionality as a non-parametric or model-free solution. This is a rapidly evolving area with the maturation of computation power and technical advances in algorithms.
One of the main pillars of machine learning is the Artificial Neural Network (ANN). ANNs, also referred to as neural networks or multilayer perceptrons, were introduced by McCulloch and Pitts (1943). ANNs are composed of an input layer, multiple hidden layers, and an output layer, where each hidden layer performs a vector-to-vector or vector-to-scalar calculation to best approximate the output. The parallel structure of hidden layers allows the use of multiple processors, hence ANNs are considerably faster than traditional algorithms in computation speed. With this advantage, ANNs have developed rapidly, and different types have been proposed, such as the multilayer perceptron neural network, convolutional neural network, radial basis function neural network, recurrent neural network, modular neural network, and so on.
Many studies in finance have also sought ANN applications. In particular, the ANN has received attention as a promising remedy for the chronic computational difficulties of option pricing models, which are built upon multidimensional nonlinear models. The early attempts applied the ANN as a functional approximator for the Black-Scholes formula, as suggested by Malliaris and Salchenberger (1993) and Hutchinson _et al._ (1994). Since then, over a hundred papers have studied ANNs for option pricing with various parameter inputs and performance measures (Ruf and Wang (2019)).
While most of these studies utilize the ANN to estimate the option price based on parameters from the market data and measure performance by out-of-sample tests, a different approach was recently proposed by Bayer _et al._ (2019) and Alaya _et al._ (2021). Rather than focusing on approximating the pricing formula, the ANN is applied at the calibration stage to find model parameters. Since the training sets are generated by the Monte Carlo Simulation (MCS) method, a sounder dataset that avoids market incompleteness is available. The ability to utilize arbitrarily large datasets is another advantage.
This paper also focuses on the ANN as an alternative calibration method. We generate training sets of 100,000 simulated vectors of parameters and prices using the MCS method for each of Duan's GARCH and CTS-GARCH models, and then train ANNs consisting of three hidden layers with twenty nodes per layer. Specifically, a feedforward neural network is used, which does not include any cycles or loops. The trained ANNs are utilized in the calibration of S&P 500 index call and put option prices from every second Wednesday of each month between June 2021 and May 2022. We find that this approach not only delivers performance superior to the MCS method but also runs remarkably faster. Additionally, the option Greeks are computed using the ANN method.
The remainder of this paper is organized as follows. We discuss the GARCH option pricing models in Section 2. Section 3 presents the construction and training methods for ANN. The performance of ANN in terms of calibration is investigated using empirical data in Section 4. Finally, Section 5 provides our conclusions.
## 2 Preliminaries
In this section, we revisit Duan's GARCH and CTS-GARCH option pricing models that feature _infinitely divisible_ innovations. An infinitely divisible random variable is one that can, for every \(n\), be represented as the sum of \(n\) independent, identically distributed random variables. This concept plays a significant role in the study of stable distributions and option pricing models.
We begin by reviewing the CTS distribution as an example of an infinitely divisible distribution, followed by an overview of Duan's GARCH and CTS-GARCH option pricing models. Lastly, we outline European call and put option pricing using the MCS method for both GARCH option pricing models.
### CTS Distribution
\(X\) is referred to as the _Classical Tempered Stable distributed random variable_ and denoted by \(X\sim\text{CTS}(\alpha\), \(C\), \(\lambda_{+}\), \(\lambda_{-}\), \(m)\)(Rachev _et al._ (2011) and Kim _et al._ (2010b)) if the characteristic function of the distribution is expressed as
\[\begin{split}& E[e^{iuX}]=\phi_{\text{CTS}}(u;\alpha,C,\lambda_{+}, \lambda_{-},m)\\ &=\exp\left((m-C\Gamma(1-\alpha)(\lambda_{+}^{\alpha-1}-\lambda_{ -}^{\alpha-1}))iu-C\Gamma(-\alpha)\left((\lambda_{+}-iu)^{\alpha}-\lambda_{+}^ {\alpha}+(\lambda_{-}+iu)^{\alpha}-\lambda_{-}^{\alpha}\right)\right)\end{split} \tag{1}\]
where \(C,\lambda_{+},\lambda_{-}\) are positive, \(0<\alpha<2\), \(m\in\mathbb{R}\) and \(\Gamma\) is the gamma function.
If we substitute \(C=(\Gamma(2-\alpha)(\lambda_{+}^{\alpha-2}+\lambda_{-}^{\alpha-2}))^{-1}\) and \(m=0\) for \(Z\sim\text{CTS}(\alpha\), \(C\), \(\lambda_{+}\), \(\lambda_{-}\), \(m)\), then we have \(E[Z]=0\) and \(\text{var}(Z)=1\). In this case, the random variable \(Z\) is called the _standard CTS distribution_, and \(Z\sim\text{stdCTS}(\alpha\), \(\lambda_{+}\), \(\lambda_{-})\). The characteristic function of \(Z\) is given by
\[\begin{split}& E[e^{iuZ}]=\phi_{\text{stdCTS}}(u;\alpha,\lambda_{+}, \lambda_{-})\\ &=\exp\left(\frac{\lambda_{+}^{\alpha-1}-\lambda_{-}^{\alpha-1}}{ (\alpha-1)(\lambda_{+}^{\alpha-2}+\lambda_{-}^{\alpha-2})}iu+\frac{(\lambda_{+ }-iu)^{\alpha}-\lambda_{+}^{\alpha}+(\lambda_{-}+iu)^{\alpha}-\lambda_{-}^{ \alpha}}{\alpha(\alpha-1)(\lambda_{+}^{\alpha-2}+\lambda_{-}^{\alpha-2})} \right).\end{split} \tag{2}\]
We denote the function of log-Laplace transform of \(Z\) as follows (Kim _et al._ (2010b)):
\[l(x) :=\log\left(\phi_{\text{stdCTS}}(-ix;\alpha,\lambda_{+},\lambda_{-})\right) \tag{3}\] \[=\frac{x(\lambda_{+}^{\alpha-1}-\lambda_{-}^{\alpha-1})}{(\alpha- 1)(\lambda_{+}^{\alpha-2}+\lambda_{-}^{\alpha-2})}+\frac{(\lambda_{+}-x)^{ \alpha}-\lambda_{+}^{\alpha}+(\lambda_{-}+x)^{\alpha}-\lambda_{-}^{\alpha}}{ \alpha(\alpha-1)(\lambda_{+}^{\alpha-2}+\lambda_{-}^{\alpha-2})}.\]
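For reference, Eqs. (2) and (3) translate directly into code; the sketch below is a plain NumPy transcription (valid for \(\alpha\neq 1\)), with illustrative parameter values not tied to any calibration result in this paper.

```python
import numpy as np

def phi_stdCTS(u, alpha, lp, lm):
    """Characteristic function of stdCTS(alpha, lambda_+, lambda_-), Eq. (2)."""
    u = np.asarray(u, dtype=complex)
    den = (alpha - 1) * (lp ** (alpha - 2) + lm ** (alpha - 2))
    drift = (lp ** (alpha - 1) - lm ** (alpha - 1)) / den
    tail = ((lp - 1j * u) ** alpha - lp ** alpha
            + (lm + 1j * u) ** alpha - lm ** alpha) / (alpha * den)
    return np.exp(1j * u * drift + tail)

def l(x, alpha, lp, lm):
    """Log-Laplace transform l(x) = log phi(-ix), Eq. (3); requires -lm < x < lp."""
    den = (alpha - 1) * (lp ** (alpha - 2) + lm ** (alpha - 2))
    return (x * (lp ** (alpha - 1) - lm ** (alpha - 1)) / den
            + ((lp - x) ** alpha - lp ** alpha
               + (lm + x) ** alpha - lm ** alpha) / (alpha * den))

print(phi_stdCTS(0.0, 1.4, 0.5, 1.2))   # phi(0) = 1, a quick sanity check
```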
### Duan's GARCH and CTS-GARCH Option Pricing Models
Let \((S_{t})_{t\in\{0,1,\cdots,T^{*}\}}\) be the underlying asset price process and \((y_{t})_{t\in\{0,1,2,\cdots,T^{*}\}}\) be the underlying asset log return process, where \(y_{t}=\log(\frac{S_{t}}{S_{t-1}})\) with \(y_{0}=0\), and \(T^{*}<\infty\) is the time horizon. Under the physical measure \(\mathbb{P}=\bigoplus_{t=1}^{T^{*}}\mathcal{P}_{t}\), \((y_{t})_{t\in\{0,1,2,\cdots,T^{*}\}}\) is supposed to follow the GARCH model:
\[\begin{cases}y_{t+1}=\mu_{t+1}+\sigma_{t+1}\epsilon_{t+1}\\ \sigma_{t+1}^{2}=\kappa+\xi\sigma_{t}^{2}\epsilon_{t}^{2}+\zeta\sigma_{t}^{2} \end{cases}\quad\text{ for }t\in\{0,1,2,\cdots,T^{*}-1\}, \tag{4}\]
where \(\mu_{t+1}\) is the daily expected return and \(\epsilon_{t+1}\) follows an infinitely divisible distribution. \(S_{0}\), \(\epsilon_{0}\) and \(\sigma_{0}\) are real constants with \(\epsilon_{0}=0\). The GARCH model involves the parameters \(\kappa\), \(\xi\), \(\zeta\) with \(\xi+\zeta<1\).
We next define \((R_{t})_{t\in\{1,2,\cdots,T^{*}\}}\) and \((d_{t})_{t\in\{1,2,\cdots,T^{*}\}}\) as the sequences of the daily risk-free rate of return and daily dividend rate of the underlying, respectively. Then, there is a risk-neutral measure \(\mathbb{Q}=\bigoplus_{t=1}^{T^{*}}\mathcal{Q}_{t}\) such that:
* \(\eta_{t+1}=\theta_{t+1}+\epsilon_{t+1}\), where \(\theta_{t+1}=\frac{\mu_{t+1}-R_{t+1}+d_{t+1}+w_{t+1}}{\sigma_{t+1}}\) is the market price of risk and \(w_{t+1}=\log E_{\mathbb{Q}}\left[e^{\sigma_{t+1}\epsilon_{t+1}}\right]\).
* \(\eta_{t+1}\) is also infinitely divisible under the measure \(\mathbb{Q}\).
By applying change of measures to Eq. (4), we obtain the risk-neutral price process under \(\mathbb{Q}\) as
\[\begin{cases}y_{t+1}=R_{t+1}-d_{t+1}-w_{t+1}+\sigma_{t+1}\eta_{t+1}\\ \sigma_{t+1}^{2}=\kappa+\xi\sigma_{t}^{2}(\eta_{t}-\theta_{t})^{2}+\zeta\sigma_{t}^{2}\end{cases}\quad\text{ for }t\in\{0,1,2,\cdots,T^{*}-1\} \tag{5}\]
with \(\xi+\zeta<1\). To simplify the condition \(\xi+\zeta<1\), we define two parameters \(\psi\) and \(\gamma\) as \(\psi=\frac{\xi}{\zeta}\) and \(\gamma=\xi+\zeta\), respectively. Then we have
\[\sigma_{t}^{2}=\kappa+\frac{\gamma}{\psi+1}\left(\psi\sigma_{t-1}^{2}(\eta_{t-1}-\theta_{t-1})^{2}+\sigma_{t-1}^{2}\right), \tag{6}\]
where \(\kappa,\psi,\gamma>0\), \(\eta_{0}=0\) and \(\sigma_{0}>0\).
Under the risk-neutral measure \(\mathbb{Q}\), the underlying asset price is defined as \(S_{t}=S_{0}e^{\sum_{j=1}^{t}y_{j}}\) for \(t\in\{1\), \(2\), \(\cdots\), \(T-1\), \(T\), \(\cdots\), \(T^{*}\}\). The price of a European option with payoff function \(H(S(T))\) at maturity \(T\), \(t\leq T\leq T^{*}\), is given by
\[E_{\mathbb{Q}}\left[e^{-\sum_{j=t+1}^{T}R_{j}}H(S(T))\ \bigg{|}\ \mathcal{F}_{t} \right]=E_{\mathbb{Q}}\left[e^{-\sum_{j=t+1}^{T}R_{j}}H(S_{t}e^{\sum_{j=t+1} ^{T}y_{j}})\ \bigg{|}\ \mathcal{F}_{t}\right]. \tag{7}\]
For example, European vanilla call and put price with strike price \(K\) and time to maturity \(T\) at time \(t=0\) are
\[\text{(Call)}=E_{\mathbb{Q}}\left[e^{-\sum_{j=1}^{T}R_{j}}\max\{S_{0}e^{\sum_{ j=1}^{T}y_{j}}-K,0\}\right] \tag{8}\]
and
\[\text{(Put)}=E_{\mathbb{Q}}\left[e^{-\sum_{j=1}^{T}R_{j}}\max\{K-S_{0}e^{\sum _{j=1}^{T}y_{j}},0\}\right], \tag{9}\]
respectively.
Moreover, let \(m\) be the moneyness defined as \(m=\frac{K}{S_{0}e^{\sum_{t=1}^{T}R_{t}}}\), then we have
\[\text{(Call)}=S_{0}V_{C}\text{ and (Put)}=S_{0}V_{P}, \tag{10}\]
where
\[V_{C}=E_{\mathbb{Q}}\left[\max\left\{\exp\left(\sum_{t=1}^{T}-d_{t}-w_{t}+\sigma_{t }\eta_{t}\right)-m,0\right\}\right] \tag{11}\]
and
\[V_{P}=E_{\mathbb{Q}}\left[\max\left\{m-\exp\left(\sum_{t=1}^{T}-d_{t}-w_{t}+ \sigma_{t}\eta_{t}\right),0\right\}\right]. \tag{12}\]
Now, we present two popular examples of the GARCH option pricing model with infinitely divisible innovations:
* _Duan's GARCH Model_(Duan (1995)): If we assume that \(\eta_{t}\)'s follow the standard Gaussian distribution, which is infinitely divisible, we obtain the Duan's GARCH option pricing model where \(w_{t}=\frac{\sigma_{t}^{2}}{2}\).
* _CTS-GARCH Model_(Kim _et al._ (2010a)): By assuming that \(\eta_{t}\)'s follow the standard CTS distribution, which is also infinitely divisible, we refer to the GARCH model as the CTS-GARCH option pricing model. Specifically, when we set \(\eta_{t}\sim\text{stdCTS}(\alpha,\lambda_{+},\lambda_{-})\) for all \(t\in\{1,2,\cdots\}\) under the measure \(\mathbb{Q}\), we have \[w_{t}=\log\left(\phi_{\text{stdCTS}}(-i\sigma_{t};\alpha,\lambda_{+},\lambda_ {-})\right)=l(\sigma_{t})\] (13) by Eq. (3).
### European Call and Put Option Pricing with the MCS Method
To simplify the model, we make the following assumptions for the remainder of this paper: the daily risk-free return \(R_{t}\) is a constant \(R\), \(\theta_{t}\) is a constant \(\theta\), and \(d_{t}=0\). We define \(t\in\{1\), \(2\), \(\cdots\), \(T\), \(\cdots\), \(T^{*}\}\) for \(0<T\leq T^{*}\) as a set of the time steps, where one step represents one business day. We also assume that there are 250 business days in a year, and we define the annual risk-free return as \(r=250\cdot R\). The year-fraction time is denoted by \(\tau=\frac{T}{250}\), and we define \(m\) as the moneyness,
where \(m=\frac{Ke^{-RT}}{S_{0}}=\frac{Ke^{-r\tau}}{S_{0}}\).
Based on these assumptions, we can generate a set of infinitely divisible random numbers \(\{\eta_{t,n}:t\in\{\)1,2,\(\cdots\), \(T^{*}\}\), \(n\in\{\)1, 2, \(\cdots\), \(N\}\}\) using the MCS method. Next, we apply the GARCH model to obtain the set of volatility \(\{\sigma_{t,n}:t\in\{\)1, 2, \(\cdots\), \(T^{*}\}\), \(n\in\{\)1, 2, \(\cdots\), \(N\}\}\) defined as
\[\sigma_{t,n}=\sqrt{\kappa+\frac{\gamma}{\psi+1}\left(\psi\sigma_{t-1,n}^{2}(\eta_{t-1,n}-\theta)^{2}+\sigma_{t-1,n}^{2}\right)}, \tag{14}\]
where \(\kappa,\psi,\gamma>0\), \(\eta_{0}=0\) and \(\sigma_{0,n}=\sigma_{0}\) for a constant \(\sigma_{0}\).
For a given time to maturity \(T\leq T^{*}\) and a moneyness \(m\), we approximate \(V_{C}\) and \(V_{P}\) as follows:
\[V_{C}\approx\hat{V}_{C}=\frac{1}{N}\sum_{n=1}^{N}\max\left\{\exp\left(\sum_{t =1}^{250\tau}-w_{t,n}+\sigma_{t,n}\eta_{t,n}\right)-m,0\right\} \tag{15}\]
and
\[V_{P}\approx\hat{V}_{P}=\frac{1}{N}\sum_{n=1}^{N}\max\left\{m-\exp\left(\sum_{ t=1}^{250\tau}-w_{t,n}+\sigma_{t,n}\eta_{t,n}\right),0\right\}. \tag{16}\]
From (15) and (16), call and put option values in Duan's GARCH and CTS-GARCH models can be calculated as follows (a minimal simulation sketch is given after the list):
* _Duan's GARCH model:_ We generate a set of standard Gaussian random numbers \(\{\eta_{t,n}:t\in\{\)1,2,\(\cdots\),\(T^{*}\}\), \(n\in\{\)1,2,\(\cdots\),\(N\}\}\) and set \(w_{t,n}=\frac{\sigma_{t,n}^{2}}{2}\). Then we obtain call and put option prices of Duan's GARCH model. In this case, we denote (15) and (16) as \[V_{C}^{Duan}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0})=\hat{V}_{C}\quad\text {and}\quad V_{P}^{Duan}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0})=\hat{V}_{ P},\] (17)
respectively. The call and put option prices can be approximated using MCS as \[\text{Call}^{Duan}(S_{0},K,\tau,r)\approx S_{0}V_{C}^{Duan}(m,\tau;\kappa,\psi, \gamma,\theta,\sigma_{0})\] (18) and \[\text{Put}^{Duan}(S_{0},K,\tau,r)\approx S_{0}V_{P}^{Duan}(m,\tau;\kappa,\psi, \gamma,\theta,\sigma_{0}).\] (19)
* _CTS-GARCH model:_ We simulate a set of standard CTS random numbers \(\{\eta_{t,n}:t\in\{1,2,\cdots,T^{*}\}\), \(n\in\{1,2,\cdots,N\}\}\) with parameters \((\alpha\), \(\lambda_{+}\), \(\lambda_{-})\). By setting \(w_{t,n}=l(\sigma_{t,n})\), we obtain call and put option prices of the CTS-GARCH model. Specifically, we use (15) and (16) to denote the call and put option values as \[V_{C}^{CTS}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0},\alpha,\lambda_{+}, \lambda_{-})=\hat{V}_{C}\quad\text{and}\quad V_{P}^{CTS}(m,\tau;\kappa,\psi, \gamma,\theta,\sigma_{0},\alpha,\lambda_{+},\lambda_{-})=\hat{V}_{P},\] (20) respectively. Using MCS, we can obtain call and put option prices under the CTS-GARCH model such that \[\text{Call}^{CTS}(S_{0},K,\tau,r)\approx S_{0}V_{C}^{CTS}(m,\tau;\kappa,\psi, \gamma,\theta,\sigma_{0},\alpha,\lambda_{+},\lambda_{-})\] (21) and \[\text{Put}^{CTS}(S_{0},K,\tau,r)\approx S_{0}V_{P}^{CTS}(m,\tau;\kappa,\psi, \gamma,\theta,\sigma_{0},\alpha,\lambda_{+},\lambda_{-}).\] (22)
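The sketch below shows the simulation step for Duan's GARCH case in Python; the parameter values are illustrative picks from the ranges in Table 1, and the recursion implements Eqs. (14)-(16) with \(w_{t}=\sigma_{t}^{2}/2\) and \(d_{t}=0\).

```python
import numpy as np

def mc_call_put_duan(m, tau, kappa, psi, gamma, theta, sigma0, N=20000, seed=0):
    """Monte Carlo estimates of V_C and V_P, Eqs. (15)-(16), for Duan's GARCH
    model (Gaussian innovations, d_t = 0)."""
    rng = np.random.default_rng(seed)
    T = int(round(250 * tau))
    eta_prev = np.zeros(N)                 # eta_0 = 0
    sigma2 = np.full(N, sigma0 ** 2)
    log_s = np.zeros(N)                    # running sum of -w_t + sigma_t * eta_t
    for _ in range(T):
        sigma2 = kappa + gamma / (psi + 1) * (psi * sigma2 * (eta_prev - theta) ** 2 + sigma2)
        eta = rng.standard_normal(N)
        log_s += -0.5 * sigma2 + np.sqrt(sigma2) * eta
        eta_prev = eta
    s = np.exp(log_s)
    return np.maximum(s - m, 0.0).mean(), np.maximum(m - s, 0.0).mean()

vc, vp = mc_call_put_duan(m=1.0, tau=0.148, kappa=1e-6, psi=0.25,
                          gamma=0.8, theta=0.7, sigma0=0.007)
print(vc, vp)
```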
## 3 Artificial Neural Network
In this section, we construct ANNs to calculate call and put option prices under Duan's GARCH model and the CTS-GARCH model. More precisely, we design multi-layer ANNs to generate results that approximate the function values of \(V_{C}^{Duan}\), \(V_{P}^{Duan}\), \(V_{C}^{CTS}\), and \(V_{P}^{CTS}\). To facilitate the training, we take the logarithm of these four functions and define new functions as follows:
\[v_{C}^{Duan}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0}) =\log\big{(}V_{C}^{Duan}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0})\big{)} \tag{23}\] \[v_{P}^{Duan}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0}) =\log\big{(}V_{P}^{Duan}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0})\big{)}\] (24) \[v_{C}^{CTS}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0},\alpha,\lambda_{+},\lambda_{-}) =\log\big{(}V_{C}^{CTS}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0},\alpha,\lambda_{+},\lambda_{-})\big{)}\] (25) \[v_{P}^{CTS}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0},\alpha,\lambda_{+},\lambda_{-}) =\log\big{(}V_{P}^{CTS}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0},\alpha,\lambda_{+},\lambda_{-})\big{)} \tag{26}\]
### Generating Training Set
We first generate training sets for the four functions \(v_{C}^{Duan}\), \(v_{P}^{Duan}\), \(v_{C}^{CTS}\), and \(v_{P}^{CTS}\) using the MCS method as explained in the previous section. Since \(v_{C}^{Duan}\) and \(v_{P}^{Duan}\) have seven input parameters (\(m\), \(\tau\), \(\kappa\), \(\psi\), \(\gamma\), \(\theta\), \(\sigma_{0}\)), we consider seven nodes in the input layer. For \(v_{C}^{CTS}\) and \(v_{P}^{CTS}\), three additional input parameters (\(\alpha\), \(\lambda_{+}\), \(\lambda_{-}\)) are needed on top of the seven input parameters for \(v_{C}^{Duan}\) and \(v_{P}^{Duan}\). The ranges of the input parameters can be found in Table 1. More details on the training set generation process for each model are as follows:
* _Duan's GARCH Model:_ We generate 100,000 seven-dimensional uniformly distributed random vectors for (\(m\), \(\tau\), \(\kappa\), \(\psi\), \(\gamma\), \(\theta\), \(\sigma_{0}\)) within the bounds specified in Table 1. To avoid clustering of the random vectors, we use the Halton algorithm (Halton (1964)); a sampling sketch is given after this list. Then we calculate 100,000 values of \(v_{C}^{Duan}\) and \(v_{P}^{Duan}\), respectively, using MCS. In the MCS, we use 20,000 sample paths based on Duan's GARCH model with the seven-dimensional model parameters.
* _CTS-GARCH Model:_ We generate 100,000 ten-dimensional uniformly distributed random vectors for (\(m\), \(\tau\), \(\kappa\), \(\psi\), \(\gamma\), \(\theta\), \(\sigma_{0}\), \(\alpha\), \(\lambda_{+}\), \(\lambda_{-}\)) within the bounds specified in Table 1. Random numbers for \(\lambda_{+}\) and \(\lambda_{-}\) are generated by \(\lambda_{+}=\tan(\frac{u_{1}\pi}{2})+0.1\) and \(\lambda_{-}=\tan(\frac{u_{2}\pi}{2})+0.1\), respectively, for uniform random numbers \(u_{1},u_{2}\in(0,1)\). We also use the Halton algorithm to generate uniform random vectors, as in the case of Duan's GARCH model. Then we calculate 100,000 values of \(v_{C}^{CTS}\) and \(v_{P}^{CTS}\), respectively, using the MCS. In the MCS, we use
20,000 sample paths based on the CTS-GARCH model with the ten-dimensional model parameters.
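A sketch of this sampling step is given below, using SciPy's quasi-Monte Carlo module in place of the authors' (unspecified) Halton implementation; the bounds follow Table 1 for Duan's GARCH model, and the tangent map handles the unbounded CTS parameters.

```python
import numpy as np
from scipy.stats import qmc

# m, tau, kappa, psi, gamma, theta, sigma0 -- bounds from Table 1
lower = [0.5, 0.4, 0.0,  0.1, 0.5,    0.0, 1e-6]
upper = [1.5, 1.0, 1e-5, 0.4, 0.9999, 0.8, 0.04]
halton = qmc.Halton(d=7, scramble=False)
params = qmc.scale(halton.random(n=100_000), lower, upper)   # 100,000 x 7 design

# for the CTS-GARCH case, two more uniforms are mapped onto (0.1, infinity)
u = qmc.Halton(d=2, scramble=False).random(n=100_000)
lambdas = np.tan(u * np.pi / 2) + 0.1                        # lambda_+ and lambda_-
```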
### Training Multi-Layer ANNs
Using the four training sets for \(v_{C}^{Duan}\), \(v_{P}^{Duan}\), \(v_{C}^{CTS}\), and \(v_{P}^{CTS}\), we train four multi-layer ANNs in this section. Each ANN consists of three hidden layers with twenty nodes in each layer. The output is a single value; the activation function of the hidden layer nodes is the simple sigmoid function3, while the output activation function is the linear function4. The ANNs for \(v_{C}^{Duan}\) and \(v_{P}^{Duan}\) have seven input nodes, while the ANNs for \(v_{C}^{CTS}\) and \(v_{P}^{CTS}\) have ten input nodes. The architecture is depicted in Fig. 1. We denote the four ANNs corresponding to \(v_{C}^{Duan}\), \(v_{P}^{Duan}\), \(v_{C}^{CTS}\), and \(v_{P}^{CTS}\) as
Footnote 3: \(f(x)=\frac{1}{1+e^{-x}}\)
Footnote 4: \(f(x)=x\)
\[F_{C}^{Duan}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0}), \tag{27}\] \[F_{P}^{Duan}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0}),\] (28) \[F_{C}^{CTS}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0},\alpha, \lambda_{+},\lambda_{-}),\] (29) \[\text{and}\;F_{P}^{CTS}(m,\tau;\kappa,\psi,\gamma,\theta,\sigma_{0 },\alpha,\lambda_{+},\lambda_{-}). \tag{30}\]
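For readers who prefer an open-source stack, the sketch below reproduces the architecture of Fig. 1 in PyTorch; note that the paper trains with Matlab's Levenberg-Marquardt algorithm, whereas this sketch substitutes Adam, and the data here are random placeholders for the Halton/MCS training set.

```python
import torch
import torch.nn as nn

def make_surrogate(n_inputs):
    """Three hidden layers of twenty sigmoid units and a linear output."""
    return nn.Sequential(
        nn.Linear(n_inputs, 20), nn.Sigmoid(),
        nn.Linear(20, 20), nn.Sigmoid(),
        nn.Linear(20, 20), nn.Sigmoid(),
        nn.Linear(20, 1),
    )

net = make_surrogate(7)        # F_C^Duan / F_P^Duan take 7 inputs; the CTS nets take 10
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.rand(256, 7)         # placeholder for the Halton-sampled parameters
y = torch.randn(256, 1)        # placeholder for the log MCS prices v(.)
for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()
```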
\begin{table}
\begin{tabular}{c c c} Parameter & Lower Bound & Upper Bound \\ \hline \(m\) & \(0.5\) & \(1.5\) \\ \(\tau\) & \(0.4\) & \(1\) \\ \(\kappa\) & \(0\) & \(1\cdot 10^{-5}\) \\ \(\psi\) & \(0.1\) & \(0.4\) \\ \(\gamma\) & \(0.5\) & \(0.9999\) \\ \(\theta\) & \(0\) & \(0.8\) \\ \(\sigma_{0}\) & \(1\cdot 10^{-6}\) & \(0.04\) \\ \hline \(\alpha\) & \(0.01\) & \(1.999\) \\ \(\lambda_{+}\) & \(0.1\) & \(\infty\) \\ \(\lambda_{-}\) & \(0.1\) & \(\infty\) \\ \hline \end{tabular}
\end{table}
Table 1: The range of input parameters
Additionally, we set the parameters \(\Theta_{Duan}=(\kappa\), \(\psi\), \(\gamma\), \(\theta\), \(\sigma_{0}\)) of Duan's GARCH model and \(\Theta_{CTS}=(\kappa\), \(\psi\), \(\gamma\), \(\theta\), \(\sigma_{0}\), \(\alpha\), \(\lambda_{+}\), \(\lambda_{-}\)) of CTS-GARCH model. Then, we have
\[\text{Call}_{ANN}^{Duan}(S_{0},K,\tau,r) =S_{0}\exp\left(F_{C}^{Duan}\left(\frac{Ke^{-r\tau}}{S_{0}},\tau; \Theta_{Duan}\right)\right) \tag{31}\] \[\text{Put}_{ANN}^{Duan}(S_{0},K,\tau,r) =S_{0}\exp\left(F_{P}^{Duan}\left(\frac{Ke^{-r\tau}}{S_{0}},\tau; \Theta_{Duan}\right)\right)\] (32) \[\text{Call}_{ANN}^{CTS}(S_{0},K,\tau,r) =S_{0}\exp\left(F_{C}^{CTS}\left(\frac{Ke^{-r\tau}}{S_{0}},\tau; \Theta_{CTS}\right)\right)\] (33) \[\text{Put}_{ANN}^{CTS}(S_{0},K,\tau,r) =S_{0}\exp\left(F_{P}^{CTS}\left(\frac{Ke^{-r\tau}}{S_{0}},\tau; \Theta_{CTS}\right)\right). \tag{34}\]
Subsequently, we obtain
\[\text{Call}^{Duan}(S_{0},K,\tau,r) \approx\text{Call}_{ANN}^{Duan}(S_{0},K,\tau,r), \tag{35}\] \[\text{and Put}^{Duan}(S_{0},K,\tau,r) \approx\text{Put}_{ANN}^{Duan}(S_{0},K,\tau,r), \tag{36}\]
Fig. 1: ANN structure of the calibration
under Duan's GARCH model, and
\[\text{Call}^{CTS}(S_{0},K,\tau,r) \approx\text{Call}^{CTS}_{ANN}(S_{0},K,\tau,r), \tag{37}\] \[\text{and}\ \text{Put}^{CTS}(S_{0},K,\tau,r) \approx\text{Put}^{CTS}_{ANN}(S_{0},K,\tau,r), \tag{38}\]
under CTS-GARCH model.
We train the four ANNs using the Deep Learning toolbox in Matlab\({}^{TM}\). The default training algorithm for a function fitting network is Levenberg-Marquardt, as stated in the documentation5. The mean squared error (MSE) is used as the performance measure of the network.
Footnote 5: [https://www.mathworks.com/help/deeplearning/ref/fitnet.html](https://www.mathworks.com/help/deeplearning/ref/fitnet.html)
Fig. 2 shows the MSE per epoch during the training of the four ANNs: \(F_{C}^{Duan}\), \(F_{P}^{Duan}\), \(F_{C}^{CTS}\), and \(F_{P}^{CTS}\). The minimum MSE values for \(F_{C}^{Duan}\) and \(F_{P}^{Duan}\) are 0.0169 and 0.0061, reached at epochs 84 and 178, respectively. For \(F_{C}^{CTS}\) and \(F_{P}^{CTS}\), the training was limited to 300 epochs, with minimum MSEs of 0.0541 and 0.0447 at epoch 300, respectively.
It is also noteworthy that the ANNs in this study are not intended for forecasting, but rather for finding better analytic approximations. Therefore, we do not utilize a validation set.
## 4 Calibration
In this section, we discuss the calibration of parameters using two different methods: the ANN method and the MCS method, with the MCS method serving as a benchmark method. We calibrate the parameters \(\Theta_{Duan}\) and \(\Theta_{CTS}\) using call and put option data of the S&P 500 index. For this investigation, we have selected 12 Wednesdays from the second week of each month, ranging from June 2021 to May 2022. The time to maturity of the option contracts varies between 7 and 90 days, and we exclude the options with zero bid prices or zero ask prices. Furthermore, we only calibrate parameters for out-of-the-money (OTM) options for both calls and puts.
Fig. 2: Number of epochs and mean squared error. Top left and right are the MSE of call and put prices for Duan’s GARCH model. Bottom left and right are the MSE of the CTS-GARCH model.
We first specify ANNs for OTM options as
\[F_{OTM}^{Duan}(m,\tau;\Theta_{Duan})=F_{P}^{Duan}(m,\tau;\Theta_{Duan})\cdot 1_{m <1}+F_{C}^{Duan}(m,\tau;\Theta_{Duan})\cdot 1_{m\geq 1} \tag{39}\]
and
\[F_{OTM}^{CTS}(m,\tau;\Theta_{CTS})=F_{P}^{CTS}(m,\tau;\Theta_{CTS})\cdot 1_{m <1}+F_{C}^{CTS}(m,\tau;\Theta_{CTS})\cdot 1_{m\geq 1}. \tag{40}\]
We calibrate parameters using the relative root mean square error (rel-RMSE) minimization method as follows:
\[\min_{\Theta_{Duan}} \left(\sum_{K_{n}e^{-rT_{n}}<S_{0}}\left(\frac{F_{OTM}^{Duan}\left(\frac{K_{n}e^{-rT_{n}}}{S_{0}},\frac{T_{n}}{250};\Theta_{Duan}\right)-\log\left(\frac{P_{market}(K_{n},T_{n})}{S_{0}}\right)}{\log\left(\frac{P_{market}(K_{n},T_{n})}{S_{0}}\right)}\right)^{2}\right.\] \[+\sum_{K_{n}e^{-rT_{n}}\geq S_{0}}\left(\frac{F_{OTM}^{Duan}\left(\frac{K_{n}e^{-rT_{n}}}{S_{0}},\frac{T_{n}}{250};\Theta_{Duan}\right)-\log\left(\frac{C_{market}(K_{n},T_{n})}{S_{0}}\right)}{\log\left(\frac{C_{market}(K_{n},T_{n})}{S_{0}}\right)}\right)^{2}\right)^{\frac{1}{2}} \tag{41}\]
and
\[\min_{\Theta_{CTS}} \left(\sum_{K_{n}e^{-rT_{n}}<S_{0}}\left(\frac{F_{OTM}^{CTS}\left(\frac{K_{n}e^{-rT_{n}}}{S_{0}},\frac{T_{n}}{250};\Theta_{CTS}\right)-\log\left(\frac{P_{market}(K_{n},T_{n})}{S_{0}}\right)}{\log\left(\frac{P_{market}(K_{n},T_{n})}{S_{0}}\right)}\right)^{2}\right.\] \[+\sum_{K_{n}e^{-rT_{n}}\geq S_{0}}\left(\frac{F_{OTM}^{CTS}\left(\frac{K_{n}e^{-rT_{n}}}{S_{0}},\frac{T_{n}}{250};\Theta_{CTS}\right)-\log\left(\frac{C_{market}(K_{n},T_{n})}{S_{0}}\right)}{\log\left(\frac{C_{market}(K_{n},T_{n})}{S_{0}}\right)}\right)^{2}\right)^{\frac{1}{2}}, \tag{42}\]
where \(S_{0}\) is the S&P 500 index price on the given Wednesday, and \(C_{market}(K_{n},T_{n})\) and \(P_{market}(K_{n},T_{n})\) are mid-prices of the observed bid and ask prices for the call and put options with strike price \(K_{n}\) and
time to maturity \(T_{n}\)6.
Footnote 6: To solve the nonlinear optimization problems, we used the function lsqcurvefit() in Matlab\({}^{TM}\) with Trust-Region-Reflective Least Squares algorithm. See Coleman and Li (1994, 1996) and [https://www.mathworks.com/help/optim/ug/lsqcurvefit.html](https://www.mathworks.com/help/optim/ug/lsqcurvefit.html) for the details.
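A hedged sketch of this minimization with SciPy's trust-region least-squares solver (standing in for Matlab's lsqcurvefit) is shown below; `f_otm` and the market arrays are toy placeholders for the trained OTM network of Eq. (39) and the observed option data.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(th, m, tau, log_rel_price, f_otm):
    """Relative residuals of Eq. (41): (F_OTM - log(price/S0)) / log(price/S0)."""
    return (f_otm(m, tau, th) - log_rel_price) / log_rel_price

rng = np.random.default_rng(1)
m = rng.uniform(0.9, 1.1, 50)                    # moneyness K_n e^{-r T_n} / S_0
tau = rng.uniform(0.03, 0.3, 50)                 # year fractions T_n / 250
log_rel_price = np.log(rng.uniform(0.001, 0.05, 50))

def f_otm(m, tau, th):                           # placeholder for the trained ANN
    kappa, psi, gamma, theta, sigma0 = th
    return np.log(sigma0 * np.sqrt(250 * tau)) - 2.0 * np.abs(np.log(m))

theta0 = np.array([1e-6, 0.25, 0.8, 0.7, 0.01])  # kappa, psi, gamma, theta, sigma0
bounds = ([0.0, 0.1, 0.5, 0.0, 1e-6], [1e-5, 0.4, 0.9999, 0.8, 0.04])
fit = least_squares(residuals, theta0, bounds=bounds, method='trf',
                    args=(m, tau, log_rel_price, f_otm))
print(fit.x)
```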
As a benchmark, we use the MCS method to calibrate the same parameters. Similar to the ANN cases, we define
\[v_{OTM}^{Duan}(m,\tau;\Theta_{Duan})=v_{C}^{Duan}(m,\tau;\Theta_{Duan})\cdot 1 _{m\geq 1}+v_{P}^{Duan}(m,\tau;\Theta_{Duan})\cdot 1_{m<1} \tag{43}\]
and
\[v_{OTM}^{CTS}(m,\tau;\Theta_{CTS})=v_{C}^{CTS}(m,\tau;\Theta_{CTS})\cdot 1_{m \geq 1}+v_{P}^{CTS}(m,\tau;\Theta_{CTS})\cdot 1_{m<1}. \tag{44}\]
We calibrate parameters using the MCS method as described in (41) and (42) by replacing \(F_{OTM}^{Duan}\) and \(F_{OTM}^{CTS}\) with \(v_{OTM}^{Duan}\) and \(v_{OTM}^{CTS}\), respectively. More details of the parameter calibration for the option pricing with the CTS-GARCH model are presented in Kim _et al._ (2010a), Kim _et al._ (2019), and Kim _et al._ (2022).
The calibrated parameters for Duan's GARCH model are provided in Tables 2 and 3, while those for the CTS-GARCH model are provided in Tables 4 and 5. Tables 3 and 5 present the results obtained using the ANN method, while Tables 2 and 4 present the results obtained using the MCS method. The rel-RMSE values are reported for performance analysis. Table 6 collects all rel-RMSEs for the two models (Duan's GARCH and CTS-GARCH) and both calibration methods (MCS and ANN).
According to the table, we observe that:
* The CTS-GARCH model with the ANN method shows the smallest rel-RMSE and therefore performs best, with the exception of four cases: 11/10/2021, 12/8/2021, 3/9/2022, and 5/10/2022.
* Duan's GARCH model with the MCS method has the smallest rel-RMSE for the cases on 3/9/2022 and 5/10/2022.
* Duan's GARCH model with the ANN method has the smallest rel-RMSE for the cases on 11/10/2021 and 12/8/2021.
* CTS-GARCH model with the MCS method has the largest rel-RMSE values in this investigation.
Compared to Duan's GARCH model, the CTS-GARCH model is more flexible as it has three more parameters. However, the complexity of the CTS-GARCH model makes calibration with the MCS method inefficient. In this regard, the ANN offers an efficient alternative for model calibration, enabling the use of the CTS-GARCH model in practical applications.
In Fig. 3, the left column plots present the market prices and calibrated model prices of Duan's GARCH model and the CTS-GARCH model for OTM calls and puts on 6/9/2021. The right column plots exhibit the implied volatility curves for the market prices and for Duan's GARCH and CTS-GARCH models under the MCS method and the ANN method. The first row of plots in Fig. 3 shows option prices and implied volatilities for 9 days to maturity, the second row for 37 days to maturity, and the third row for 72 days to maturity. The curves suggest that the implied volatility curve of the CTS-GARCH model prices obtained with the ANN method is the closest to the market implied volatility curve among the methods investigated in this study.
One of the advantages of using the ANN method is the ability to easily obtain the Greeks of options, which is hardly possible with the MCS method. For instance, Table 7 exhibits the Delta (\(\Delta\)), Gamma (\(\Gamma\)), Theta (\(\Theta\)) and Rho (\(\rho\)) of at-the-money (ATM) calls and puts for the parameters calibrated on 6/9/2021, with maturities of 9, 37, and 72 days, respectively7. The Vega (\(\nu\)) is not
considered since the volatility is not constant but stochastic in this investigation. We calculate the Greeks using the finite difference method for each variable.
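A minimal sketch of such finite-difference Greeks on top of the ANN price is given below; `ann_call` is assumed to be a function returning the ANN call price of Eq. (31), the step sizes are illustrative, and Theta is taken as the sensitivity to time to maturity (sign conventions vary).

```python
def greeks_fd(ann_call, S0, K, tau, r, h=0.5, dt=1.0 / 250, dr=1e-4):
    """Central differences in S0 for Delta/Gamma; forward differences for Theta/Rho."""
    c0 = ann_call(S0, K, tau, r)
    c_up, c_dn = ann_call(S0 + h, K, tau, r), ann_call(S0 - h, K, tau, r)
    delta = (c_up - c_dn) / (2 * h)
    gamma = (c_up - 2 * c0 + c_dn) / h ** 2
    theta = (ann_call(S0, K, tau + dt, r) - c0) / dt   # sensitivity to time to maturity
    rho = (ann_call(S0, K, tau, r + dr) - c0) / dr
    return delta, gamma, theta, rho

toy = lambda S0, K, tau, r: 0.5 * (S0 - K) + 40 * tau + 5e-4 * (S0 - K) ** 2
print(greeks_fd(toy, 4219.55, 4220.0, 0.148, 0.002))   # toy price stands in for Eq. (31)
```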
It is widely acknowledged that the calculation time of the ANN method is remarkably shorter than that of the MCS method. Table 8 presents the calculation times for OTM calls and puts using the MCS and ANN methods on Duan's GARCH and CTS-GARCH models, respectively. For example, we observed 558 prices of calls and puts on 6/9/2021. The MCS and ANN methods respectively take 0.1300 and 0.0185 seconds to calculate the 558 calls and puts under Duan's GARCH model, and 0.3286 and 0.0187 seconds under the CTS-GARCH model. The number of call and put price observations (Obs.) varies each day and is shown in the Obs. column of Table 8. The calculation times of the ANN method for Duan's GARCH model are about 9 times shorter than those of the MCS method. Similarly, for the CTS-GARCH model, the MCS method takes approximately 20 times longer than the ANN method.
## 5 Conclusion
In this paper, we review Duan's GARCH model and the CTS-GARCH model for option pricing and investigate the use of ANNs to enhance calibration performance. To achieve this goal, we generate training sets for various model parameters and compute call and put prices using the MCS method. We then train a three-layer ANN with twenty nodes per layer using the generated
\begin{table}
\begin{tabular}{c|c c c c|c|c} \hline Date & \(\theta\) & \(\kappa\) & \(\xi\) & \(\zeta\) & \(\sigma_{0}\) & rel-RMSE \\ \hline
6/9/2021 & \(0.9293\) & \(1.68\cdot 10^{-6}\) & \(0.2778\) & \(0.5000\) & \(0.0085\) & \(0.3048\) \\
7/7/2021 & \(0.8984\) & \(1.21\cdot 10^{-6}\) & \(0.2931\) & \(0.5000\) & \(0.0068\) & \(0.3585\) \\
8/11/2021 & \(0.9513\) & \(1.05\cdot 10^{-6}\) & \(0.2847\) & \(0.5000\) & \(0.0057\) & \(0.3927\) \\
9/8/2021 & \(1.0877\) & \(9.88\cdot 10^{-7}\) & \(0.2492\) & \(0.5000\) & \(0.0065\) & \(0.4349\) \\
10/6/2021 & \(0.9826\) & \(2.70\cdot 10^{-6}\) & \(0.2584\) & \(0.5000\) & \(0.0094\) & \(0.2837\) \\
11/10/2021 & \(0.9905\) & \(1.65\cdot 10^{-6}\) & \(0.2681\) & \(0.5000\) & \(0.0075\) & \(0.2952\) \\
12/8/2021 & \(0.8833\) & \(2.27\cdot 10^{-6}\) & \(0.2943\) & \(0.5000\) & \(0.0081\) & \(0.2763\) \\
1/12/2022 & \(0.8447\) & \(2.13\cdot 10^{-6}\) & \(0.3000\) & \(0.5000\) & \(0.0073\) & \(0.2799\) \\
2/9/2022 & \(0.8745\) & \(2.94\cdot 10^{-6}\) & \(0.2869\) & \(0.5000\) & \(0.0091\) & \(0.2305\) \\
3/9/2022 & \(1.0237\) & \(5.27\cdot 10^{-6}\) & \(0.2464\) & \(0.5000\) & \(0.0158\) & \(0.1881\) \\
4/6/2022 & \(1.0867\) & \(1.47\cdot 10^{-6}\) & \(0.2434\) & \(0.5000\) & \(0.0092\) & \(0.3980\) \\
5/10/2022 & \(1.0220\) & \(3.97\cdot 10^{-6}\) & \(0.1918\) & \(0.5971\) & \(0.0176\) & \(0.2311\) \\ \hline \end{tabular}
\end{table}
Table 2: Calibration of Duan’s GARCH model parameters using the MCS method
\begin{table}
\begin{tabular}{c|c c c c c c c|c} \hline Date & \(\theta\) & \(\kappa\) & \(\xi\) & \(\zeta\) & \(\sigma_{0}\) & \(\alpha\) & \(\lambda_{+}\) & \(\lambda_{-}\) & rel-RMSE \\ \hline
6/9/2021 & \(0.5526\) & \(1.56\cdot 10^{-8}\) & \(0.0954\) & \(0.7906\) & \(0.0046\) & \(1.4202\) & \(0.1019\) & \(0.5492\) & \(0.1662\) \\
7/7/2021 & \(2.5598\) & \(4.65\cdot 10^{-6}\) & \(0.0631\) & \(0.5124\) & \(0.0065\) & \(1.0542\) & \(13.6212\) & \(0.1155\) & \(0.2372\) \\
8/11/2021 & \(1.2173\) & \(1.43\cdot 10^{-8}\) & \(0.1960\) & \(0.6349\) & \(0.0076\) & \(0.7306\) & \(46.0982\) & \(0.1629\) & \(0.2936\) \\
9/8/2021 & \(1.1724\) & \(6.67\cdot 10^{-10}\) & \(0.1582\) & \(0.7666\) & \(0.0068\) & \(0.8133\) & \(3.9428\) & \(0.1019\) & \(0.2781\) \\
10/6/2021 & \(0.5119\) & \(2.51\cdot 10^{-8}\) & \(0.0014\) & \(0.5016\) & \(0.0163\) & \(0.5610\) & \(13.7536\) & \(12.3532\) & \(0.2504\) \\
11/10/2021 & \(1.9605\) & \(4.93\cdot 10^{-8}\) & \(0.1048\) & \(0.5092\) & \(0.0079\) & \(0.6510\) & \(1.4461\) & \(4.1668\) & \(0.2701\) \\
12/8/2021 & \(0.0281\) & \(3.46\cdot 10^{-6}\) & \(0.2965\) & \(0.7125\) & \(0.0091\) & \(0.7452\) & \(31.5555\) & \(0.5439\) & \(0.2474\) \\
1/12/2022 & \(0.6766\) & \(2.27\cdot 10^{-6}\) & \(0.2979\) & \(0.5750\) & \(0.0075\) & \(0.5862\) & \(20.2640\) & \(3.5172\) & \(0.2233\) \\
2/9/2022 & \(0.4874\) & \(2.72\cdot 10^{-6}\) & \(0.2624\) & \(0.6696\) & \(0.0100\) & \(0.7577\) & \(27.9998\) & \(1.5475\) & \(0.2152\) \\
3/9/2022 & \(0.5884\) & \(2.91\cdot 10^{-6}\) & \(0.1191\) & \(0.8183\) & \(0.0134\) & \(0.6454\) & \(21.7075\) & \(4.0287\) & \(0.2684\) \\
4/6/2022 & \(1.5418\) & \(9.84\cdot 10^{-6}\) & \(0.1040\) & \(0.5271\) & \(0.0078\) & \(0.9238\) & \(21.6161\) & \(1.5811\) & \(0.3571\) \\
5/10/2022 & \(0.8144\) & \(3.15\cdot 10^{-8}\) & \(0.1669\) & \(0.7220\) & \(0.0178\) & \(0.7674\) & \(28.8533\) & \(4.4167\) & \(0.2756\) \\ \hline \end{tabular}
\end{table}
Table 4: Calibration of CTS-GARCH model parameters using the MCS method
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline Date & \(\theta\) & \(\kappa\) & \(\xi\) & \(\zeta\) & \(\sigma_{0}\) & rel-RMSE \\ \hline
6/9/2021 & \(1.1103\) & \(4.64\cdot 10^{-7}\) & \(0.2258\) & \(0.5395\) & \(0.0064\) & \(0.1972\) \\
7/7/2021 & \(1.0196\) & \(6.32\cdot 10^{-7}\) & \(0.2354\) & \(0.5621\) & \(0.0049\) & \(0.2793\) \\
8/11/2021 & \(1.1729\) & \(1.74\cdot 10^{-7}\) & \(0.1994\) & \(0.5737\) & \(0.0056\) & \(0.3383\) \\
9/8/2021 & \(1.1810\) & \(2.89\cdot 10^{-7}\) & \(0.1927\) & \(0.5783\) & \(0.0064\) & \(0.4107\) \\
10/6/2021 & \(0.9015\) & \(8.75\cdot 10^{-7}\) & \(0.2115\) & \(0.6441\) & \(0.0074\) & \(0.2826\) \\
11/10/2021 & \(1.1827\) & \(4.99\cdot 10^{-7}\) & \(0.1968\) & \(0.5700\) & \(0.0066\) & \(0.2634\) \\
12/8/2021 & \(1.1276\) & \(7.31\cdot 10^{-7}\) & \(0.2238\) & \(0.5424\) & \(0.0060\) & \(0.2078\) \\
1/12/2022 & \(0.6377\) & \(4.46\cdot 10^{-7}\) & \(0.3000\) & \(0.6252\) & \(0.0047\) & \(0.3632\) \\
2/9/2022 & \(0.8367\) & \(6.29\cdot 10^{-7}\) & \(0.2081\) & \(0.6739\) & \(0.0076\) & \(0.2626\) \\
3/9/2022 & \(0.9301\) & \(1.64\cdot 10^{-6}\) & \(0.1905\) & \(0.6666\) & \(0.0121\) & \(0.2297\) \\
4/6/2022 & \(1.0649\) & \(5.37\cdot 10^{-7}\) & \(0.2088\) & \(0.5990\) & \(0.0070\) & \(0.3616\) \\
5/10/2022 & \(1.0130\) & \(1.67\cdot 10^{-6}\) & \(0.1506\) & \(0.7046\) & \(0.0141\) & \(0.2417\) \\ \hline \end{tabular}
\end{table}
Table 3: Calibration of Duan’s GARCH model parameters using the ANN method
\begin{table}
\begin{tabular}{c|c c c c c c|c} \hline Date & \(\theta\) & \(\kappa\) & \(\xi\) & \(\zeta\) & \(\sigma_{0}\) & \(\alpha\) & \(\lambda_{+}\) & \(\lambda_{-}\) & rel-RMSE \\ \hline
6/9/2021 & \(0.9386\) & \(1.69\cdot 10^{-6}\) & \(0.2805\) & \(0.5050\) & \(0.0086\) & \(1.9998\) & \(14.0138\) & \(10.0760\) & \(0.8819\) \\
7/7/2021 & \(0.9074\) & \(1.23\cdot 10^{-6}\) & \(0.2960\) & \(0.5050\) & \(0.0068\) & \(1.9921\) & \(11.2482\) & \(11.9459\) & \(0.9014\) \\
8/11/2021 & \(0.9608\) & \(1.06\cdot 10^{-6}\) & \(0.2876\) & \(0.5050\) & \(0.0057\)
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{Duan’s GARCH Model} & \multicolumn{2}{c}{CTS-GARCH Model} \\ \cline{2-5} Date & MCS & ANN & MCS & ANN \\ \hline
6/9/2021 & \(0.3048\) & \(0.1972\) & \(0.8819\) & **0.1662** \\
7/7/2021 & \(0.3585\) & \(0.2793\) & \(0.9014\) & **0.2372** \\
8/11/2021 & \(0.3927\) & \(0.3383\) & \(0.9211\) & **0.2936** \\
9/8/2021 & \(0.4349\) & \(0.4107\) & \(0.9113\) & **0.2781** \\
10/6/2021 & \(0.2837\) & \(0.2826\) & \(0.8716\) & **0.2504** \\
11/10/2021 & \(0.2952\) & **0.2634** & \(0.8938\) & \(0.2701\) \\
12/8/2021 & \(0.2763\) & **0.2078** & \(0.8832\) & \(0.2474\) \\
1/12/2022 & \(0.2799\) & \(0.3632\) & \(0.8971\) & **0.2233** \\
2/9/2022 & \(0.2305\) & \(0.2626\) & \(0.8822\) & **0.2152** \\
3/9/2022 & **0.1881** & \(0.2297\) & \(0.8142\) & \(0.2684\) \\
4/6/2022 & \(0.3980\) & \(0.3616\) & \(0.8864\) & **0.3571** \\
5/10/2022 & **0.2311** & \(0.2417\) & \(0.7809\) & \(0.2756\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Relative RMSE values for model calibrations
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Call/Put & \(S_{0}\) & \(r\) & \(K\) & \(\tau\) & Model & \(\Delta\) & \(\Gamma\) & \(\Theta\) & \(\rho\) \\ \hline Call & \(4219.55\) & \(0.0020\) & \(4220\) & \(0.0360\) & Duan’s GARCH & \(0.4109\) & \(5.7074\cdot 10^{-3}\) & \(382.59\) & \(61.56\) \\ & & & & & & CTS-GARCH & \(0.2933\) & \(3.5068\cdot 10^{-3}\) & \(274.16\) & \(43.88\) \\ & & & & \(0.1480\) & Duan’s GARCH & \(0.6089\) & \(3.2794\cdot 10^{-3}\) & \(311.14\) & \(370.61\) \\ & & & & & CTS-GARCH & \(0.4744\) & \(2.4125\cdot 10^{-3}\) & \(312.57\) & \(288.33\) \\ & & & & \(0.2880\) & Duan’s GARCH & \(0.6656\) & \(2.2740\cdot 10^{-3}\) & \(181.63\) & \(780.57\) \\ & & & & & CTS-GARCH & \(0.5042\) & \(1.3049\cdot 10^{-3}\) & \(194.84\) & \(586.98\) \\ \hline Put & \(4219.55\) & \(0.0020\) & \(4215\) & \(0.0360\) & Duan’s GARCH & \(-0.2923\) & \(2.6660\cdot 10^{-3}\) & \(439.87\) & \(-45.64\) \\ & & & & & CTS-GARCH & \(-0.2196\) & \(1.7826\cdot 10^{-3}\) & \(389.37\) & \(-34.41\) \\ & & & & \(0.1480\) & Duan’s GARCH & \(-0.4410\) & \(2.0529\cdot 10^{-3}\) & \(442.75\) & \(-288.81\) \\ & & & & & CTS-GARCH & \(-0.2956\) & \(1.4555\cdot 10^{-3}\) & \(283.71\) & \(-194.58\) \\ & & & & \(0.2880\) & Duan’s GARCH & \(-0.4726\) & \(1.5977\cdot 10^{-3}\) & \(152.80\) & \(-613.73\) \\ & & & & CTS-GARCH & \(-0.2807\) & \(1.0474\cdot 10^{-3}\) & \(178.97\) & \(-369.67\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Greeks for ATM calls and puts on 6/9/2021
Figure 3: Calibrated call and put option prices and implied volatility curves at 6/9/2021
training set. Additionally, we create four ANNs for calls and puts under both Duan's GARCH model and the CTS-GARCH model. Finally, we demonstrate the effectiveness of the trained ANNs by using them to calibrate market option prices. The results show that the ANN is significantly faster than the MCS method once it has been trained, and that it outperforms the MCS method in terms of calibration performance. Furthermore, the ANN method allows us to calculate the Greeks, which is hardly feasible with the MCS method.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & & \multicolumn{2}{c}{Duan’s GARCH Model} & \multicolumn{2}{c}{CTS-GARCH Model} \\ \cline{3-6} Date & Obs. & MCS (sec) & ANN (sec) & MCS (sec) & ANN (sec) \\ \hline
6/9/2021 & \(558\) & \(0.1300\) & \(0.0185\) & \(0.3286\) & \(0.0187\) \\
7/7/2021 & \(644\) & \(0.1387\) & \(0.0191\) & \(0.3411\) & \(0.0175\) \\
8/11/2021 & \(706\) & \(0.1339\) & \(0.0169\) & \(0.3025\) & \(0.0172\) \\
9/8/2021 & \(729\) & \(0.1518\) & \(0.0166\) & \(0.3363\) & \(0.0172\) \\
10/6/2021 & \(845\) & \(0.1422\) & \(0.0178\) & \(0.3390\) & \(0.0178\) \\
11/10/2021 & \(858\) & \(0.1463\) & \(0.0183\) & \(0.3229\) & \(0.0175\) \\
12/8/2021 & \(843\) & \(0.1538\) & \(0.0174\) & \(0.3342\) & \(0.0182\) \\
1/12/2022 & \(878\) & \(0.1598\) & \(0.0169\) & \(0.3356\) & \(0.0172\) \\
2/9/2022 & \(928\) & \(0.2017\) & \(0.0193\) & \(0.3539\) & \(0.0207\) \\
3/9/2022 & \(923\) & \(0.1625\) & \(0.0176\) & \(0.3351\) & \(0.0185\) \\
4/6/2022 & \(818\) & \(0.1410\) & \(0.0165\) & \(0.3285\) & \(0.0189\) \\
5/10/2022 & \(570\) & \(0.1211\) & \(0.0163\) & \(0.2978\) & \(0.0171\) \\ \hline \hline \end{tabular}
\end{table}
Table 8: Comparison of calculation time using the MCS and ANN methods for Duan’s GARCH and CTS-GARCH models |
2302.00773 | Toward Physically Plausible Data-Driven Models: A Novel Neural Network
Approach to Symbolic Regression | Many real-world systems can be described by mathematical models that are
human-comprehensible, easy to analyze and help explain the system's behavior.
Symbolic regression is a method that can automatically generate such models
from data. Historically, symbolic regression has been predominantly realized by
genetic programming, a method that evolves populations of candidate solutions
that are subsequently modified by genetic operators crossover and mutation.
However, this approach suffers from several deficiencies: it does not scale
well with the number of variables and samples in the training data - models
tend to grow in size and complexity without an adequate accuracy gain, and it
is hard to fine-tune the model coefficients using just genetic operators.
Recently, neural networks have been applied to learn the whole analytic model,
i.e., its structure and the coefficients, using gradient-based optimization
algorithms. This paper proposes a novel neural network-based symbolic
regression method that constructs physically plausible models based on even
very small training data sets and prior knowledge about the system. The method
employs an adaptive weighting scheme to effectively deal with multiple loss
function terms and an epoch-wise learning process to reduce the chance of
getting stuck in poor local optima. Furthermore, we propose a parameter-free
method for choosing the model with the best interpolation and extrapolation
performance out of all the models generated throughout the whole learning
process. We experimentally evaluate the approach on four test systems: the
TurtleBot 2 mobile robot, the magnetic manipulation system, the equivalent
resistance of two resistors in parallel, and the longitudinal force of the
anti-lock braking system. The results clearly show the potential of the method
to find parsimonious models that comply with the prior knowledge provided. | Jiří Kubalík, Erik Derner, Robert Babuška | 2023-02-01T22:05:04Z | http://arxiv.org/abs/2302.00773v3 | # Neural Networks for Symbolic Regression
###### Abstract
Many real-world systems can be described by mathematical formulas that are human-comprehensible, easy to analyze and helpful in explaining the system's behaviour. Symbolic regression is a method that generates nonlinear models from data in the form of analytic expressions. Historically, symbolic regression has been predominantly realized using genetic programming, a method that iteratively evolves a population of candidate solutions varied by the genetic operators of crossover and mutation. This gradient-free evolutionary approach suffers from several deficiencies: it does not scale well with the number of variables and samples in the training data, models tend to grow in size and complexity without an adequate accuracy gain, and it is hard to fine-tune the inner model coefficients using just genetic operators. Recently, neural networks have been applied to learn the whole analytic formula, i.e., its structure as well as the coefficients, by means of gradient-based optimization algorithms. In this paper, we propose a novel neural network-based symbolic regression method that constructs physically plausible models based on limited training data and prior knowledge about the system. The method employs an adaptive weighting scheme to effectively deal with multiple loss function terms and an epoch-wise learning process to reduce the chance of getting stuck in poor local optima. Furthermore, we propose a parameter-free method for choosing the model with the best interpolation and extrapolation performance out of all models generated through the whole learning process. We experimentally evaluate the approach on four test systems: the TurtleBot 2 mobile robot, the magnetic manipulation system, the equivalent resistance of two resistors in parallel, and the longitudinal force of the anti-lock braking system. The results clearly show the potential of the method to find sparse and accurate models that comply with the prior knowledge provided.
Symbolic regression, neural networks, physics-aware modeling.
## I Introduction
Symbolic regression (SR) is a data-driven method that generates models in the form of analytic formulas. It has been successfully used in many nonlinear data-driven modeling tasks with quite impressive results [1, 2, 3, 4]. Historically, SR has been predominantly realized using genetic programming (GP) [1, 5, 6, 7, 8], a method that evolves a population of candidate solutions through a number of generations. This gradient-free learning process is driven by a selection strategy that prefers high-quality solutions to poor ones, and new candidate solutions are created by genetic operators of crossover and mutation. Some GP-based approaches make use of the loss function gradient to fine-tune the inner coefficients of the model [9, 10, 11].
SR has several advantages over other data-driven modeling methods. For example, contrary to (deep) neural networks, which are notoriously data-hungry, SR can construct good models even from very small training data sets [12]. SR is also suitable for dealing with prior knowledge [8, 13, 14, 15]. This is particularly important when the data set does not sufficiently cover the input space or when some parts of the input space are completely missing from the data set. Even when a sufficiently large and informative set of training data is available, methods that minimize only the training error tend to yield partially incorrect models, for instance, in terms of their steady-state characteristics and local or even global behavior. Using prior knowledge about the desired properties of the modeled system within the learning process allows for learning models that, in addition to a small training error, also exhibit high compliance with the physical properties of the given system.
Despite their high popularity, GP-based SR methods suffer from several deficiencies. The models tend to increase in size and complexity without an adequate increase in the model's performance. The phenomenon is known as code bloat [16]. Furthermore, GP-based approaches do not scale well with the number of variables and samples in the training data set, since an entire population of formulas has to be evolved and evaluated repeatedly through many generations. Last but not least, it is hard to tune the coefficients of the models using just genetic operators.
Recently, several approaches using neural networks (NNs) to learn analytic formulas by means of gradient-based optimization algorithms have been proposed [17, 18, 19, 20, 21, 22]. They all share the idea that analytic models are represented by a heterogeneous NN with units implementing mathematical operators and functions, such as {+, -, *, /, sin, exp, etc.}. The weights of the network are adjusted using standard gradient-based methods with the ultimate goal of minimizing the training error while maximally reducing the number of active units, i.e., only those units that have above-threshold input/output weights and are involved in the computation of the final NN output. Thus the final NN represents a simple analytic formula. The individual approaches differ in how the learning process is driven towards these sparse analytic models.
In this paper, we propose a novel NN-based SR approach, N4SR (pronounced "enforcer"), that allows for using prior
knowledge represented by and evaluated on constraint samples as introduced in [14, 15]. The NN uses an EQL-like architecture [17] with skip connections [20]. The learning process is divided into epochs. During the learning process, models of varying sizes, measured by the number of active weights, are generated. Then, the final model is chosen as the best-performing model among the least complex ones. The model's performance is judged based on its validation root-mean-square error and compliance with the validity constraints and prior knowledge.
Note that the problem of seeking a sparse model that has a low training error and exhibits desired characteristics is a multi-objective optimization problem. This implies that there is a strong interplay among the respective terms of the loss function. It may happen that some terms become dominant in the loss function while suppressing the effect of some other terms. To remedy this problem, we propose a self-adaptive strategy to keep the pair-wise ratios between the terms around the required values during the whole optimization process.
The main contributions of this paper are:
* We introduce an NN-based SR approach that uses training data and prior knowledge to generate precise and physically plausible models.
* We introduce a self-adaptive strategy to control the contributions of the training error term, regularization term, and constraint error term during the whole optimization process. We show that this method is effective compared to the static one. Moreover, it reduces the number of parameters to be tuned for each single SR instance.
* We propose a final model selection method based on the model's complexity and the constraint violation error. The method does not require any extrapolation test set. We show that the proposed method is competitive with a selection based on the extrapolation error, which requires data sampled from the extrapolation domain.
* The proposed N4SR is thoroughly evaluated on four test problems, including the validation of our design choices, and compared to relevant methods.
The paper is organized as follows. The related work is surveyed in Section II. Then, the particular SR problem considered in this work is defined in Section III. The proposed method is described in Section IV. Section V describes the experimental setup and presents and discusses the results obtained on the four test problems. Finally, Section VI concludes the paper.
## II Related work
One of the first works on using NNs for symbolic regression is [17], where the original Equation Learner (EQL) was introduced. It works with a simple feed-forward multi-layer architecture with several unit types - sigmoid, sine, cosine, identity, and multiplication. The network is trained using a stochastic gradient descent algorithm with mini-batches, Adam [23], and a Lasso-like objective combining the \(L_{2}\) training loss and \(L_{1}\) regularization. Moreover, it uses a hybrid regularization strategy that starts with a certain number of update steps without regularization, continues with a regularization phase to enforce a sparse network structure, and ends with a final phase in which regularization is disabled but an \(L_{0}\)-like constraint on the weights is enforced (i.e., all weights that are close to 0 are kept at 0). An important question is how to choose the right network instance (i.e., the final model) among all the network instances generated during the learning process. In [17], this is solved by ranking the network instances w.r.t. validation error and sparsity and selecting the one with the smallest \(L_{2}\) norm (in rank-space). However, it was shown that in some cases, this does not select the network instance with the best performance metrics.
In [18], an extended version of EQL, denoted as EQL\({}^{\div}\), was proposed. In addition to the original EQL, it allows for modeling divisions using a modified architecture that places the division units in the output layer. The objective function is a linear combination of the \(L_{2}\) training loss and \(L_{1}\) regularization, extended by a penalty term \(P^{\theta}\) for invalid denominator values. Furthermore, special _penalty epochs_ are injected at regular intervals into the training process to prevent output values on data from extrapolation regions from having a very different magnitude than the outputs observed on training data. During the penalty epochs, only the penalty function \(P^{\theta}+P^{bound}\) is minimized, where \(P^{bound}\) penalizes outputs larger than the maximal desired value observed on all data points (including the extrapolation ones). This way, a reasonable but not necessarily correct behavior of the model on the extrapolation region is enforced. Moreover, one must estimate the maximal desired output value in advance, which cannot be done reliably in general. Here, the model selection method chooses the network instance that minimizes the sum of normalized interpolation and extrapolation validation errors, where the extrapolation error is calculated on a few measured extrapolation points. On the one hand, this method was shown to work better than the one used in EQL. On the other hand, it still relies on known extrapolation points, though just a few of them. Another extension of EQL is the informed EQL (iEQL) proposed in [19]. It uses expert knowledge about permitted or prohibited equation components and a domain-dependent structured sparsity prior. The authors demonstrated in artificial as well as real-world experiments that iEQL can learn interpretable models of high predictive power. However, the expert knowledge might be hard to define reliably for some problems, as even nontrivial nested structures may be beneficial in some cases.
In [21], the integration of the EQL architecture with other deep learning architectures and an \(L_{0.5}\) regularization was proposed. Its power was demonstrated on a simple arithmetic task, where a convolutional network is used to extract handwritten MNIST digits, and on a set of experiments where the EQL network was applied to analyze time-varying physical systems. Partially inspired by the EQL network, a new multi-layer NN architecture, OccamNet, that represents a probability distribution over functions was proposed in [20]. It uses skip-connections similar to those in DenseNet [24] and a temperature-controlled connectivity scheme, which uses the probabilistic interpretation of the softmax function by sampling sparse paths through the network, to maximize sparsity. The Mathematical Operation Network (MathONet) proposed in [22] also uses an EQL-like NN architecture. The sparse sub-graph of the NN is sought using the proposed Bayesian learning approach that incorporates structural and non-structural sparsity priors. The system was shown to be able to discover ordinary and partial differential equations from observations.
A different class of NN-based SR approaches uses deep neural network transformers such as GPT-2 [25]. These methods learn the transformer model using a large amount of training data where each sample is typically a tuple of the form (formula, data sampled from the formula). During inference, the transformer model constructs the formula based on a particular data set query. This approach has its own advantages and disadvantages, but a detailed discussion is beyond the scope of this work. We refer interested readers to [26, 27, 28, 29].
## III Problem definition
In this work, we consider a regression problem where a neural network model representing the function \(\phi:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is sought, with \(n\) the number of input variables. In particular, a maximally sparse neural network model representing a desirably concise analytic expression is trained so that it simultaneously minimizes the error observed on the training data set, maximizes its validity, and maximizes its compliance with the constraints imposed on the model (defining the model's desired physical properties). The model's validity reflects the fact that the neural network may contain units with singularities, such as the division one, which calculates \(a/b\) and exhibits a singularity at \(b=0\). Multiple types of singularity units can be considered here. We denote the set of singularity types used in the network as \(T^{s}\). The constraint satisfaction measure is calculated on a set of constraint samples generated specifically for each constraint type as proposed in [14, 15]. The set of all constraints defined for the particular SR problem is denoted as \(T^{c}\). The sparsity of the model is measured by the number of active weights and units in the neural network. Note that the ultimate goal is to generate models that not only work well on the training data, i.e., exhibit good interpolation performance, but also work well on the data sampled outside the training data domain, i.e., exhibit good extrapolation performance as well. Thus, while the model is trained on the training data plus the constraint samples representing general model's properties, the final model's performance is evaluated on the test data set that comes from different regions of the input space than the training data set.
## IV Method
This section describes the main components of N4SR, namely the architecture of the neural network, the forms of the loss function used in different stages of the run, the self-adaptive loss terms weighting scheme, the learning process procedure, and the final model selection rule.
### _Data sets used for learning_
Before we describe the method itself, we introduce the following data sets used to train the neural network model:
* Training data set \(D_{t}\) - contains data samples of the form \(\mathbf{d}_{i}=(\mathbf{x}_{i},y_{i})\), where \(\mathbf{x}_{i}\in\mathbb{R}^{n}\) is sampled from the training domain \(\mathbb{D}_{t}\).
* Validation data set \(D_{v}\) - contains data samples of the same form as \(D_{t}\) sampled from \(\mathbb{D}_{t}\) such that \(D_{v}\cap D_{t}=\varnothing\). This data set is used to choose intermediate models within the learning process, see Section IV-F, and to choose the final output model according to the proposed model selection method, see Section IV-E.
* Constraint data set \(D_{c}\) - we adopt the constraint representation and evaluation scheme as proposed in [14]. It assumes that any type of prior knowledge can be written as nonlinear inequality and equality constraints that the system must obey. Synthetic constraint samples (i.e., samples not measured on the system) are generated specifically for each constraint in \(T^{c}\) and the desired inequality or equality relation is defined on them. Then, the constraint violation error measures how much the model violates the desired inequality or equality relations over the constraint samples.
### _Architecture_
The approach we propose in this paper uses an EQL-like architecture similar to the ones introduced in [19] and [20]. It takes advantage of the skip connections so that simple structures present in shallow layers can be efficiently learned due to a direct propagation of the error gradients. Moreover, these shallow structures can be refined and reused in the subsequent layers. We also allow for using units with singularities. Contrary to the EQL\({}^{+}\), singularity units can be used at any layer of the network.
Fig. 1: Network architecture with two hidden layers and one output layer unit. The blue lines mark links with learnable weights. The red lines are the skip connections leading from the source units of layer \(k-1\) to the copy units in layer \(k\). These links are permanently set to 1. For simplicity, this scheme does not show bias links leading to every \(z\) node.

Figure 1 shows the core architecture components on an example of the neural network with two hidden layers and one output layer unit. Each hidden layer contains "learnable units" and "copy units" (the term introduced in [19]). The learnable units are units whose input weights can be tuned within the learning process (i.e., the links shown in blue). The _learnable weights_ of all learnable units are collected in the set \(W_{l}\). The copy units in layer \(k\) are copies of all units from the previous layer \(k-1\). Their weights are permanently set to 1 and are not subject to the learning process (these are the links shown in red). Units represent elementary unary functions, e.g., \(\sin\), \(\mathrm{cub}\), \(\tanh\), and binary functions such as multiplication * and division /. Each unit \(i\) in layer \(l\) calculates its output as
\[y_{i}^{l}=g(z_{i,0}^{l})\text{, for unary elementary function }g \tag{1}\]
and
\[y_{i}^{l}=h(z_{i,0}^{l},z_{i,1}^{l})\text{, for binary elementary function }h \tag{2}\]
where \(z_{i,\cdot}^{l}\) is an affine transformation of the whole previous layer's output calculated as

\[z_{i,\cdot}^{l}=\mathbf{W}_{i}^{l}\,\mathbf{y}^{l-1}+b_{i}^{l}\,. \tag{3}\]
In Figure 1, each hidden layer contains learnable units composed of a single instance of each unary and binary unit type. In general, there may be multiple instances of each unit type. Similarly, the output layer may have multiple output units, not just a single one, and they can be of any type out of the unary and binary unit types, not necessarily the identity one as used here. This depends on the SR problem solved and on the expert knowledge about the formula form sought.
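For illustration, the following minimal PyTorch-style sketch implements one such hidden layer with learnable unary and binary units plus copy units realizing the skip connections. The class layout and the particular unit mix are our own illustrative choices (identity and division units are omitted for brevity); it is not the exact implementation used in the experiments.

```python
import torch
import torch.nn as nn

class EQLLayer(nn.Module):
    # One hidden layer of an EQL-like network: learnable elementary-function
    # units followed by copy units that pass the previous layer's output
    # through unchanged (the skip connections with weights fixed to 1).
    def __init__(self, in_dim, unary_fns, n_binary):
        super().__init__()
        self.unary_fns = unary_fns            # e.g., [torch.sin, torch.tanh]
        self.n_binary = n_binary              # number of multiplication units
        n_z = len(unary_fns) + 2 * n_binary   # one z per unary, two per binary
        self.affine = nn.Linear(in_dim, n_z)  # learnable weights W_i^l, biases b_i^l

    def forward(self, y_prev):
        z = self.affine(y_prev)               # z_i^l = W_i^l y^{l-1} + b_i^l
        outs = [fn(z[:, i:i + 1]) for i, fn in enumerate(self.unary_fns)]
        k = len(self.unary_fns)
        for j in range(self.n_binary):        # binary units: multiplication
            outs.append(z[:, k + 2 * j:k + 2 * j + 1] * z[:, k + 2 * j + 1:k + 2 * j + 2])
        return torch.cat(outs + [y_prev], dim=1)  # append copy units

# Two stacked layers and a single identity output unit (a plain linear map).
l1 = EQLLayer(3, [torch.sin, torch.tanh], n_binary=1)  # outputs 3 + 3 = 6 values
l2 = EQLLayer(6, [torch.sin, torch.tanh], n_binary=1)  # outputs 3 + 6 = 9 values
out = nn.Linear(9, 1)
x = torch.randn(5, 3)
print(out(l2(l1(x))).shape)  # torch.Size([5, 1])
```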
### _Loss functions_
Throughout the run, the following three loss functions are used in the different stages of the learning process, described in Section IV-F.
\[\begin{split}\mathcal{L}_{1}&=\mathcal{L}^{t}+ \mathcal{L}^{s}\\ \mathcal{L}_{2}&=\mathcal{L}^{t}+\mathcal{L}^{s}+ \mathcal{L}^{c}\\ \mathcal{L}_{3}&=\mathcal{L}^{t}+\mathcal{L}^{s}+ \mathcal{L}^{c}+\mathcal{L}^{r}\end{split} \tag{4}\]
The loss functions are composed of the following terms:
* Training loss, \(\mathcal{L}^{t}\) - this is the root-mean-square error (RMSE) observed on the training data set \(D_{t}\).
* Singularity loss, \(\mathcal{L}^{s}\) - this term is defined as the weighted sum of scaled RMSE values \(sterm_{j}\) calculated for each singularity unit type over the aggregated data set \(D=D_{t}\cup D_{v}\cup D_{c}\). Note that all these data samples can be used to check the validity of the model as we do not need to know the required output value \(y\). Instead, just the values of the respective \(z\) node of each singularity unit are checked as to whether they take on values greater than or equal to a user-defined threshold \(\theta_{j}^{s}\) for the given singularity type \(j\). Each \(sterm_{j}\) value is divided by \(shist_{j}\), which is a scaling coefficient reflecting the current trend over the last \(sterm_{j}\) values as described in Section IV-D. The \(\mathcal{L}^{s}\) is formally defined as \[\mathcal{L}^{s}=\alpha\sum_{j\in T^{s}}\frac{sterm_{j}}{shist_{j}}\,,\qquad sterm_{j}=\sqrt{\frac{1}{|S_{j}||D|}\sum_{u\in S_{j}}\sum_{d\in D}(\mathrm{m_{j}}(\theta_{j}^{s},z_{u}))^{2}}\tag{5}\] where \(S_{j}\) is the set of all singularity units of the given type \(j\) in the model, \(\mathrm{m_{j}}(\theta_{j}^{s},z_{u})\) is a function defining a suitable error metric for the given singularity type, and \(z_{u}\) is the critical \(z\) node of the respective singularity unit \(u\) (e.g., the denominator in case of the division unit). The coefficient \(\alpha\) is determined using a self-adaptive scheme described below. We adopt the implementation of the division operation originally proposed in [18] and further extended in [19] that uses the following error metric \[\mathrm{m_{j}}(\theta_{j}^{s},z_{u})=\max(\theta_{j}^{s}-z_{u},0)\,.\] It makes use of the fact that real systems do not operate at the pole \(b\to 0\); thus, only the positive branch of the hyperbola, \(1/b\) where \(b>0\), is sufficient to represent the division, while the numerator \(a\in\mathbb{R}\) determines the sign of the division output value. Here, \(b=z_{u}\) and the threshold \(\theta_{j}^{s}=0.0001\) is used to prevent the \(z_{u}\) values from converging very close to the pole value. This representation can also be used for other units exhibiting a singularity, such as \(\log\), which is defined on the interval \((0,\infty)\) with the singularity at 0.
* Constraint loss, \(\mathcal{L}^{c}\) - this term accumulates the error of the model in terms of the prior knowledge violation. Like \(\mathcal{L}^{s}\), this term is calculated as the weighted sum of scaled RMSE values calculated for each constraint on its own specific constraint data set. Formally, \(\mathcal{L}^{c}\) is defined as \[\mathcal{L}^{c}=\beta\sum_{j\in T^{c}}\frac{cterm_{j}}{chist_{j}}\tag{6}\] where \(cterm_{j}\) is the root-mean-square error calculated for the constraint \(j\) on its constraint data set and \(chist_{j}\) is the current trend of the \(cterm_{j}\) values, see below. Again, the coefficient \(\beta\) is determined using the self-adaptive scheme.
* Regularization loss, \(\mathcal{L}^{r}\) - this term drives the learning process towards a sparse neural network representing a concise analytic expression. Here, we adopt a smoothed \(L_{0.5}\) regularization, \(L_{0.5}^{*}\), as proposed in [21]. It exhibits several good properties. It is a trade-off between \(L_{0}\) and \(L_{1}\) regularization: it represents an optimization problem that can be solved using gradient descent, contrary to \(L_{0}\), and at the same time, it enforces sparsity more strongly than \(L_{1}\) while penalizing the magnitude of the weights less. Contrary to the original \(L_{0.5}\) regularization, \(L_{0.5}^{*}\) does not suffer from the singularity in the gradient as the weights converge to 0 and uses a piece-wise function to smooth out the function at small magnitudes. For a detailed setup of \(L_{0.5}^{*}\), see [21]. The \(\mathcal{L}^{r}\) term is calculated as the sum of \(L_{0.5}^{*}\) contributions of all _active weights_, i.e., the weights that have an absolute value no less than a user-defined threshold \(\theta^{a}\)
\[\begin{split}\mathcal{L}^{r}&=\gamma\frac{rterm}{rhist }\,,\\ rterm&=\sum_{w\in W_{a}}L_{0.5}^{*}(w)\end{split} \tag{7}\]
where \(W_{a}\subseteq W_{l}\) such that \(W_{a}=\{w:w\in W_{l}\wedge\mathrm{abs}(w)\geq\theta^{a}\}\), \(rhist\) is the current trend of the \(rterm\) values, and the coefficient \(\gamma\) is determined using the self-adaptive scheme. The number of active weights is used as the _complexity measure_ of the NN model during the learning process.
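The following sketch illustrates, under simplifying assumptions, how \(\mathcal{L}_{3}\) can be assembled from the individual terms. The piecewise form of \(L_{0.5}^{*}\) follows the description in [21]; the threshold `a` and all function and argument names are illustrative, not the actual implementation.

```python
import torch

def l05_star(w, a=0.05):
    # Smoothed L0.5* penalty: |w|^(1/2) for |w| >= a, and a smooth polynomial
    # surrogate below a, so the gradient does not blow up as w -> 0. The
    # piecewise form follows the description in [21]; a is a tunable threshold.
    absw = torch.abs(w)
    poly = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return torch.where(absw >= a, absw**0.5, poly**0.5)

def loss_L3(L_t, sterms, shist, cterms, chist, rterm, rhist, alpha, beta, gamma):
    # L3 = L^t + L^s + L^c + L^r, with each raw term scaled by its recent
    # history trend and weighted by the self-adapted coefficient.
    L_s = alpha * sum(s / h for s, h in zip(sterms, shist))
    L_c = beta * sum(c / h for c, h in zip(cterms, chist))
    L_r = gamma * rterm / rhist
    return L_t + L_s + L_c + L_r

w = torch.tensor([0.3, 0.01, -0.8])
print(l05_star(w))  # large weights get ~|w|^0.5, tiny ones a smooth penalty
```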
### _Self-adaptive loss terms weighting scheme_
All three forms of the loss function involve multiple terms, which leads to a multi-objective optimization problem. Besides the fact that the terms may be competing with each other (e.g., \(\mathcal{L}^{t}\) vs. \(\mathcal{L}^{r}\), the model's precision vs. its complexity), they may also differ substantially in the scale of the values they attain. Typically, a weighted sum of the individual terms is used as the final loss to be minimized, which may lead to an unwanted dominance of certain terms.
To remedy this situation, we propose a self-adaptive method that adapts the coefficients \(\alpha\), \(\beta\), and \(\gamma\) involved in the loss terms \(\mathcal{L}^{s}\), \(\mathcal{L}^{c}\), and \(\mathcal{L}^{r}\) throughout the whole learning process in order to keep the desired ratios \(r_{s/t}=\mathcal{L}^{s}:\mathcal{L}^{t}\), \(r_{c/t}=\mathcal{L}^{c}:\mathcal{L}^{t}\), and \(r_{r/t}=\mathcal{L}^{r}:\mathcal{L}^{t}\). It uses a sliding window strategy that works with lists of values of \(\mathcal{L}^{t}\), \(\mathit{sterm}_{j}\), \(\mathit{cterm}_{j}\), and \(\mathit{rterm}\), respectively, observed in the last \(N_{w}\) iterations of the learning process. The weights \(\alpha\), \(\beta\), and \(\gamma\) are updated in each iteration according to Algorithm 1.
```
Input:  B^t  ... set of N_w last L^t values
        type ... type of the loss term to be processed; type ∈ {S, C, R}
        B    ... depending on type: the N_w last sterm_j/shist_j values for
                 each singularity unit type j ∈ T^s, or the N_w last
                 cterm_j/chist_j values for each constraint type j ∈ T^c,
                 or the N_w last rterm/rhist values
        r    ... desired ratio of the given loss term to L^t
Output: cf   ... given loss term's weighting coefficient

if type == S then acc ← Σ_{j∈T^s} mean(sterm_{j,1}/shist_j, ..., sterm_{j,N_w}/shist_j)
if type == C then acc ← Σ_{j∈T^c} mean(cterm_{j,1}/chist_j, ..., cterm_{j,N_w}/chist_j)
if type == R then acc ← mean(rterm_1/rhist, ..., rterm_{N_w}/rhist)
if acc == 0 then
    cf ← 1
else
    cf ← r · mean(B^t) / acc
return cf
```
**Algorithm 1** Function getTermWeight(\(\mathbf{B}^{t}\), \(\mathbf{B}\), \(type\), \(r\)).
First, the mean value of the respective \(N_{w}\) terms (i.e., \(\frac{sterm_{j}}{shist_{j}}\), \(\frac{cterm_{j}}{chist_{j}}\), or \(\frac{rterm}{rhist}\)) is calculated, denoted as \(acc\). If \(acc\) is zero, then the respective loss term weight is set to 1. Otherwise, the weight is set to reflect the corresponding desired ratio. The \(shist_{j}\), \(chist_{j}\), and \(rhist\) values are calculated as the mean of the \(sterm_{j}\), \(cterm_{j}\), and \(rterm\) values observed during the last \(N_{w}\) iterations. They are used to normalize the raw constraint, singularity, and regularization terms so that they all contribute to \(acc\) with values of the same magnitude. Moreover, each singularity and constraint type contributes to \(acc\) relative to its current trend and independently of the other types. Thus, if \(cterm_{j}\) equals \(chist_{j}\), then the constraint \(j\) contributes to \(acc\) with the value of 1. If \(cterm_{j}\) is less than \(chist_{j}\), then the constraint \(j\) contributes to \(acc\) with a value less than one and vice versa. It works the same for the singularity terms. Without this normalization, some singularity or constraint type might dominate within the respective loss term if its values were by orders of magnitude higher than the others.
The weights \(\alpha\), \(\beta\), and \(\gamma\) are used to scale the raw loss term values so that the actual ratios \(r_{s/t}^{*}\), \(r_{c/t}^{*}\), and \(r_{r/t}^{*}\) follow the desired ratios. After the scaling is applied, i.e., the current values of \(\mathcal{L}^{s}\), \(\mathcal{L}^{c}\), and \(\mathcal{L}^{r}\) have been calculated, it is further checked that the obtained loss terms do not exceed the maximum value for which the actual ratio is less than or equal to the desired one. If this condition is violated, the respective loss term value is set to the value for which the actual ratio \(r_{\cdot/t}^{*}\) equals the desired one \(r_{\cdot/t}\).
Note that only \(\mathit{sterm}_{j}\), \(\mathit{cterm}_{j}\), and \(\mathit{rterm}\) are scaled in each iteration. The \(\mathcal{L}^{t}\) term serves as a baseline relative to which the other terms are adjusted. Since the primary goal is to fit the training data well, each of \(r_{s/t}\), \(r_{c/t}\), and \(r_{r/t}\) should be set to a value less than 1.
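A direct Python transcription of Algorithm 1 may look as follows; the function and argument names are illustrative.

```python
from statistics import mean

def get_term_weight(hist_t, scaled_term_hists, r):
    # Self-adaptive weighting coefficient for one loss term (Algorithm 1).
    # hist_t            -- the last N_w values of L^t
    # scaled_term_hists -- per type j, the last N_w values of term_j / hist_j
    #                      (a single inner list for the regularization term)
    # r                 -- desired ratio of this loss term to L^t
    acc = sum(mean(h) for h in scaled_term_hists)
    if acc == 0:
        return 1.0
    return r * mean(hist_t) / acc

# Example: keep L^c : L^t around 0.5 given two constraint types.
beta = get_term_weight(
    hist_t=[0.40, 0.38, 0.35],
    scaled_term_hists=[[1.1, 0.9, 1.0], [0.8, 1.2, 1.0]],
    r=0.5)
print(beta)  # 0.5 * mean(L^t history) / 2.0 ≈ 0.094
```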
### _Final model selection_
During the learning process, many NN models are generated. It is important to select the best one in the end, while the pursued criteria are, in general, the model complexity and its interpolation and extrapolation performance. In [17, 18], and [19] as well as in [20] and [21], the final model selection method always builds on the assumption that a "few" extrapolation points are known, i.e., both the input variable values and the target values of the points are known. Even though the extrapolation points are not used within the learning process, the assumption that such a concrete piece of information about the desired model's performance outside the domain of the training data set is available for the final model selection makes the whole approach dependent on this type of data. This renders such approaches limited, as they cannot be used, for example, when no extrapolation points can be measured on the system by the time it is to be modeled.
Here, we propose a final model selection strategy that does not require any set of extrapolation points. Instead, it uses just the model's complexity, its validation RMSE, and the measures of its compliance with the prior knowledge and the singularity units' constraints. Note that in practice, it is much easier to define the desired "high-level" model properties than to obtain particular measurements on the system. The proposed method uses an acceptance rule where one of the following two conditions must hold in order for the new model, \(model\), to be accepted as the best-so-far model, \(model^{*}\):
1. The \(model\)'s complexity is lower than the complexity of \(model^{*}\).
2. The \(model\)'s complexity is equal to the complexity of \(model^{*}\), and for all \(j\in T^{s}\), the \(model\)'s \(sterm_{j}\) is not worse than that of \(model^{*}\), and for all \(j\in T^{c}\), the \(model\)'s \(cterm_{j}\) is not worse than that of \(model^{*}\), and the \(model\)'s validation RMSE is not worse than that of \(model^{*}\).
Thus, the final \(model^{*}\) is the least complex model found with the best values of all of the \(sterm_{j}\), \(cterm_{j}\), and the validation RMSE objectives among the models of the same minimum complexity.
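The acceptance rule can be transcribed as the following sketch; the dictionary-based model representation is an illustrative assumption, not the actual data structure used.

```python
def accept(model, best, T_s, T_c):
    # Constraint satisfaction-based acceptance rule for the final model.
    # A candidate replaces the best-so-far model if it is strictly simpler,
    # or if it has equal complexity and is no worse on every sterm_j, every
    # cterm_j, and the validation RMSE. Models are assumed to be dicts with
    # keys 'complexity', 'sterm', 'cterm', 'val_rmse' (illustrative layout).
    if best is None or model['complexity'] < best['complexity']:
        return True
    if model['complexity'] == best['complexity']:
        return (all(model['sterm'][j] <= best['sterm'][j] for j in T_s)
                and all(model['cterm'][j] <= best['cterm'][j] for j in T_c)
                and model['val_rmse'] <= best['val_rmse'])
    return False

cand = {'complexity': 8, 'sterm': {0: 0.0}, 'cterm': {0: 0.01}, 'val_rmse': 0.2}
print(accept(cand, None, T_s=[0], T_c=[0]))  # True: no best-so-far model yet
```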
### _Epoch-wise learning process_
The whole learning process, see Algorithm 2, is divided into three stages - initial, exploration-focus, and final stage.
The goal of the initial stage is to evolve the NN such that it exhibits at least partial capabilities (1) to fit the training data, (2) to satisfy the constraints imposed on the singularity units, and (3) to satisfy the constraints imposed on the desired model's properties. In the first half of this stage, the \(\mathcal{L}_{1}\) loss function is optimized, while in the rest of this stage the \(\mathcal{L}_{2}\) is optimized. The complexity of the NN does not matter in this stage. The NN then passes to the exploration-focus stage where it is further trained in an epoch-wise manner. Each epoch starts with the exploration phase, where all learnable weights \(W_{l}\) are considered, followed by the focus phase, where only active weights \(W_{a}\) are further refined. The active weights are collected in the maskWeights() function at line 18 as the weights with the absolute value no less than the threshold \(\theta^{a}\). This is the only phase of the learning process where the \(\mathcal{L}^{r}\) term is used to drive the search towards a simpler model. The final stage performs the fine-tuning of the active weights of the NN. All weights that become inactive at any iteration of this stage, line 25, are set to zero for the rest of the run and only active weights are updated in each learningStep() using \(\mathcal{L}_{2}\). Thus, the complexity of the model can only decrease in this stage. Finally, the \(model^{*}\) in the form of an analytic expression represented by the best-of-run NN model is returned.
```
Input:  Neural network with the set of learnable weights \(W_{l}\)
        \(N_{init}\), \(N_{final}\) ... number of iterations of the initial and final stage
        \(N_{w}\) ... size of the loss term weights' adaptation window
        \(N_{e}\), \(N_{f}\) ... number of iterations of the exploration and focus phase
        \(E\) ... number of the exploration-focus stage epochs
Output: Model in the form of an analytic expression represented by the final sparse network

[Algorithm body not reproduced; its three stages and the line numbers referred to in the text are described below.]
```
**Algorithm 2** N4SR algorithm
As described above, a different loss function is used in different phases of the learning process. Also, either all learnable weights or just the active ones are considered for being adjusted within the gradient descent learning step. These two things are passed as input parameters to the learningStep() function in each iteration. Depending on the loss function used, respective weighting coefficients are updated after each learning step (e.g., lines 7-8 executed after line 6). Besides the weight update itself, the learningStep() function updates several other objects. Firstly, it updates structures storing the history of the last \(N_{w}\) values of the relevant loss terms. It also updates, when applicable, the model \(m^{*}\) which serves as the seed for each epoch of the exploration-focus stage, see line 19 (and line 6 where \(m^{*}\) is initialized for the first time). Lastly, the function returns the current version of the \(model^{*}\), which is the best-so-far model with respect to the defined final model selection strategy, see Section IV-E.
The core of the learning process is the exploration-focus stage. This stage implements a restarted optimization strategy to avoid premature convergence to a potentially poor suboptimal model. It runs several epochs, each of which executes the exploration and focus phase one by one. At the beginning of each epoch, all learnable weights are set to the weights of the current \(m^{*}\), line 11. Then, the maximum acceptable RMSE observed on the validation set \(D_{v}\), \(\theta^{v}\), is calculated, line 12. It is defined as \((1+\epsilon)\) times the mean validation RMSE over all models in \(M^{*}\), which is a variable storing the final \(m^{*}\) of the last \(k\) epochs, see lines 9 and 23. The threshold \(\theta^{v}\) is used in the acceptance rule that determines whether the new NN model updated within learningStep() will be accepted as the \(m^{*}\) of the current epoch. It gets accepted iff its complexity is not higher than the current \(m^{*}\) complexity and its validation RMSE is not higher than the threshold \(\theta^{v}\). The parameter \(\epsilon\) defines a tolerance margin, i.e., how much the current \(m^{*}\) can be worse in terms of the validation RMSE than the mean of the \(k\) last \(m^{*}\) models. Here we use \(\epsilon=0.5\).
During the whole exploration phase, all weights in \(W_{l}\) are optimized with respect to the \(\mathcal{L}_{2}\) loss function, line 14. After the exploration phase, the focus phase is carried out. Each iteration of this phase starts with extracting the set of active weights \(W_{a}\), line 18, which are then optimized using \(\mathcal{L}_{3}\). Once the focus phase has been completed, \(M^{*}\) is updated using the current \(m^{*}\).
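To summarize the control flow, the following Python skeleton reproduces only the staging of the learning process; the stub functions stand in for the gradient steps, weight masking, history bookkeeping, and model selection described above and are not the actual implementation.

```python
def n4sr_run(net, N_init=2000, E=3, N_e=20, N_f=100, N_final=1000):
    # Staging skeleton of Algorithm 2: initial, exploration-focus, final.
    best = None
    for it in range(N_init):                  # initial stage
        loss = 'L1' if it < N_init // 2 else 'L2'
        best = learning_step(net, loss, weights='all', best=best)
    for _ in range(E):                        # exploration-focus stage
        restart_from_seed(net)                # reset W_l to the epoch seed m*
        for _ in range(N_e):                  # exploration: all weights, L2
            best = learning_step(net, 'L2', weights='all', best=best)
        for _ in range(N_f):                  # focus: active weights only, L3
            best = learning_step(net, 'L3', weights=mask_weights(net), best=best)
    for _ in range(N_final):                  # final stage: fine-tune, L2
        best = learning_step(net, 'L2', weights=mask_weights(net), best=best)
    return best

# Minimal stubs so the skeleton runs; a real implementation would perform a
# gradient step, track loss histories, and apply the model selection rule.
def learning_step(net, loss, weights, best): return best
def mask_weights(net, theta_a=0.01): return [w for w in net if abs(w) >= theta_a]
def restart_from_seed(net): pass

print(n4sr_run(net=[0.5, 0.002, -0.3]))  # None: the stubs never accept a model
```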
## V Experiments
### _Algorithms compared_
In this study, we experiment with three different methods. We compare multiple variants of N4SR, an alternative neural network-based method from the literature, and a genetic programming-based algorithm.
The N4SR algorithms are divided into variants denoted as N4SR-WSCL with:
* the adaptive (A) or static (S) loss terms weighting scheme. In the static variant, the weighting coefficients \(\alpha\), \(\beta\), and \(\gamma\) do not adapt. Instead, they are determined at the moment when they are used for the first time and stay constant for the rest of the run. In particular, coefficient \(\alpha\) is calculated at the very first iteration of the run using the current \(\mathcal{L}^{t}\) and \(\mathit{sterm}_{j}\) values as \(\alpha=r_{s/t}\cdot\mathcal{L}^{t}/\sum_{j\in T^{s}}\mathit{sterm}_{j}\). Similarly, coefficient \(\beta\) is calculated at the first iteration of the second phase of the initial stage, line 8 of Algorithm 2, using the current \(\mathcal{L}^{t}\) and \(\mathit{cterm}_{j}\) values as \(\beta=r_{c/t}\cdot\mathcal{L}^{t}/\sum_{j\in T^{c}}\mathit{cterm}_{j}\). Coefficient \(\gamma\) is calculated at the first iteration of the first pass through the focus phase, line 22 of Algorithm 2, using the current \(\mathcal{L}^{t}\) and \(\mathit{rterm}\) values as \(\gamma=r_{r/t}\cdot\mathcal{L}^{t}/\mathit{rterm}\). Then, the loss terms \(\mathcal{L}^{s}\), \(\mathcal{L}^{c}\), and \(\mathcal{L}^{r}\) are calculated using modified (5), (6), and (7) without the normalization, i.e., with \(\mathit{shist}_{j}=1\), \(\mathit{chist}_{j}=1\), and \(\mathit{rhist}=1\).
* the constraint satisfaction-based (C) final model selection rule, defined in Section IV-E, or an extrapolation-based (E) final model selection strategy. The extrapolation-based strategy is a modification of the one defined in Section IV-E such that only the \(\mathit{model}\)'s complexity and its RMSE observed on a few extrapolation points are considered.
* denoting whether the constraints are used (Y) or not (N) within the learning process. The variant without constraints works according to Algorithm 2 with the modification that the loss functions \(\mathcal{L}_{2}\) and \(\mathcal{L}_{3}\) do not involve the \(\mathcal{L}^{c}\) term. It also implies that this variant can work only with the extrapolation-based final model selection rule.
* the epoch-wise (E) or a single-epoch (S) learning used within the exploration-focus stage. The single-epoch variant goes through a single epoch of the exploration-focus stage with \(N_{f}\) set so that the total number of iterations spent in this stage is the same as for the epoch-wise variant.
EQL\({}^{\div}\) - the EQL algorithm with division units and the improved model selection method working with the extrapolation-validation data set, proposed in [18]. We used the publicly available implementation2.
Footnote 2: https://github.com/martius-lab/EQL
mSNGP-LS - the multi-objective SNGP using the local search procedure to estimate coefficients of the evolved linear models, proposed in [14].
### _Evaluation data sets_
The following data sets were used to evaluate models obtained with the compared algorithms:
* \(\bar{D}_{e}\) - contains data samples of the same form as \(D_{t}\) sampled from the extrapolation domain. We use the extrapolation domain \(\mathbb{D}_{e}\) to denote the parts of the problem's input space whose samples are either only very sparsely present in \(D_{t}\) and \(D_{v}\) or are entirely omitted from these data sets. This data set is not used for learning. It is used just to select the final output model according to the methods that rely on a few known samples measured in the extrapolation domain. This is the case of the EQL\({}^{\div}\) algorithm and the N4SR variants with S \(=\texttt{E}\).
* \(D_{i}\) - contains data samples of the same form as \(D_{t}\) sampled from \(\mathbb{D}_{t}\) such that \(D_{i}\cap(D_{t}\cup D_{v})=\varnothing\). This data set is used to evaluate the models' interpolation performance.
* \(D_{e}\) - contains data samples of the same form as \(D_{t}\) sampled from the extrapolation domain \(\mathbb{D}_{e}\). This data set is used to evaluate the models' extrapolation performance.
### _Test problems_
The proposed method was experimentally evaluated on the following four problems. We chose these problems since we possess detailed knowledge of the data and the desired properties of the models sought. Moreover, we can illustrate interesting application scenarios for these problems, such as using very sparse or unevenly distributed training data. Standard data-driven modeling approaches fail to generate physically plausible models when only such insufficient data sets are available.
#### V-C1 TurtleBot
This problem is to find a discrete-time model of a real physical system, the two-wheel TurtleBot 2 mobile robot (Figure 2).
The robot's state is captured by the state vector \(\mathbf{x}=(x_{pos},y_{pos},\phi)^{\top}\), with \(x_{pos}\) and \(y_{pos}\) the robot's position coordinates and \(\phi\) the robot's heading. The control input is \(\mathbf{u}=(v_{f},v_{a})^{\top}\), with \(v_{f}\) and \(v_{a}\) the desired forward and angular velocity, respectively. In this work, we model only the \(x_{pos}\) component of the robot's motion model since (1) the \(y_{pos}\) component is analogous and (2) it is more illustrative than the \(\phi\) component as there are more types of prior knowledge defined for \(x_{pos}\). The model has the form of the following nonlinear difference equation
\[x_{pos,k+1}=f^{x_{pos}}(x_{pos,k},y_{pos,k},\phi_{k},v_{f,k},v_{a,k})\,,\]
where \(k\) denotes the discrete time step.
**Data sets**. We used the data sets introduced in [15], which were collected during the operation of the real robot. Five sequences of samples starting from the initial state \(\mathbf{x_{0}}=(0,0,0)^{\top}\) were generated with a sampling period \(T_{s}=0.2\,\)s. In each sequence, we steered the robot by random inputs drawn from the domain \(v_{f}\in[0,0.3]\,\)m\(\,\)s\({}^{-1}\), \(v_{a}\in[-1,1]\,\)rad\(\,\)s\({}^{-1}\). Of these five sequences, a randomly chosen one was used to create the training data set, another one was used for the validation data set, and the remaining three sequences were used for the test data sets.
**Prior knowledge**. We use the prior knowledge defined for the TurtleBot in [15]. All three prior knowledge types that the \(x_{pos}\) variable should comply with are of the invariant type. This means that when the model for the \(x_{pos}\) variable is evaluated on the relevant constraint sample, it should always output the value equal to its original value. The following three types of prior knowledge about the \(x_{pos}\) were used:
1. Steady-state behavior: If the control inputs, \(v_{f}\) and \(v_{a}\), are set to zero, then the robot may change neither its position nor its heading. This is represented by the following equality constraint: \[x_{pos}=f^{x_{pos}}(x_{pos},y_{pos},\phi,0,0)\,.\]
2. Axis-parallel moves: If the robot moves parallel to the \(y\)-axis, then its \(x_{pos}\) does not change. This is represented by the following equality constraints: \[x_{pos}=f^{x_{pos}}(x_{pos},y_{pos},-\pi/2,v_{f},0)\,,\] \[x_{pos}=f^{x_{pos}}(x_{pos},y_{pos},\pi/2,v_{f},0)\,.\]
3. Turning on the spot: If the forward velocity is zero, the robot may not change its position. This is represented by the following equality constraint: \[x_{pos}=f^{x_{pos}}(x_{pos},y_{pos},\phi,0,v_{a})\,.\]
The values of the state variables \(x_{pos}\), \(y_{pos}\), \(\phi\), and of the control inputs \(v_{f}\) and \(v_{a}\) were randomly sampled within the same limits as for the training data. We generated 50 constraint samples for each prior knowledge type, so 150 samples in total.
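As an illustration, the steady-state constraint can be evaluated on randomly generated constraint samples as in the following sketch; the sampling ranges and the example model are illustrative assumptions, since the text only states that sampling follows the training data limits.

```python
import math, random

def cterm_steady_state(f_xpos, n_samples=50):
    # RMSE of the steady-state invariant x_pos = f(x_pos, y_pos, phi, 0, 0)
    # over randomly generated constraint samples. The ranges below are
    # illustrative; the text samples within the training-data limits.
    errs = []
    for _ in range(n_samples):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        phi = random.uniform(-math.pi, math.pi)
        pred = f_xpos(x, y, phi, 0.0, 0.0)   # zero control inputs
        errs.append((pred - x) ** 2)         # invariant: output equals x_pos
    return math.sqrt(sum(errs) / n_samples)

# A model that is exactly invariant under zero inputs yields cterm == 0.
model = lambda x, y, phi, vf, va: x + 0.171 * vf * math.sin(phi + 1.552)
print(cterm_steady_state(model))  # 0.0
```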
#### V-C2 Magnetic manipulation
The magnetic manipulation system, magman, consists of an iron ball moving on a rail and an electromagnet placed at a fixed position under the rail (Figure 3).
The goal is to find a model of the nonlinear magnetic force affecting the ball, \(f(x)\), as a function of the horizontal distance, \(x\), between the iron ball and the electromagnet, given a constant current \(i\) through the coil. We use data measured on a real system and an empirical model \(\tilde{f}(x)=-ic_{1}x/(x^{2}+c_{2})^{3}\) proposed in the literature [30] as the _reference model_, see Figure 4. Parameters \(c_{1}\) and \(c_{2}\) were found empirically for the given system.
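For reference, the empirical model can be implemented directly. The current \(i\) and the coefficient values below are placeholders, as the actual \(c_{1}\) and \(c_{2}\) were identified empirically for the given system and are not stated here.

```python
def magman_reference(x, i=0.6, c1=5.52e-10, c2=1.75e-4):
    # Empirical magman reference model f~(x) = -i*c1*x / (x^2 + c2)^3 from [30].
    # The current i and the coefficients c1, c2 are placeholder values; the
    # actual ones were identified empirically for the real system.
    return -i * c1 * x / (x**2 + c2)**3

# Sign pattern required by the prior knowledge: positive for x < 0, negative
# for x > 0, and zero at the origin.
print(magman_reference(-0.01) > 0, magman_reference(0.0), magman_reference(0.01) < 0)
```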
**Data sets**. The region of operation considered for the magman spans the interval \(-0.075\,\)m\(\,\leq x\leq 0.075\,\)m. However, only a small part of it, \(\mathbb{D}_{t}=[-0.027,0.027]\,\)m, is covered by the data measured on the real system [31], see Figure 4. A proper form of the model outside the sampled interval is governed by the constraints imposed on the model, see below. The whole data set of 858 measured samples was split into two sets in the ratio of 7:3. The larger one was further split into the \(D_{t}\) and \(D_{v}\) sets so that \(|D_{t}|=400\) and \(|D_{v}|=201\). The smaller one was used for the test interpolation data set \(D_{i}\). Additionally, two data sets, \(\bar{D}_{e}\) and \(D_{e}\), with samples from the extrapolation domain \(\mathbb{D}_{e}=[-0.075,-0.027]\cup[0.027,0.075]\,\)m were generated, with the sizes \(|\bar{D}_{e}|=40\) and \(|D_{e}|=200\). The target values of the \(\bar{D}_{e}\) and \(D_{e}\) samples were determined using the reference model.

Fig. 2: TurtleBot mobile robot. A schematic (a) and a photo of the system (b).

Fig. 3: Magman: A schematic (a) and a photo of the system (b).

Fig. 4: Training data and the reference model for the magman problem.
**Prior knowledge**. Five types of prior knowledge were defined. The model sought returns positive values on the interval \([-0.075,0]\,\)m and negative ones on the interval \([0,0.075]\,\)m. It is monotonically increasing on the intervals \([-0.075,-0.02]\,\)m and \([0.02,0.075]\,\)m and monotonically decreasing on the interval \([-0.006,0.006]\,\)m. Finally, the model's function goes through the origin, i.e., \(f(0)=0\), and approaches zero at negative and positive infinity, respectively, which is represented by the constraints \(f(-0.075)=10^{-3}\) and \(f(0.075)=-10^{-3}\), respectively. The constraint set contains 50 samples for the positive values, negative values, increasing monotonicity, and decreasing monotonicity plus 3 samples for the desired exact values, resulting in 203 constraint samples.
#### V-C3 Resistors
This problem was proposed in [8] to test an SR method based on genetic programming with formal constraints. Originally, it considers a sparse set of noisy samples derived using the equivalent resistance of two resistors in parallel, \(r=r_{1}r_{2}/(r_{1}+r_{2})\), used here as a _reference model_. The goal is to find a model \(f(r_{1},r_{2})\) that fits the training data and exhibits the same properties as the _reference model_, see below.
**Data sets**. Here, we use two variants of the data set used within the learning process, one with only 10 samples and the other one with 500 samples. The values of \(r_{1}\) and \(r_{2}\) are sampled uniformly from the interval [0.0001, 20]\(\,\Omega\). The target values are disturbed with noise randomly generated from a normal distribution \(\mathcal{N}(0,0.05\,\sigma_{y})\), where \(\sigma_{y}\) is the standard deviation of the original output values obtained with the _reference model_. When the large set is used, it is split into \(D_{t}\) and \(D_{v}\) in the ratio of 7:3. When the smaller one is used, eight samples go to \(D_{t}\) and the remaining two samples form \(D_{v}\). The interpolation test set \(D_{i}\) is sampled from the training domain as well. Two data sets are sampled from the extrapolation domain \(\mathbb{D}_{e}=[20,40]^{2}\): \(\bar{D}_{e}\) with 40 samples and \(D_{e}\) with 500 samples.
**Prior knowledge**. We used the following three prior knowledge types as defined in [8] and used as well in [14]:
* symmetry with respect to arguments: \(f(r_{1},r_{2})=f(r_{2},r_{1})\),
* domain-specific constraint: \(r_{1}=r_{2}\Longrightarrow f(r_{1},r_{2})=\frac{r_{1}}{2}\),
* domain-specific constraint: \(f(r_{1},r_{2})\leq r_{1}\), \(f(r_{1},r_{2})\leq r_{2}\).
The constraint set contains a total of 150 constraint samples, 50 for each constraint.
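The three constraint types can be checked on a single constraint sample as in the following sketch; the function names are illustrative.

```python
def resistors_reference(r1, r2):
    # Equivalent resistance of two resistors in parallel (reference model).
    return r1 * r2 / (r1 + r2)

def constraint_errors(f, r1, r2):
    # Violation of the three resistors prior-knowledge constraints for a
    # candidate model f at one constraint sample (r1, r2).
    sym = abs(f(r1, r2) - f(r2, r1))                         # symmetry in arguments
    diag = abs(f(r1, r1) - r1 / 2)                           # f(r, r) = r/2
    bound = max(f(r1, r2) - r1, 0) + max(f(r1, r2) - r2, 0)  # f <= min(r1, r2)
    return sym, diag, bound

# The reference model satisfies all three constraints exactly.
print(constraint_errors(resistors_reference, 4.0, 12.0))  # (0.0, 0.0, 0)
```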
#### V-C4 Anti-lock braking system - magic formula
This problem considers the control of an anti-lock braking system, and particularly the longitudinal force \(F(\kappa)\) as a function of the wheel slip \(\kappa\). The force \(F(\kappa)\) is described by the 'magic' formula of the following form
\[F(\kappa)=m\,g\,d\sin(c\arctan(b(1-e)\kappa+e\arctan(b\kappa)))\,, \tag{8}\]
where \(b\), \(c\), \(d\) and \(e\) are road surface-specific constants. The magic formula is an empirical model commonly used to simulate steady-state tire forces and moments. By adjusting the function coefficients, the same special function can be used to describe longitudinal and lateral forces (sine function) and the self-aligning moment (cosine function). Here, we consider the _reference model_ used in [32] with \(m=407.75\,\)kg, \(g=9.81\,\)m\(\,\)s\({}^{-2}\), and the slip force parameters \((b,c,d,e)=(55.56,1.35,0.4,0.52)\), see Figure 5, which corresponds to wet asphalt with a water level of 3 mm [33].
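A direct implementation of the reference model (8) with the stated parameters reads as follows.

```python
import math

def magic_formula(kappa, b=55.56, c=1.35, d=0.4, e=0.52, m=407.75, g=9.81):
    # Longitudinal tire force F(kappa) for wet asphalt (water level 3 mm),
    # with the reference-model parameters used in [32].
    return m * g * d * math.sin(
        c * math.atan(b * (1 - e) * kappa + e * math.atan(b * kappa)))

print(magic_formula(0.0))             # 0.0: zero force at zero slip
print(round(magic_formula(0.05), 1))  # force near the peak of the curve
```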
**Data sets**. A data set of 110 samples was generated using the reference model (8). The whole set was divided into \(D_{t}\) and \(D_{v}\) in the ratio of 4:1. The data are intentionally sampled unevenly. The steep left region and the flat right region, with \(\kappa\) in the intervals [0, 0.02] and [0.2, 0.99], respectively, represent the interpolation domain \(\mathbb{D}_{i}\) and are densely sampled with 50 samples each. Target values of these samples are disturbed with noise randomly generated from a normal distribution \(\mathcal{N}(0,0.0025)\). In contrast, the peak of the function, with \(\kappa\) values in the interval [0.03, 0.1], is covered very sparsely in the data with only 10 samples. Moreover, a larger noise drawn from \(\mathcal{N}(0,0.005)\) is added to the target values. This corresponds to the fact that, in reality, the system is unstable around the peak and it is hard to collect precise data there. Thus, the peak represents the extrapolation domain \(\mathbb{D}_{e}\) in the sense that it is rather poorly defined by the data. Again, the data deficiency is compensated in our method by the use of prior knowledge, see below. Additionally, three data sets for the models' performance evaluation were used: \(D_{i}\) of size 200 sampled from \(\mathbb{D}_{i}\), and data sets \(\bar{D}_{e}\) and \(D_{e}\) of sizes 40 and 100, respectively, sampled from \(\mathbb{D}_{e}\). The target values of these samples were generated as the noiseless output of the reference model.

Fig. 5: Magic formula reference model and the data sets. (a) Training and validation data. (b) Interpolation test data and extrapolation data.
**Prior knowledge**. Three types of prior knowledge were defined for this problem, reflecting the key properties of the model sought. The model should return zero for \(\kappa=0\). Further, in the right part of \(\mathbb{D}_{i}\), the model is monotonically decreasing and approaching from above a certain value in the limit. So, its second derivative in this region should be positive. The last property of the model is that it has a single maximum located within \(\mathbb{D}_{e}\). This can be described by a constraint that enforces the model to be concave everywhere in \(\mathbb{D}_{e}\). The constraint set contains a total of 101 constraint samples, 1 sample for the exact value at \(\kappa=0\) and 50 samples for each of the other two constraints.
We briefly illustrate the implementation of constraints on the monotonically decreasing function with a positive second derivative property. As described in Section IV-A, a set of \(N_{j}\) constraint samples is generated for each constraint \(j\). Here, the samples have the form \((\kappa_{l},\kappa_{c},\kappa_{r})\), where \(\kappa_{c}\) is randomly sampled in \(\mathbb{D}_{e}\) and \(\kappa_{l}=\kappa_{c}-\delta\), \(\kappa_{r}=\kappa_{c}+\delta\), and \(\delta=0.001\). For each such constraint sample \(k\), the error value \(e_{k}\) is calculated as
\[e_{k}=\max(f(\kappa_{r})-f(\kappa_{c}),0)+\max(f(\kappa_{c})-f(\kappa_{l}),0)+\max\big((f(\kappa_{c})-f(\kappa_{r}))-(f(\kappa_{l})-f(\kappa_{c})),0\big)\,,\]

where the first two terms penalize violations of the decreasing-monotonicity property on the given constraint data triple \((\kappa_{l},\kappa_{c},\kappa_{r})\) and the last term penalizes a negative second derivative observed on the constraint sample. Then, the corresponding \(cterm_{j}\) is calculated as the root-mean-square error over all \(e_{k}\) observed for the constraint.
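The error \(e_{k}\) can be transcribed directly; the example model used below is an arbitrary decreasing convex function, chosen only to show that a compliant model incurs zero error.

```python
def decreasing_convex_error(f, kappa_c, delta=0.001):
    # Constraint error e_k for one sample (kappa_l, kappa_c, kappa_r):
    # penalizes any local increase of f and any negative second derivative.
    kl, kr = kappa_c - delta, kappa_c + delta
    fl, fc, fr = f(kl), f(kappa_c), f(kr)
    e = max(fr - fc, 0) + max(fc - fl, 0)   # decreasing monotonicity
    e += max((fc - fr) - (fl - fc), 0)      # positive second derivative
    return e

# f(kappa) = 1/kappa is decreasing with a positive second derivative.
print(decreasing_convex_error(lambda k: 1.0 / k, 0.5))  # 0
```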
### _Experiment set up_
#### V-D1 Network architecture
We used a network architecture with three hidden layers, denoted as the 'general' architecture. The first two hidden layers contain four elementary functions \(\{\sin,\tanh,\mathrm{ident},*\}\), each with two copies. The third hidden layer contains, in addition to that, one division unit. The output layer contains a single identity unit ident. Note that the ident unit calculates a weighted sum of its inputs, so it can realize both the addition and subtraction units. The same architecture was used for all test problems but the magic one. There we used a function set with \(\mathrm{arctan}\) instead of \(\tanh\) in order to make the set-up consistent with the magic formula reference model (8). Moreover, for experiments with the resistors problem, we also used an architecture with a limited function set \(\{\mathrm{ident},*,/\}\), denoted as the 'informed' architecture, which corresponds to the one used in [14]. Its name reflects the fact that we know the minimum set of elementary functions needed to compose the correct formula. The first two hidden layers contain 4 copies of the multiplication and \(\mathrm{ident}\) unit each. The third hidden layer contains 3 copies of each of those two unit types. The number of units was chosen so that the numbers of learnable weights of the two architectures were as close as possible. Thus, the first and the second architecture comprise 396 and 403 learnable weights in total, respectively.
#### V-D2 Algorithms' configuration
The algorithms were tested with the following parameter setting:
* N4SR: \(N_{init}=2000\), \(N_{e}=20\), \(N_{f}=980\) and \(E=87\) for epoch-wise variants, \(N_{f}=86980\) and \(E=1\) for single epoch variants, \(N_{final}=1000\), \(N_{w}=10\), \(r_{s/t}=0.5\), \(r_{c/t}=0.5\), \(r_{r/t}=0.5\). The parameters are chosen so that the total number of iterations is always \(T=90000\).
* EQL\({}^{\div}\): We adopted the configuration used in [18] with the exception that the total number of iterations \(T\) was set to 90000. The topology of the network was set the same as in the corresponding N4SR experiments, with the exception that the division unit is not in the third hidden layer. Instead, it is the only unit of the output layer, which is a design feature of EQL\({}^{\div}\).
* mSNGP-LS: We adopted the configuration used in [14] with a few modifications. The set of elementary functions is set to comply with the set of elementary functions used in the N4SR networks for the given problem. The maximum model complexity is bounded from above by two parameters, the maximum number of features \(n_{f}=5\) and the maximum feature depth \(\delta=3\), used with the same values for all test problems. Note that the maximum feature depth is equal to the number of hidden layers of the N4SR networks. Thus, when the features are aggregated in the final model, its maximum possible depth is the same as the maximum depth of the N4SR models.
#### V-D3 Experiments evaluation
Fifty independent runs were carried out for each tested pair of the method and the training data set. The best model is returned at the end of each run according to the model selection strategy used. For the mSNGP-LS, we adopt the selection method proposed in [15]. It uses two performance metrics -- the RMSE calculated on the validation data set \(D_{v}\) and the constraint violation error observed on the validation constraint data set. The validation constraint data set is generated in the same way as the constraint samples used in N4SR methods and the constraint violation error is calculated as the sum of \(cterm_{j}\) values for all constraints in \(T^{c}\). Then, the mSNGP-LS method chooses among all models in the last population of the run the model that has the best validation RMSE out of all models with the constraint violation value less than the population's median.
On the turtlebot problem, the models' performance is presented using a simulation RMSE, which is calculated as the root-mean-square error between \(x_{pos,k+1}\) and \(\hat{x}_{pos,k+1}\) for all points \(k=1\ldots|D|-1\) in the data sequence \(D\), where \(\hat{x}_{pos,k+1}\) is the value predicted by the model according to \(\hat{x}_{pos,k+1}=f^{x_{pos}}(\hat{x}_{pos,k},y_{pos,k},\phi_{k},v_{f,k},v_{a,k})\).
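The simulation RMSE can be computed by rolling the model out along a test sequence while feeding back its own \(x_{pos}\) predictions, as in the following sketch; the sequence layout (a list of per-step dictionaries) is an illustrative assumption.

```python
import math

def simulation_rmse(f_xpos, seq):
    # Roll the model out from the first measured state, feeding its own x_pos
    # predictions back in, while y_pos, phi, v_f, v_a come from the data.
    # seq is a list of dicts with keys 'x', 'y', 'phi', 'vf', 'va'.
    x_hat, sq_errs = seq[0]['x'], []
    for k in range(len(seq) - 1):
        s = seq[k]
        x_hat = f_xpos(x_hat, s['y'], s['phi'], s['vf'], s['va'])
        sq_errs.append((x_hat - seq[k + 1]['x']) ** 2)
    return math.sqrt(sum(sq_errs) / len(sq_errs))

# Usage with an identified model of the form reported in the results section.
model = lambda x, y, phi, vf, va: x + 0.171 * vf * math.sin(0.993 * phi + 1.552)
toy_seq = [{'x': 0.0, 'y': 0.0, 'phi': 0.0, 'vf': 0.2, 'va': 0.0},
           {'x': 0.034, 'y': 0.0, 'phi': 0.0, 'vf': 0.2, 'va': 0.0}]
print(simulation_rmse(model, toy_seq))
```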
Finally, the median values of the following performance measures over the fifty best-of-run models are presented:
* complexity - model complexity defined as the number of active weights and the number of active units, respectively.
* \(RMSE_{int}\), \(RMSE_{ext}\), \(RMSE_{int+ext}\) - RMSE calculated on the test data \(D_{i}\), \(D_{e}\), and \(D_{i}\cup D_{e}\), respectively.
* \(RMSE_{test}\), \(RMSE_{sum}\) - simulation RMSE values calculated on the turtlebot problem, and the \(RMSE_{sum}\) value calculated as the sum of all \(RMSE_{test}\) values.
* \(N_{nt}\) - the number of runs in which the method yields a nontrivial model, i.e., a model with more than one active link.
### _Results_
In this section, tabular results are presented for all of the compared methods, accompanied by examples of models produced by the N4SR variants. In the tables, the N4SR-ACYE is highlighted in bold as this is the N4SR variant that corresponds to the proposed method.
#### V-E1 TurtleBot
Table I shows results obtained with the compared algorithms on the turtlebot problem.
Fig. 6: Examples of the turtlebot simulation trajectories generated using the \(f^{x_{pos}}(\cdot)\) models obtained with N4SR-ACYE (a)-(c) and N4SR-AENE (d)-(f), respectively, on the three test data sets. Out of all best-of-run models collected over all runs for each experiment, a trajectory of the median model w.r.t. \(RMSE_{sum}\) (shown in red) and a trajectory of the model with the best \(RMSE_{sum}\) value (shown in green) are presented. The reference ground truth trajectory is shown in blue.
We can see that the variants with the adaptive weighting scheme have significantly better validation RMSE than the static ones. However, they all perform comparably on the test sequences. The adaptive variants also generate significantly simpler models in terms of the number of active weights and active units. The proposed constraint satisfaction-based final model selection method works comparably to the extrapolation-based one; see N4SR-ACYE vs. N4SR-AEYE and N4SR-ACYS vs. N4SR-AEYS. This is an important observation that demonstrates the ability of the method to reliably identify the final model even when no extrapolation domain samples are provided. Further, we observe a clear benefit of using prior knowledge for learning. When no prior knowledge is used, poor models are produced, see the N4SR-SENE, N4SR-AENE, and EQL\({}^{\div}\) models. Finally, the best-performing method on this problem is mSNGP-LS. We discuss a possible reason for the difference between N4SR and mSNGP-LS in Section V-F.
Figure 6 shows examples of trajectories generated with selected models. Particularly, the median and the best models w.r.t. \(RMSE_{sum}\) produced by N4SR-ACYE (Figure 6a-c) and N4SR-AENE (Figure 6d-f) are presented. These plots clearly illustrate the benefits of using prior knowledge. One can see that both the median and the best model produced by N4SR-ACYE generate trajectories that accurately imitate the shape of the ground truth one. Moreover, the trajectories generated with the best model have a minimal offset from the ground truth one along the whole trajectory. In contrast, trajectories generated with the models produced by N4SR-AENE exhibit larger discrepancies in terms of both the shape and the offset.
The raw analytic expression represented by the best N4SR-ACYE model is
\[x_{pos,k+1}=-0.365\,\big(-0.470\,\sin(0.993\,\phi_{k}+1.552)\big)\,\big(0.994\,v_{f,k}\big)+1.00008\,x_{pos,k}\,,\]
and can be further simplified to
\[x_{pos,k+1}=0.171\,v_{f,k}\,\sin(0.993\,\phi_{k}+1.552)+1.00008\,x_{pos,k}\,.\]
We can see that the best N4SR-AENE model, represented by the following simplified analytic expression, is much more complex:
\[\begin{split}x_{pos,k+1}&=0.170-0.088\,v_{f,k}+1.0003\,x_{pos,k}\\ &\quad+0.169\,\big(0.555\,\sin(-0.140\,\phi_{k}^{2}+0.871\,\phi_{k}\,v_{f,k}+0.087\,\phi_{k}+2.587\,v_{f,k}+0.281)+0.204\big)\\ &\qquad\times\big(1.309\,v_{f,k}+0.053\,y_{pos,k}-0.223\,(0.535\,\phi_{k}+1.588)\,(0.271\,\phi_{k}-1.691\,v_{f,k}+0.489)\\ &\qquad\quad+0.096\,\sin(-0.140\,\phi_{k}^{2}+0.871\,\phi_{k}\,v_{f,k}+0.087\,\phi_{k}+2.587\,v_{f,k}+0.281)\big)\,.\end{split}\]
#### V-E2 Magnetic manipulation
The results obtained on the magman problem are shown in Table II. We can see that N4SR-ACYE and N4SR-ACYS produce models with the best performance on the extrapolation data, as their \(RMSE_{ext}\) is better by an order of magnitude than that of EQL\({}^{\div}\), mSNGP-LS, and the N4SR variants using the static weighting scheme. Only models obtained with N4SR-AEYE and N4SR-AEYS exhibit better extrapolation performance, but this can be attributed to the fact that these variants have certain knowledge of the models' extrapolation performance already when selecting the final model of each run. N4SR-ACYE performs best, even better than mSNGP-LS, in terms of the overall \(RMSE_{int+ext}\) metric. Again, the constraint satisfaction-based final model selection performs equally to the extrapolation-based one.
All N4SR variants but N4SR-AENE exhibit the same behavior in that they are vulnerable to collapsing to a trivial model with only one active weight, namely that of the output unit's bias link. Only about 40% of the runs end up with nontrivial models. N4SR-AENE finds a nontrivial model in all runs, but these fit well only in the interpolation domain. Outside the interpolation domain, the models go wild since the method cannot use any helpful information to direct the search towards better models.
Interestingly, EQL\({}^{\div}\) models achieve comparable performance to N4SR and mSNGP-LS in terms of the RMSE metrics. This can also be attributed to the feature that the models are selected based on the few known extrapolation points \(\bar{D}_{e}\). Despite their rather good test RMSE values, the models are still not truly useful as they do not comply with the desired properties defined for the extrapolation domain, see Figure 7c. Moreover, the EQL\({}^{\div}\) models are much more complex than the N4SR ones.
Figure 7 also shows models generated by N4SR-SCYE, N4SR-ACYE, and mSNGP-LS. Again, the median and the best models w.r.t. \(RMSE_{int+ext}\) over the set of the nontrivial models are presented. One can see that neither N4SR-SCYE nor mSNGP-LS can reliably produce nontrivial models that are perfect in terms of the increasing-monotonicity constraint. In particular, N4SR-SCYE failed to generate such a model in all runs and mSNGP-LS succeeded only in 3 out of 50 runs. Contrary to that, only one nontrivial model produced by N4SR-ACYE did not comply with this monotonicity constraint. For illustration, we show the best N4SR-ACYE model
\[f(x)=-0.713\,\sin(4.729\,\tanh(8.035\,\tanh(5.835\,x)))\] \[+1.149\,\tanh(8.035\,\tanh(5.835\,x))\] \[-1.863\,\tanh(2.850\,\tanh(8.035\,\tanh(5.835\,x)))\]
and the best N4SR-SCYE model
\[f(x)=-1.210\,\sin(4.755\,\tanh(46.011\,x)-0.027)\] \[-1.448\,\tanh(1.220\,\tanh(46.011\,x))\,.\]
#### V-B3 Resistors
Results obtained on the resistors problem are shown in Table III for the general architecture and in Table IV for the informed architecture. The first observation is that the NN architecture matters. If the topology contains only the units necessary to compose the desired expression, then the N4SR methods produce significantly better models, both in terms of test accuracy and model complexity. N4SR-ACYE is able to find very good models even with only ten training samples measured in the interpolation domain. Moreover, N4SR-ACYE is much better than N4SR-SCYE, and its models are simpler. This again demonstrates the advantage of the adaptive weighting strategy over the static one. On this problem, the N4SR methods are stable, as only a few runs yield the trivial model. Again, the constraint satisfaction-based model selection method is comparable to the extrapolation-based one. Interestingly, the N4SR variants without prior knowledge perform much better than EQL\({}^{\div}\), which completely fails to find good models; note that both methods use the extrapolation data for choosing the final model. Altogether, mSNGP-LS performs best on this problem, except when the informed architecture is used with the large training data set, where the best method is N4SR-ACYE.
Fig. 7: Examples of models generated for the magman problem with the N4SR-SCYE (a), N4SR-ACYE (b), EQL\({}^{\div}\) (c), and mSNGP-LS (d) methods. Out of all best-of-run models collected over all runs for each experiment, the median model w.r.t. \(RMSE_{int+ext}\) (shown in red) and the model with the best \(RMSE_{int+ext}\) value (shown in green) are presented.
| \(|D_{t}\cup D_{v}|\) | Method | complexity | \(N_{nt}\) | \(RMSE_{int}\) | \(RMSE_{ext}\) | \(RMSE_{int+ext}\) |
|---|---|---|---|---|---|---|
| 500 | N4SR-ACYE | 11 / 3 | 50 | 0.050 | 0.128 | 0.097 |
| 500 | N4SR-ACYS | 9 / 4 | 47 | 0.297 | 0.956 | 0.766 |
| 10 | N4SR-ACYE | 22 / 6 | 50 | 0.610 | 1.290 | 1.038 |
| 10 | N4SR-ACYS | 21 / 7 | 50 | 0.620 | 1.830 | 1.386 |
| 500 | N4SR-AEYE | 11 / 3 | 50 | 0.061 | 0.076 | 0.069 |
| 500 | N4SR-AEYS | 9 / 4 | 47 | 0.287 | 0.887 | 0.702 |
| 500 | N4SR-AENE | 36.5 / 9 | 50 | 0.085 | 0.702 | 0.502 |
| 10 | N4SR-AEYE | 22 / 6 | 50 | 0.623 | 1.157 | 0.948 |
| 10 | N4SR-AEYS | 21 / 7 | 50 | 0.657 | 1.749 | 1.311 |
| 10 | N4SR-AENE | 24 / 7 | 50 | 1.040 | 2.370 | 1.850 |
| 500 | N4SR-SCYE | 47 / 10 | 45 | 0.074 | 0.504 | 0.221 |
| 500 | N4SR-SCYS | 50 / 11 | 45 | 0.074 | 0.360 | 0.258 |
| 500 | N4SR-SENE | 47 / 11 | 44 | 0.091 | 0.781 | 0.556 |
| 10 | N4SR-SCYE | 50 / 10 | 47 | 0.351 | 0.467 | 0.432 |
| 10 | N4SR-SCYS | 48 / 11 | 47 | 0.398 | 0.508 | 0.469 |
| 10 | N4SR-SENE | 29 / 8 | 45 | 1.050 | 2.450 | 1.900 |
| 500 | EQL\({}^{\div}\) | 30 / 18 | 50 | 0.550 | 8.460 | 6.010 |
| 500 | mSNGP-LS | NA | 50 | 0.023 | 0.032 | 0.029 |
| 10 | EQL\({}^{\div}\) | 30 / 18 | 50 | 18.820 | 17.420 | 22.810 |
| 10 | mSNGP-LS | NA | 50 | 0.008 | 0.009 | 0.008 |

TABLE IV: Results obtained with the EQL\({}^{\div}\), mSNGP-LS, and N4SR methods with the informed architecture on the resistors problem. The complexity is given as the number of active links / number of active units.
Fig. 8: Examples of models generated for the resistors problem with the N4SR-ACYE method. (a) is the reference model, (b) shows models attained when using the learning data set of size \(|D_{t}\cup D_{v}|=500\), and (c) shows models attained with the learning data set of size 10. Out of all best-of-run models collected over all runs for each experiment, the median model w.r.t. \(RMSE_{int+ext}\) (shown in red) and the model with the best \(RMSE_{int+ext}\) value (shown in green) are presented.
Figure 8 demonstrates the performance of the median and the best models w.r.t. \(RMSE_{int+ext}\) generated by the N4SR-ACYE method using the small and large training data sets, respectively. It shows the difference between the N4SR model and the reference model on the whole domain. One can see that the models do well on both the interpolation and the extrapolation domain, even when trained on the small data set. The raw analytic expression represented by the best N4SR-ACYE model is
\[f(r_{1},r_{2})=0.767\,\frac{-1.012\,\left(-0.745\,r_{1}+1.080\,r_{2}\right)\left(-0.962\,r_{1}+0.695\,r_{2}\right)}{2.346\,r_{1}+2.346\,r_{2}}+0.237\,r_{1}+0.248\,r_{2}\,,\]
which can be rewritten as
\[f(r_{1},r_{2})=(0.0004\,r_{1}^{2}+2.347\,r_{1}\,r_{2}-0.0002\,r_{2} ^{2})\] \[/(2.346\,r_{1}+2.346\,r_{2})\,.\]
Note that this is not the exact reference expression, but it is a close approximation of the reference model; see Section V-F for more details.
#### V-B4 Anti-lock braking system - magic formula
Results obtained on the magic problem are summarized in Table V. The overall best method is again mSNGP-LS. As for N4SR, one can observe that the variants using the static weighting method slightly outperform those using the adaptive scheme in terms of the RMSE measures. Also, the number of nontrivial models is about twice as high for the variants with the static weighting method. This indicates that setting the loss term weights just once for the whole run was sufficient in this case. However, the models generated with the static method are larger than those produced with the adaptive scheme.
On the one hand, the single-epoch learning strategy works as well as the epoch-wise one when used together with the static weighting scheme (compare N4SR-SCYS to N4SR-SCYE). On the other hand, it becomes much worse than the epoch-wise learning strategy when used with the adaptive weighting scheme (compare N4SR-ACYS to N4SR-ACYE). Its median performance then becomes even slightly worse than that of the EQL\({}^{\div}\) method. Still, its above-average models are much better than the EQL\({}^{\div}\) ones; see the green curves in Figures 9(c) and 9(d).
Another observation is that even the N4SR-AENE method performs comparably to N4SR-ACYE in terms of the RMSE measures. However, a detailed visual inspection reveals imperfections of the N4SR-AENE models. In Figure 9(a), one can see that neither the median nor the best model fully complies with the constraints imposed on the model. On the contrary, N4SR-ACYE provides the most reliable models; see Figure 9(b). For illustration, we show the raw analytic expression of the best N4SR-ACYE model
\[f(\kappa)=-0.429\,\sin\big(-0.593\,\left(-1.470\,\arctan(-31.132\,\kappa-0.126)-0.741\right)\] \[\left(1.462\,\arctan(-31.132\,\kappa-0.126)+0.753\right)\] \[+1.5\,\arctan(-31.132\,\kappa-0.126)\big)\]
and its equivalent obtained by a symbolic simplification
\[f(\kappa)=-0.429\,\sin(1.275\,\arctan(-31.132\,\kappa-0.126)^{2}\] \[+2.799\,\arctan(-31.132\,\kappa-0.126)+0.331)\,.\]
### _Discussion_
The results obtained with the proposed N4SR-ACYE method are very promising. We observed that the method is capable of finding sparse and accurate models that exhibit the desired properties defined as prior knowledge for the given problem. Nevertheless, we also observed that the GP-based mSNGP-LS approach often outperforms N4SR-ACYE. Here, we analyze the results and propose a hypothesis on why this is so.
A detailed inspection of the final models obtained with mSNGP-LS on the turtlebot problem revealed that the analytic expressions of the best-performing models are composed of simple elementary structures. By 'simple', we mean that the arithmetic operators and the \(\sin\) and \(\tanh\) functions at the lower levels of the expression tree operate on 'raw' variables (i.e., variables weighted by a coefficient of 1). An example is the best mSNGP-LS model
\[x_{pos,k+1}=1.0011\,x_{pos,k}-0.0833\,\sin(\phi_{k}-v_{f,k})\] \[+0.0815\,\sin(\phi_{k}+v_{f,k})\] \[+0.0018\,\sin(\phi_{k}+\sin(v_{a,k}))\] \[+0.0029\,\tanh(v_{a,k}+1.0)-0.003\]
composed of the simple elementary structures \(\sin(\phi_{k}-v_{f,k})\), \(\sin(\phi_{k}+v_{f,k})\), \(\sin(\phi_{k}+\sin(v_{a,k}))\), and \(\tanh(v_{a,k}+1.0)\). Note that mSNGP-LS works in a 'bottom-up' manner. It starts the search process with many such elementary structures readily available and tries to combine them optimally into the final expression. Thus, it works with building blocks that are already there and 'just' searches for their best combination.
On the contrary, it is hard for NN-based SR approaches of the N4SR type to converge to expressions like this. The method can be seen as a 'top-down' approach. It starts with the complete NN topology, where each function unit has its \(z\) node(s) realized as a random affine transformation of _all_ the previous layer's unit outputs. Moreover, the learned weights are randomly initialized to rather small values, thus far from the desired value of 1. The method then has to carefully eliminate all the useless units and all useless inputs to each active function unit through many iterations of the gradient-based optimization process to arrive at a sparse model composed of simple elementary structures. This leads to models like the following:
\[x_{pos,k+1}=-0.3654\,(-0.4699\sin(\mathbf{0.9933}\,\phi_{k}\] \[+1.5518))\,\mathbf{0.9935}\,v_{f,k}+\mathbf{1.00008}\,x_{pos,k}\,.\]
This is the best N4SR-ACYE model on the turtlebot problem. We can see that the variables' multiplication coefficients are set to values very close to but not exactly one.
Another analysis revealed that on the resistors problem, N4SR-ACYE could hardly find the models that perfectly fit the expression of the reference model. In particular, only 3 out of 50 runs of N4SR-ACYS converged to the model
Fig. 9: Examples of models generated for the magic problem with the N4SR-AENE (a), N4SR-ACYE (b), N4SR-ACYS (c), and EQL\({}^{\div}\) (d) methods. Out of all best-of-run models collected over all runs for each experiment, the median model w.r.t. \(RMSE_{int+ext}\) (shown in red) and the model with the best \(RMSE_{int+ext}\) value (shown in green) are presented. |
2308.05650 | Asymptotic-preserving neural networks for multiscale
Vlasov-Poisson-Fokker-Planck system in the high-field regime | The Vlasov-Poisson-Fokker-Planck (VPFP) system is a fundamental model in
plasma physics that describes the Brownian motion of a large ensemble of
particles within a surrounding bath. Under the high-field scaling, both
collision and field are dominant. This paper introduces two
Asymptotic-Preserving Neural Network (APNN) methods within a physics-informed
neural network (PINN) framework for solving the VPFP system in the high-field
regime. These methods aim to overcome the computational challenges posed by
high dimensionality and multiple scales of the system. The first APNN method
leverages the micro-macro decomposition model of the original VPFP system,
while the second is based on the mass conservation law. Both methods ensure
that the loss function of the neural networks transitions naturally from the
kinetic model to the high-field limit model, thereby preserving the correct
asymptotic behavior. Through extensive numerical experiments, these APNN
methods demonstrate their effectiveness in solving multiscale and high
dimensional uncertain problems, as well as their broader applicability for
problems with long time duration and non-equilibrium initial data. | Shi Jin, Zheng Ma, Tian-ai Zhang | 2023-08-10T15:47:51Z | http://arxiv.org/abs/2308.05650v1 | Asymptotic-preserving neural networks for multiscale Vlasov-Poisson-Fokker-Planck system in the high-field regime
###### Abstract
The Vlasov-Poisson-Fokker-Planck (VPFP) system is a fundamental model in plasma physics that describes the Brownian motion of a large ensemble of particles within a surrounding bath. Under the high-field scaling, both collision and field are dominant. This paper introduces two Asymptotic-Preserving Neural Network (APNN) methods within a physics-informed neural network (PINN) framework for solving the VPFP system in the high-field regime. These methods aim to overcome the computational challenges posed by high dimensionality and multiple scales of the system. The first APNN method leverages the micro-macro decomposition model of the original VPFP system, while the second is based on the mass conservation law. Both methods ensure that the loss function of the neural networks transitions naturally from the kinetic model to the high-field limit model, thereby preserving the correct asymptotic behavior. Through extensive numerical experiments, these APNN methods demonstrate their effectiveness in solving multiscale and high dimensional uncertain problems, as well as their broader applicability for problems with long time duration and non-equilibrium initial data.
## 1 Introduction
The Vlasov-Poisson-Fokker-Planck (VPFP) system is a fundamental kinetic model in plasma physics, which takes into account the interactions between electrons and a surrounding bath via the Coulomb force [6]. It consists of a Liouville equation with a Fokker-Planck operator in velocity space, along with a Poisson equation corresponding to the electrostatic force [43]. The high-field limit of the kinetic equations characterizes a usual scenario where the external field, such as an electric field, is strong enough to balance the collision term [14]. Section 2 provides a detailed exposition of the VPFP system and its high-field limit.
The high dimensionality of the phase space in the kinetic equations (six dimensions plus time in the general case) poses a significant challenge in simulating the VPFP system. Moreover, the uncertainties of the kinetic models can introduce even more dimensions [16, 37, 38, 18]. The uncertainties may arise from the collision kernels, scattering coefficients, initial or boundary data, source or forcing terms, and so on, which are important for the application of kinetic models to practical systems [2, 3, 15, 22]. Many efforts have been devoted to tackling the curse of dimensionality. Classical techniques, such as the Monte Carlo
methods [13, 31], are hindered by their low-order accuracy or slow convergence rates. Recently, machine learning, especially deep neural networks (DNNs), has shown potential in resolving partial differential equations (PDEs) of high dimensionality [21, 23, 30, 24, 33]. The key idea is to approximate the objective functions by DNNs and optimize the network parameters by minimizing a loss function encoding the PDEs and the initial-boundary conditions. Much work has been devoted to building suitable loss functions for DNN methods. Popular methods include the Physics-Informed Neural Networks (PINNs) [40] and the Deep Galerkin Method (DGM) [41], which minimize the \(L^{2}\)-residual of the PDEs and the initial-boundary conditions. The Deep Ritz Method [47] was designed to solve PDEs with variational structures by leveraging their Ritz formulation. Moreover, the new area of operator learning has emerged, which involves developing novel learning frameworks to approximate the solution operator of PDEs. Such techniques include deep operator networks (DeepONets) [7, 33], neural operators [23, 28], and associated variants tailored for physics problems [29, 44]. For supplementary information on machine learning methods for solving PDEs, interested readers are advised to consult the comprehensive review [45]. Applications of these methods to kinetic equations are detailed in references [8, 11, 27, 29, 32].
Besides the curse of dimensionality, the multiscale nature of the VPFP system with its high-field limit poses another challenge. General DNN methods, like PINNs, have difficulties in dealing with multiscale problems [17]. Specifically, when the scale parameter is small, PINNs only capture the leading-order term related to a single scale in the loss function [20], and thus fail to capture the correct asymptotic limit equation of the VPFP system in the high-field regime, as shown in Section 3. Asymptotic-Preserving (AP) schemes are effective numerical schemes for multiscale problems: they preserve the continuous asymptotic limit in a numerically uniformly stable way [15]. In recent years, AP schemes have been widely adopted for kinetic and hyperbolic equations with multiple time and space scales [14], and have also been applied to plasmas [10] and the high-field regime [4, 9, 19, 48]. However, classical AP schemes still encounter challenges in high-dimensional cases.
To address both the curse of dimensionality and the multiscale difficulty, the primary goal of this work is to develop DNN methods with the AP property, applied to the VPFP system in the high-field regime. We propose two Asymptotic-Preserving Neural Network (APNN) methods for the VPFP system. The first is based on the micro-macro model of the VPFP system [9]. By reformulating the loss function of the vanilla PINN method based on the micro-macro decomposition, we construct an associated loss function with the AP property. This approach has been successfully applied to the linear transport equations [20], the steady radiative transfer equations [34], and others [17, 26]. However, it necessitates initial data close to the local Maxwellian, since its foundation is the micro-macro decomposition. Since kinetic models are most useful when the system is far from equilibrium [19], we try to remove this restriction. Hence, we propose the second APNN method, which is free from the explicit expression of the local Maxwellian and the requirement of equilibrium initial data. This method is based on the mass conservation law and has the potential for broader applicability to cases with long time duration or non-equilibrium initial data.
Although our numerical experiments are conducted in one-dimensional physical and velocity spaces, we do include examples involving high-dimensional uncertainties to demonstrate the ability of our APNNs to handle high-dimensional problems.
The paper is organized as follows: Section 2 introduces the VPFP system and its high-field limit, along with the micro-macro model of the VPFP system. Section 3 presents our two proposed APNN methods and demonstrates their AP property. In Section 4, we evaluate and compare the performance of the two proposed APNN methods through a series of numerical examples. We conclude the paper in Section 5.
## 2 The VPFP system and its high-field limit
This section introduces the Vlasov-Poisson-Fokker-Planck (VPFP) system and its high-field limit. The micro-macro model for the VPFP system is also presented, which provides a suitable framework for deriving the high-field limit.
### The VPFP system
In kinetic equations, a strong external field, such as the electric or gravitational field, often exists and counteracts the collision term, leading to the high-field limit [5]. In electrostatic plasmas for example, the electrons interact with the surrounding bath through the Coulomb force. Thus the time evolution of the electron distribution function \(f:(t,x,v)\in\mathbb{R}_{+}\times\mathbb{R}^{N}\times\mathbb{R}^{N}\to \mathbb{R}_{+}\) is governed by the VPFP equations, with the action of a self-consistent potential \(\phi(t,x)\). Specifically, the VPFP system is:
\[\partial_{t}f+v\cdot\nabla_{x}f-\frac{1}{\varepsilon}\nabla_{x}\phi\cdot\nabla _{v}f=\frac{1}{\varepsilon}\nabla_{v}\cdot[vf+\nabla_{v}f]:=\frac{1}{ \varepsilon}\mathcal{Q}(f), \tag{2.1a}\] \[-\triangle_{x}\phi(t,x)=\rho(t,x)-h(x), \tag{2.1b}\]
where
\[\rho(t,x)=\int_{\mathbb{R}^{N}}f(t,x,v)\mathrm{d}v \tag{2.2}\]
is the density of electrons. \(\mathcal{Q}\) is defined as linear Fokker-Planck operator:
\[\mathcal{Q}(f)(t,x,v)=\nabla_{v}\cdot\left[vf(t,x,v)+\nabla_{v}f(t,x,v)\right]. \tag{2.3}\]
The function \(h(x)\) represents a given positive background charge density. Hence the assumed global neutrality relation is:
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}f^{0}(x,v)\mathrm{d}x\;\mathrm{d}v =\int_{\mathbb{R}^{N}}h(x)\mathrm{d}x. \tag{2.4}\]
The parameter \(\varepsilon\) denotes the ratio between the mean free path and the Debye length. The limiting process \(\varepsilon\to 0\) indicates the high-field limit of the VPFP system. In the high-field regime, the strong forcing term balances the Fokker-Planck diffusion term [1], which is different from the low-field limit (or named as parabolic limit) [39].
### The high-field limit
In this section, we give the limit equation of the VPFP system (2.1a)-(2.1b) as \(\varepsilon\to 0\). First, integrating the Vlasov equation (2.1a) over \(v\) in \(\mathbb{R}^{N}\) gives:
\[\partial_{t}\int_{\mathbb{R}^{N}}f\,\mathrm{d}v+\nabla_{x}\cdot\int_{\mathbb{R}^{N}}vf\,\mathrm{d}v-\frac{1}{\varepsilon}\int_{\mathbb{R}^{N}}\nabla_{v}\cdot(\nabla_{x}\phi\,f)\,\mathrm{d}v=\frac{1}{\varepsilon}\int_{\mathbb{R}^{N}}\nabla_{v}\cdot(vf+\nabla_{v}f)\,\mathrm{d}v. \tag{2.5}\]
Integrating by parts, one has:
\[\partial_{t}\rho+\nabla_{x}\cdot j=0, \tag{2.6}\]
where the flux \(j\) is defined as:
\[j=\int_{\mathbb{R}^{N}}vf(t,x,v)\mathrm{d}v. \tag{2.7}\]
Next, one multiplies the Vlasov equation (2.1a) by \(v\) then integrates over \(\mathbb{R}^{N}\), and takes the limit \(\varepsilon\to 0\), yielding:
\[0=\int_{\mathbb{R}^{N}}\left[f\nabla_{x}\phi+vf+\nabla_{v}f\right]\mathrm{d}v. \tag{2.8}\]
Since \(\int_{\mathbb{R}^{N}}\nabla_{v}f\,\mathrm{d}v=0\), \(\int_{\mathbb{R}^{N}}vf\,\mathrm{d}v=j\), and \(\int_{\mathbb{R}^{N}}f\nabla_{x}\phi\,\mathrm{d}v=\rho\nabla_{x}\phi\), rearranging this equation gives:
\[j=-\rho\left(\nabla_{x}\phi\right). \tag{2.9}\]
Substituting it in equation (2.6) yields the high-field limit equation:
\[\left\{\begin{array}{l}\partial_{t}\rho-\nabla_{x}\cdot(\rho\nabla_{x}\phi)= 0,\\ -\Delta_{x}\phi=\rho-h(x).\end{array}\right. \tag{2.10}\]
The rigorous proof for the high-field limit of the VPFP system in one-dimension can be found in [12, 36].
### The micro-macro model and the high-field limit
This section presents the micro-macro model, from which the high-field limit can be readily derived. First, an equivalent formulation of the VPFP system (2.1a)-(2.1b) is:
\[\partial_{t}f+v\cdot\nabla_{x}f =\frac{1}{\varepsilon}\nabla_{v}\cdot[(v+\nabla_{x}\phi)f+ \nabla_{v}f]:=\frac{1}{\varepsilon}\mathscr{L}f, \tag{2.11}\] \[-\Delta_{x}\phi =\rho-h(x).\]
The linear operator \(\mathscr{L}\) retains the formulation of a Fokker-Planck operator dependent on \(\nabla_{x}\phi\). The null space of \(\mathscr{L}\) is:
\[\mathcal{N}(\mathscr{L})=\text{Span}\{\mathscr{M}\}=\{f=\rho\mathscr{M}, \text{where}\,\rho:=\langle f\rangle\}, \tag{2.12}\]
where \(\mathscr{M}\) is the shifted Maxwellian depending on \((t,x)\) through the potential \(\phi\) in the form:
\[\mathscr{M}(t,x,v)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{|v+\nabla_{x}\phi(t, x)|^{2}}{2}\right). \tag{2.13}\]
Notation \(\langle f\rangle\) is defined as:
\[\langle f\rangle=\int_{\mathbb{R}^{N}}f(v)dv. \tag{2.14}\]
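As a concrete illustration of the shifted Maxwellian (2.13) and the velocity average (2.14), the following small NumPy sketch (ours; one-dimensional case, truncated velocity domain) may be helpful:

```python
# Shifted Maxwellian (2.13) for N = 1 and the velocity average <.> of (2.14),
# approximated on a truncated velocity domain with a simple Riemann sum.
import numpy as np

def shifted_maxwellian(v, dphi_dx):
    """M(t, x, v); dphi_dx is the local value of the potential gradient."""
    return np.exp(-(v + dphi_dx) ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

v = np.linspace(-8.0, 8.0, 401)
M = shifted_maxwellian(v, dphi_dx=0.3)
dv = v[1] - v[0]
print(M.sum() * dv)  # ~1.0: the shifted Maxwellian integrates to one
```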
The range of \(\mathscr{L}\) is:

\[\mathscr{R}(\mathscr{L})=\mathcal{N}(\mathscr{L})^{\perp}=\{f\ \text{such that}\ \langle f\rangle=\int_{\mathbb{R}^{N}}f\,\mathrm{d}v=0\}. \tag{2.15}\]
The idea for the micro-macro model begins with the decomposition of \(f\) into the equilibrium part and the non-equilibrium part [9, 25]:
\[f=\rho\mathscr{M}+\varepsilon g, \tag{2.16}\]
where \(\rho\mathscr{M}\) is the equilibrium part of \(f\). The non-equilibrium part \(g\) satisfies \(\langle g\rangle=0\), which implies \(g\in\mathscr{R}(\mathscr{L})\). The orthogonal projector \(\Pi\) in \(L^{2}\left(\mathscr{M}^{-1}dv\right)\) onto \(\mathcal{N}(\mathscr{L})\) is defined by:
\[\Pi\varphi=\langle\varphi\rangle\mathscr{M}. \tag{2.17}\]
In deriving the micro-macro model, we first consider the one-dimensional VPFP system formulated as:
\[\partial_{t}f+v\partial_{x}f=\frac{1}{\varepsilon}\partial_{v}\left[(v+ \partial_{x}\phi)f+\partial_{v}f\right]=:\frac{1}{\varepsilon}\mathscr{L}f. \tag{2.18}\]
Applying the micro-macro decomposition (2.16) to the equation (2.18) yields:
\[\partial_{t}(\rho\mathcal{M})+\varepsilon\partial_{t}g+v\partial_{x}(\rho \mathcal{M})+\varepsilon v\partial_{x}g=\mathcal{L}g. \tag{2.19}\]
Operating \((I-\Pi)\) on equation (2.19) gives:
\[(I-\Pi)\left[\partial_{t}(\rho\mathcal{M})+v\partial_{x}(\rho\mathcal{M}) \right]+\varepsilon(I-\Pi)\left[\partial_{t}g+v\partial_{x}g\right]=(I-\Pi) \mathcal{L}g.\]
Since \(\Pi(g)=\Pi\left(\partial_{t}g\right)=\Pi(\mathcal{L}g)=0\)[9], and
\[\Pi\left[\partial_{t}(\rho\mathcal{M})\right]=\Pi\left[\mathcal{M}\partial_{ t}\rho-\rho\left(\partial_{xt}\phi\right)(v+\partial_{x}\phi)\mathcal{M} \right]=\mathcal{M}\partial_{t}\rho,\]
one gets
\[\varepsilon\partial_{t}g+\varepsilon(I-\Pi)\left(v\partial_{x}g\right)= \mathcal{L}g-(I-\Pi)\left(v\partial_{x}(\rho\mathcal{M})\right)-\rho( \partial_{t}\mathcal{M}). \tag{2.20}\]
Then the \(\Pi\) projection (2.17) is applied to the equation (2.19)
\[\partial_{t}\rho+\partial_{x}\langle v(\rho\mathcal{M})\rangle+\varepsilon\partial_{x}\langle vg\rangle=0. \tag{2.21}\]
Therefore, one reformulates the original VPFP system (2.1a)-(2.1b) as:
\[\begin{cases}\partial_{t}g+(v\partial_{x}g-\partial_{x}\langle vg\rangle\mathcal{M})=\frac{1}{\varepsilon}\{\mathcal{L}g-[v\partial_{x}(\rho\mathcal{M})-\partial_{x}\langle v(\rho\mathcal{M})\rangle\mathcal{M}]-\rho(\partial_{t}\mathcal{M})\},\\ \partial_{t}\rho+\partial_{x}\langle v(\rho\mathcal{M})\rangle+\varepsilon\partial_{x}\langle vg\rangle=0,\\ -\partial_{x}^{2}\phi=\rho-h,\\ \langle g\rangle=0.\end{cases} \tag{2.22}\]
The equivalence between this model and the original VPFP system (2.1a)-(2.1b) has been demonstrated in [9]. Note that the conservation condition \(\langle g\rangle=0\) is usually satisfied by classical mesh-based AP schemes, but it fails to hold for DNN-based methods unless it is incorporated into the loss function, as done in [17].
Now, the formal derivation of the high-field limit model can also be done easily from the micro-macro model (2.22). Taking the limit as \(\varepsilon\to 0\) in the second equation of the micro-macro model (2.22) yields:
\[\partial_{t}\rho+\partial_{x}\langle v(\rho\mathcal{M})\rangle=0. \tag{2.23}\]
Substituting the explicit expression for \(\mathcal{M}\) in the equation (2.23) reformulates it as:
\[\partial_{t}\rho-\partial_{x}(\rho\partial_{x}\phi)=0. \tag{2.24}\]
This results in the limit model:
\[\begin{cases}\partial_{t}\rho-\partial_{x}(\rho\partial_{x}\phi)=0,\\ -\partial_{x}^{2}\phi=\rho-h.\end{cases} \tag{2.25}\]
It is exactly the high-field limit model (2.10) in one-dimension, which justifies the AP property of the micro-macro model.
Finally, as shown in [9, 25], the initial and periodic boundary conditions imposed on \(f\) can be directly translated into conditions on \(\rho\) and \(g\).
## 3 Two formulations of asymptotic-preserving neural networks (APNNs)
We seek to approximate the solution of the VPFP system by neural networks. Under the framework of PINNs and other neural network-based methods, loss functions encoding the PDE constraints are minimized to converge to solutions. PINNs are our starting point, but vanilla DNN approximations face limitations in tackling multiscale problems where both microscopic and macroscopic dynamics are involved. To deal with this difficulty, asymptotic-preserving neural networks (APNNs) have been proposed [17]. The key idea is to design a loss with the AP property. Specifically, the loss function automatically transitions from the kinetic system to its asymptotic limit system as the scale parameter \(\varepsilon\to 0\). In practice, the original multiscale system is first reformulated into an equivalent model with the AP property. This modified form is then embedded into the vanilla PINN loss function, with initial and boundary conditions treated as regularization terms.
To improve the vanilla-PINN approximations of the multiscale VPFP system, we propose two types of loss functions with the AP property. The constructions of the novel AP loss functions are inspired by the derivations of the high-field limit models in Sections 2.2 and 2.3, respectively.
Consider the VPFP system with initial and boundary conditions over a bounded domain as an approximation of the original system (2.1a)-(2.1b), in which \((t,x,v)\in\mathcal{T}\times\mathcal{D}\times\Omega\) and \(\Omega\) is symmetric in \(v\):
\[\begin{cases}\partial_{t}f+v\cdot\nabla_{x}f=\frac{1}{\varepsilon}\mathcal{L }f,&(t,x,v)\in\mathcal{T}\times\mathcal{D}\times\Omega,\\ -\Delta_{x}\phi(t,x)=\rho(t,x)-h(x),&(t,x)\in\mathcal{T}\times\mathcal{D},\\ \mathcal{B}f=F_{\rm B},&(t,x,v)\in\mathcal{T}\times\partial\mathcal{D}\times \Omega,\\ \mathcal{B}\phi=\Phi_{\rm B},&(t,x)\in\mathcal{T}\times\partial\mathcal{D}\\ \mathcal{I}f=f_{0},&(t,x,v)\in\{t=0\}\times\mathcal{D}\times\Omega,\\ \mathcal{I}\phi=\phi_{0},&(t,x)\in\{t=0\}\times\mathcal{D}\end{cases} \tag{3.1}\]
where \(F_{\rm B},f_{0},\Phi_{\rm B},\phi_{0}\) are given functions; \(\partial\mathcal{D}\) is the boundary of \(\mathcal{D}\); and \(\mathcal{B},\mathcal{I}\) are boundary and initial operators, respectively. In the following sections, the limitations of vanilla PINNs for the multiscale VPFP system will be explained. We will also provide the detailed constructions of the two proposed AP loss functions and formally prove their AP properties.
### The failure of PINNs to resolve small scales
In this section, the deficiency of the vanilla PINN is demonstrated in the case of a small scale parameter \(\varepsilon\). First, we use DNNs to parameterize the two functions \(f\) and \(\phi\), and denote the set of neural network parameters by \(\gamma\) for brevity. Then two neural networks are implemented:
\[f_{\gamma}^{\rm NN}(t,x,v):=\log\left(1+\exp\left(\tilde{f}_{\gamma}^{\rm NN}(t,x,v)\right)\right)\approx f(t,x,v),\ \ \text{and}\ \ \phi_{\gamma}^{\rm NN}(t,x)\approx\phi(t,x). \tag{3.2}\]
Here \(\tilde{f}_{\gamma}^{\rm NN}\) and \(\phi_{\gamma}^{\rm NN}\) are both fully-connected neural networks. Note that this setting for \(f_{\gamma}^{\rm NN}\) guarantees the non-negativity of the distribution function \(f\).
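A minimal sketch of this parameterization, assuming a PyTorch implementation (the layer sizes and activation are our choices, not specified in the paper):

```python
# Softplus output transform of (3.2): f^NN = log(1 + exp(f~^NN)) >= 0.
import torch
import torch.nn as nn

class PositiveNet(nn.Module):
    """Fully-connected network whose output is passed through softplus,
    so the predicted distribution function stays non-negative."""
    def __init__(self, in_dim=3, width=64, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        layers.append(nn.Linear(d, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, txv):  # txv: (batch, 3) points (t, x, v)
        return nn.functional.softplus(self.net(txv))  # log(1 + exp(.))

f_net = PositiveNet()
assert (f_net(torch.rand(8, 3)) >= 0).all()  # non-negativity by construction
```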
The vanilla PINN loss is constructed from the least squares of the residuals of the original VPFP system,
combined with penalty terms encoding the initial and boundary conditions:
\[\begin{split}\mathcal{R}^{\varepsilon}_{\text{PINN}}&=\frac{\mu_{1}^{\text{Vlasov}}}{\left|\mathcal{T}\times\mathcal{D}\times\Omega\right|}\int_{\mathcal{T}}\int_{\mathcal{D}}\int_{\Omega}\left|\varepsilon\partial_{t}f_{\gamma}^{\text{NN}}+\varepsilon v\partial_{x}f_{\gamma}^{\text{NN}}-\mathcal{L}f_{\gamma}^{\text{NN}}\right|^{2}\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\mu_{1}^{\text{Poisson}}}{\left|\mathcal{T}\times\mathcal{D}\right|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|-\partial_{x}^{2}\phi_{\gamma}^{\text{NN}}-\left(\rho_{\gamma}^{\text{NN}}-h\right)\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\mu_{2}^{f}}{\left|\mathcal{T}\times\partial\mathcal{D}\times\Omega\right|}\int_{\mathcal{T}}\int_{\partial\mathcal{D}}\int_{\Omega}\left|\mathcal{B}f_{\gamma}^{\text{NN}}-F_{\text{B}}\right|^{2}\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t+\frac{\mu_{2}^{\phi}}{\left|\mathcal{T}\times\partial\mathcal{D}\right|}\int_{\mathcal{T}}\int_{\partial\mathcal{D}}\left|\mathcal{B}\phi_{\gamma}^{\text{NN}}-\Phi_{\text{B}}\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\mu_{3}^{f}}{\left|\mathcal{D}\times\Omega\right|}\int_{\mathcal{D}}\int_{\Omega}\left|\mathcal{I}f_{\gamma}^{\text{NN}}-f_{0}\right|^{2}\mathrm{d}v\,\mathrm{d}x+\frac{\mu_{3}^{\phi}}{\left|\mathcal{D}\right|}\int_{\mathcal{D}}\left|\mathcal{I}\phi_{\gamma}^{\text{NN}}-\phi_{0}\right|^{2}\mathrm{d}x,\end{split} \tag{3.3}\]
where \(\mu_{1}\), \(\mu_{2}\) and \(\mu_{3}\) are the penalty weights to be tuned. For brevity, a formal notation is introduced:
\[\mathcal{R}^{\varepsilon}_{\text{PINN}}:=\mu_{1}\mathcal{R}^{\varepsilon}_{ \text{residual}}+\mu_{2}\mathcal{R}^{\varepsilon}_{\text{bc}}+\mu_{3} \mathcal{R}^{\varepsilon}_{\text{ic}}. \tag{3.4}\]
We now examine whether this PINN loss possesses the AP property. We only need to focus on the residual term of (3.3):
\[\begin{split}\mathcal{R}^{\varepsilon}_{\text{residual}}=& \frac{1}{\left|\mathcal{T}\times\mathcal{D}\times\Omega\right|} \int_{\mathcal{T}}\int_{\mathcal{D}}\int_{\Omega}\left|\varepsilon\partial_{t }f_{\gamma}^{\text{NN}}+\varepsilon v\partial_{x}f_{\gamma}^{\text{NN}}- \mathcal{L}f_{\gamma}^{\text{NN}}\right|^{2}\ \mathrm{d}v\ \mathrm{d}x\mathrm{d}t\\ &+\frac{1}{\left|\mathcal{T}\times\mathcal{D}\right|}\int_{ \mathcal{T}}\int_{\mathcal{D}}\left|-\partial_{x}^{2}\phi_{\gamma}^{\text{NN}} -\left(\rho_{\gamma}^{\text{NN}}-h\right)\right|^{2}\ \mathrm{d}x\mathrm{d}t.\end{split} \tag{3.5}\]
Taking \(\varepsilon\to 0\), formally this leads to:
\[\begin{split}\mathcal{R}^{\varepsilon}_{\text{residual}}=& \frac{1}{\left|\mathcal{T}\times\mathcal{D}\times\Omega\right|} \int_{\mathcal{T}}\int_{\mathcal{D}}\int_{\Omega}\left|\mathcal{L}f_{\gamma}^{ \text{NN}}\right|^{2}\ \mathrm{d}v\ \mathrm{d}x\mathrm{d}t\\ &+\frac{1}{\left|\mathcal{T}\times\mathcal{D}\right|}\int_{ \mathcal{T}}\int_{\mathcal{D}}\left|-\partial_{x}^{2}\phi_{\gamma}^{\text{NN}} -\left(\rho_{\gamma}^{\text{NN}}-h\right)\right|^{2}\ \mathrm{d}x\mathrm{d}t.\end{split} \tag{3.6}\]
It is the least square loss of the following system:
\[\begin{cases}\mathcal{L}f=0,\\ -\partial_{x}^{2}\phi=\rho-h.\end{cases} \tag{3.7}\]
In the small-\(\varepsilon\) regime, our derivation shows that the PINN loss is dominated by its leading-order terms. We are actually solving the equation \(\mathcal{L}f=0\), which gives \(f=\rho\mathcal{M}\). Clearly, this fails to produce the desired high-field limit equation (2.25). Therefore, training neural networks with the vanilla PINN loss may lead to an inaccurate solution when \(\varepsilon\) is small. This will be further demonstrated by the numerical experiments in Section 4.
### The micro-macro decomposition based APNN method
We now present an APNN method based on the micro-macro decomposition for the VPFP system. The main idea is to utilize PINN to solve the micro-macro system (2.22) rather than the original system (2.1a)-(2.1b). DNNs are employed to parameterize functions \(\rho\), \(g\), \(\phi\). Three independent neural networks are adopted for three functions. The set of parameters and the parameterized functions are separately denoted
by \(\theta\) and \(\rho^{\rm NN}_{\theta},g^{\rm NN}_{\theta}\), \(\phi^{\rm NN}_{\theta}\). In detail, \(\rho(t,x)\) is parameterized as:
\[\rho^{\rm NN}_{\theta}(t,x):=\log\Big{(}1+\exp\Big{(}\tilde{\rho}^{\rm NN}_{ \theta}(t,x)\Big{)}\Big{)}\approx\rho(t,x), \tag{3.8}\]
which conserves the non-negative property of density \(\rho\). And \(\phi(t,x)\) is parameterized as:
\[\phi^{\rm NN}_{\theta}(t,x)\approx\phi(t,x). \tag{3.9}\]
Moreover, \(g(t,x,v)\) is parameterized as:
\[g^{\rm NN}_{\theta}(t,x,v)=\tilde{g}^{\rm NN}_{\theta}(t,x,v)-\frac{1}{| \Omega|}\int_{\Omega}\tilde{g}^{\rm NN}_{\theta}(t,x,v)dv\approx g(t,x,v). \tag{3.10}\]
Here \(\tilde{\rho}^{\rm NN}_{\theta}\), \(\tilde{g}^{\rm NN}_{\theta}\), and \(\phi^{\rm NN}_{\theta}\) are all fully-connected neural networks.
The design of \(g^{\rm NN}_{\theta}\) in (3.10) automatically guarantees the conservation condition in the micro-macro model (2.22):
\[\left\langle g^{\rm NN}_{\theta}\right\rangle=0,\quad\forall t,x. \tag{3.11}\]
As demonstrated in [17], this representation of \(g^{\rm NN}_{\theta}\) improves the accuracy of the predicted results compared to incorporating the conservation mechanism as a soft constraint in the loss. To verify the conservation property of \(g^{\rm NN}_{\theta}\), first, \(\left\langle\tilde{g}^{\rm NN}_{\theta}\right\rangle:=\int_{\mathbb{R}}\tilde{g}^{\rm NN}_{\theta}\,\mathrm{d}v\) is approximated by \(\int_{\Omega}\tilde{g}^{\rm NN}_{\theta}\,\mathrm{d}v\) on the bounded domain. Then one has:
\[\begin{split} g^{\rm NN}_{\theta}(t,x,v)&=\tilde {g}^{\rm NN}_{\theta}(t,x,v)-\frac{1}{|\Omega|}\int_{\Omega}\tilde{g}^{\rm NN }_{\theta}(t,x,v)dv\\ &\approx\tilde{g}^{\rm NN}_{\theta}(t,x,v)-\frac{1}{|\Omega|} \langle\tilde{g}^{\rm NN}_{\theta}(t,x,v)\rangle.\end{split} \tag{3.12}\]
Finally, the following can be obtained:
\[\left\langle g^{\rm NN}_{\theta}\right\rangle\approx\int_{\Omega}g^{\rm NN}_ {\theta}dv=\int_{\Omega}\tilde{g}^{\rm NN}_{\theta}dv-\frac{1}{|\Omega|}(\int _{\Omega}\tilde{g}^{\rm NN}_{\theta}dv)(\int_{\Omega}dv)=0 \tag{3.13}\]
which confirms the conservation property.
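The hard constraint (3.10) is straightforward to realize in code by subtracting the quadrature average over the velocity domain. The following PyTorch sketch is our illustration (the helper name and the network interface are assumptions, not the authors' code):

```python
# Hard-constraint representation (3.10): g^NN = g~ - (1/|Omega|) * int_Omega g~ dv.
import torch

def constrained_g(g_tilde, tx, v, v_nodes, v_weights):
    """g_tilde maps (batch, 3) points (t, x, v) to (batch, 1) values.
    v_nodes, v_weights: quadrature rule on Omega (weights sum to |Omega|)."""
    omega = v_weights.sum()
    g_val = g_tilde(torch.cat([tx, v], dim=1))            # g~(t, x, v)
    B, Q = tx.shape[0], v_nodes.shape[0]
    tx_rep = tx.repeat_interleave(Q, dim=0)               # (B*Q, 2)
    v_rep = v_nodes.reshape(1, Q, 1).expand(B, Q, 1).reshape(B * Q, 1)
    g_quad = g_tilde(torch.cat([tx_rep, v_rep], dim=1)).reshape(B, Q)
    avg = (g_quad * v_weights.reshape(1, Q)).sum(1, keepdim=True) / omega
    return g_val - avg                                    # <g^NN> = 0 in quadrature
```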
To construct the loss function for the APNN method, we take the least squares of the residuals from the micro-macro system (2.22):
\[\begin{split}\mathcal{R}^{\varepsilon}_{\rm APNN}&=\frac{\lambda_{1}^{\rm macro}}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|\partial_{t}\rho^{\rm NN}_{\theta}+\partial_{x}\langle v(\rho^{\rm NN}_{\theta}\mathcal{M}^{\rm NN}_{\theta})\rangle+\varepsilon\partial_{x}\langle vg^{\rm NN}_{\theta}\rangle\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\lambda_{1}^{\rm micro}}{|\mathcal{T}\times\mathcal{D}\times\Omega|}\int_{\mathcal{T}}\int_{\mathcal{D}}\int_{\Omega}\Big|\varepsilon\partial_{t}g^{\rm NN}_{\theta}+\varepsilon(v\partial_{x}g^{\rm NN}_{\theta}-\partial_{x}\langle vg^{\rm NN}_{\theta}\rangle\mathcal{M}^{\rm NN}_{\theta})\\ &\quad-[\mathcal{L}g^{\rm NN}_{\theta}-v\partial_{x}(\rho^{\rm NN}_{\theta}\mathcal{M}^{\rm NN}_{\theta})+\partial_{x}\langle v(\rho^{\rm NN}_{\theta}\mathcal{M}^{\rm NN}_{\theta})\rangle\mathcal{M}^{\rm NN}_{\theta}-\rho^{\rm NN}_{\theta}(\partial_{t}\mathcal{M}^{\rm NN}_{\theta})]\Big|^{2}\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\lambda_{1}^{\rm Poisson}}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|-\partial_{x}^{2}\phi^{\rm NN}_{\theta}-(\rho^{\rm NN}_{\theta}-h)\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\lambda_{2}^{f}}{|\mathcal{T}\times\partial\mathcal{D}\times\Omega|}\int_{\mathcal{T}}\int_{\partial\mathcal{D}}\int_{\Omega}\left|\mathcal{B}(\rho^{\rm NN}_{\theta}\mathcal{M}^{\rm NN}_{\theta}+\varepsilon g^{\rm NN}_{\theta})-F_{\rm B}\right|^{2}\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\lambda_{2}^{\phi}}{|\mathcal{T}\times\partial\mathcal{D}|}\int_{\mathcal{T}}\int_{\partial\mathcal{D}}\left|\mathcal{B}\phi^{\rm NN}_{\theta}-\Phi_{\rm B}\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\lambda_{3}^{f}}{|\mathcal{D}\times\Omega|}\int_{\mathcal{D}}\int_{\Omega}\left|\mathcal{I}(\rho^{\rm NN}_{\theta}\mathcal{M}^{\rm NN}_{\theta}+\varepsilon g^{\rm NN}_{\theta})-f_{0}\right|^{2}\mathrm{d}v\,\mathrm{d}x\\ &+\frac{\lambda_{3}^{\phi}}{|\mathcal{D}|}\int_{\mathcal{D}}\left|\mathcal{I}\phi^{\rm NN}_{\theta}-\phi_{0}\right|^{2}\mathrm{d}x,\end{split} \tag{3.14}\]
where \(\mathcal{M}_{\theta}^{\text{NN}}\) can be calculated from \(\phi_{\theta}^{\text{NN}}\) via (2.13). The loss is formally denoted as:
\[\mathcal{R}^{\varepsilon}_{\text{APNN}}:=\lambda_{1}\mathcal{R}^{\varepsilon}_{\text{residual}}+\lambda_{2}\mathcal{R}^{\varepsilon}_{\text{bc}}+\lambda_{3}\mathcal{R}^{\varepsilon}_{\text{ic}}. \tag{3.15}\]
Here, we consider periodic BCs. As noted at the end of Section 2.3, the ICs and periodic BCs of \(\rho\) and \(g\) can be directly deduced from the IC and periodic BC imposed on \(f\). Hence the initial and boundary conditions for \(f\) in the bounded-domain VPFP system (3.1) are reformulated as:
\[\begin{cases}\mathcal{B}\rho=P_{\text{B}},&(t,x)\in\mathcal{T}\times\partial\mathcal{D},\\ \mathcal{B}g=G_{\text{B}},&(t,x,v)\in\mathcal{T}\times\partial\mathcal{D}\times\Omega,\\ \mathcal{B}\phi=\Phi_{\text{B}},&(t,x)\in\mathcal{T}\times\partial\mathcal{D},\\ \mathcal{I}\rho=\rho_{0},&(t,x)\in\{t=0\}\times\mathcal{D},\\ \mathcal{I}g=g_{0},&(t,x,v)\in\{t=0\}\times\mathcal{D}\times\Omega,\\ \mathcal{I}\phi=\phi_{0},&(t,x)\in\{t=0\}\times\mathcal{D},\end{cases} \tag{3.16}\]
where \(P_{\text{B}},G_{\text{B}},\Phi_{\text{B}},\rho_{0},g_{0},\phi_{0}\) can be obtained from the corresponding settings of \(f\). In this case, the loss (3.14) is rewritten as:
\[\begin{split}\mathcal{R}^{\varepsilon}_{\text{APNN}}&=\lambda_{1}\mathcal{R}^{\varepsilon}_{\text{residual}}+\frac{\lambda_{2}^{\rho}}{|\mathcal{T}\times\partial\mathcal{D}|}\int_{\mathcal{T}}\int_{\partial\mathcal{D}}\left|\mathcal{B}\rho_{\theta}^{\text{NN}}-P_{\text{B}}\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\lambda_{2}^{g}}{|\mathcal{T}\times\partial\mathcal{D}\times\Omega|}\int_{\mathcal{T}}\int_{\partial\mathcal{D}}\int_{\Omega}\left|\mathcal{B}g_{\theta}^{\text{NN}}-G_{\text{B}}\right|^{2}\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t+\frac{\lambda_{2}^{\phi}}{|\mathcal{T}\times\partial\mathcal{D}|}\int_{\mathcal{T}}\int_{\partial\mathcal{D}}\left|\mathcal{B}\phi_{\theta}^{\text{NN}}-\Phi_{\text{B}}\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\lambda_{3}^{\rho}}{|\mathcal{D}|}\int_{\mathcal{D}}\left|\mathcal{I}\rho_{\theta}^{\text{NN}}-\rho_{0}\right|^{2}\mathrm{d}x+\frac{\lambda_{3}^{g}}{|\mathcal{D}\times\Omega|}\int_{\mathcal{D}}\int_{\Omega}\left|\mathcal{I}g_{\theta}^{\text{NN}}-g_{0}\right|^{2}\mathrm{d}v\,\mathrm{d}x\\ &+\frac{\lambda_{3}^{\phi}}{|\mathcal{D}|}\int_{\mathcal{D}}\left|\mathcal{I}\phi_{\theta}^{\text{NN}}-\phi_{0}\right|^{2}\mathrm{d}x.\end{split} \tag{3.17}\]
We now formally show the AP property of the loss function (3.14). Considering small \(\varepsilon\), we only need to focus on the residual term:
\[\begin{split}\mathcal{R}^{\varepsilon}_{\text{residual}}&=\frac{1}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|\partial_{t}\rho_{\theta}^{\text{NN}}+\partial_{x}\langle v(\rho_{\theta}^{\text{NN}}\mathcal{M}_{\theta}^{\text{NN}})\rangle+\varepsilon\partial_{x}\langle vg_{\theta}^{\text{NN}}\rangle\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{1}{|\mathcal{T}\times\mathcal{D}\times\Omega|}\int_{\mathcal{T}}\int_{\mathcal{D}}\int_{\Omega}\Big|\varepsilon\partial_{t}g_{\theta}^{\text{NN}}+\varepsilon(v\partial_{x}g_{\theta}^{\text{NN}}-\partial_{x}\langle vg_{\theta}^{\text{NN}}\rangle\mathcal{M}_{\theta}^{\text{NN}})\\ &\quad-[\mathcal{L}g_{\theta}^{\text{NN}}-v\partial_{x}(\rho_{\theta}^{\text{NN}}\mathcal{M}_{\theta}^{\text{NN}})+\partial_{x}\langle v(\rho_{\theta}^{\text{NN}}\mathcal{M}_{\theta}^{\text{NN}})\rangle\mathcal{M}_{\theta}^{\text{NN}}-\rho_{\theta}^{\text{NN}}(\partial_{t}\mathcal{M}_{\theta}^{\text{NN}})]\Big|^{2}\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\\ &+\frac{1}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|-\partial_{x}^{2}\phi_{\theta}^{\text{NN}}-(\rho_{\theta}^{\text{NN}}-h)\right|^{2}\mathrm{d}x\,\mathrm{d}t.\end{split} \tag{3.18}\]
Taking \(\varepsilon\to 0\), it formally becomes:
\[\begin{split}\mathcal{R}^{\varepsilon}_{\text{residual}}&=\frac{1}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|\partial_{t}\rho_{\theta}^{\text{NN}}+\partial_{x}\langle v(\rho_{\theta}^{\text{NN}}\mathcal{M}_{\theta}^{\text{NN}})\rangle\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{1}{|\mathcal{T}\times\mathcal{D}\times\Omega|}\int_{\mathcal{T}}\int_{\mathcal{D}}\int_{\Omega}\left|-[\mathcal{L}g_{\theta}^{\text{NN}}-v\partial_{x}(\rho_{\theta}^{\text{NN}}\mathcal{M}_{\theta}^{\text{NN}})+\partial_{x}\langle v(\rho_{\theta}^{\text{NN}}\mathcal{M}_{\theta}^{\text{NN}})\rangle\mathcal{M}_{\theta}^{\text{NN}}-\rho_{\theta}^{\text{NN}}(\partial_{t}\mathcal{M}_{\theta}^{\text{NN}})]\right|^{2}\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\\ &+\frac{1}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|-\partial_{x}^{2}\phi_{\theta}^{\text{NN}}-(\rho_{\theta}^{\text{NN}}-h)\right|^{2}\mathrm{d}x\,\mathrm{d}t.\end{split} \tag{3.19}\]
This equation coincides with the least square loss of the system:
\[\begin{cases}\mathcal{L}g-[v\partial_{x}(\rho\mathcal{M})-\partial_{x}\langle v(\rho\mathcal{M})\rangle\mathcal{M}]-\rho(\partial_{t}\mathcal{M})=0,\\ \partial_{t}\rho+\partial_{x}\langle v(\rho\mathcal{M})\rangle=0,\\ -\partial_{x}^{2}\phi=\rho-h.\end{cases} \tag{3.20}\]
Similarly to Section 2.3, the second equation in (3.20) produces \(\partial_{t}\rho-\partial_{x}\left(\rho\partial_{x}\phi\right)=0\). Combining it with \(-\partial_{x}^{2}\phi=\rho-h\), the high-field limit system (2.25) is recovered. This demonstrates the AP property of the proposed DNN method based on the micro-macro decomposition. Therefore, this approach enables accurate predictions in both the kinetic and the high-field regimes.
### The mass conservation based APNN method
In this section, we propose another APNN method. It is built upon an equivalent model of the VPFP system that incorporates the inherent mass conservation law into the original VPFP system. This new APNN method does not depend on the explicit expression of the local Maxwellian, nor does it require the initial data to be a local Maxwellian, thereby significantly broadening its potential applications.
#### 3.3.1 Enforcing the mass conservation law
A distinguishing feature of this model is that it enforces the local mass conservation law in the loss function. We start from the original VPFP system (2.11):
\[\partial_{t}f+v\cdot\nabla_{x}f =\frac{1}{\varepsilon}\nabla_{v}\cdot[(v+\nabla_{x}\phi)f+\nabla _{v}f]=:\frac{1}{\varepsilon}\mathcal{L}f,\] \[-\Delta_{x}\phi =\rho-h(x).\]
Due to the mass conservation property of the Fokker-Planck-type operator \(\mathcal{L}\), one can integrate the first equation over \(v\) to obtain the mass conservation law:
\[\partial_{t}\rho+\nabla_{x}\cdot\langle vf\rangle=0. \tag{3.21}\]
Our new model is the Vlasov equation (2.1a) together with an extra condition: the mass conservation law
\[\begin{cases}\partial_{t}f+v\cdot\nabla_{x}f=\frac{1}{\varepsilon}\mathcal{L }f,\\ \partial_{t}\rho+\nabla_{x}\cdot\langle vf\rangle=0,\\ -\Delta_{x}\phi=\rho-h,\\ \rho=\langle f\rangle.\end{cases} \tag{3.22}\]
This system is equivalent to the original system (2.1a)-(2.1b), since equation (3.21) is just the velocity integration of the Vlasov equation (2.11): it is automatically satisfied in the continuous model, but not necessarily by DNN approximations. In the new model, the macroscopic physical quantity of interest \(\rho\) is naturally incorporated through the conservation law without any subsidiary conditions, and we will see that the high-field limit equation naturally follows from the new model as \(\varepsilon\to 0\).
The formal derivation of the limit equation starts from the new model (3.22). Multiplying the first equation of the model (3.22) by \(\varepsilon\) and letting \(\varepsilon\to 0\) immediately gives \(\mathcal{L}f=0\). Since \(f\in\mathcal{N}(\mathcal{L})=\{f=\rho\mathcal{M},\ \text{where}\ \rho:=\langle f\rangle\}\), it follows that \(f=\rho\mathcal{M}\). Substituting this form of \(f\) into the second equation of the model (3.22) and repeating the derivations in Section 2.3, the limit system reduces to:
\[\left\{\begin{array}{l}\partial_{t}\rho-\nabla_{x}\cdot(\rho\nabla_{x}\phi)=0, \\ -\triangle_{x}\phi=\rho-h(x).\end{array}\right. \tag{3.23}\]
Furthermore, one takes care of the corresponding initial-boundary value problem for the new model (3.22). The initial and boundary conditions of \(f\) directly extend to \(\rho\), since \(\rho\) is the direct integration of \(f\) over the velocity space.
#### 3.3.2 A new loss function with AP property
In this part, a new APNN method is proposed to solve the VPFP system. The key approach is to incorporate the mass conservation based model (3.22), which possesses the AP property, into the loss of the vanilla PINNs. The starting initial-boundary value problem over a bounded domain is:
\[\begin{cases}\partial_{t}f+v\cdot\nabla_{x}f=\frac{1}{\varepsilon}\mathcal{L}f,&(t,x,v)\in\mathcal{T}\times\mathcal{D}\times\Omega,\\ \partial_{t}\rho+\nabla_{x}\cdot\langle vf\rangle=0,&(t,x)\in\mathcal{T}\times\mathcal{D},\\ -\Delta_{x}\phi(t,x)=\rho(t,x)-h(x),&(t,x)\in\mathcal{T}\times\mathcal{D},\\ \mathcal{B}f=F_{\text{B}},&(t,x,v)\in\mathcal{T}\times\partial\mathcal{D}\times\Omega,\\ \mathcal{B}\rho=P_{\text{B}},&(t,x)\in\mathcal{T}\times\partial\mathcal{D},\\ \mathcal{B}\phi=\Phi_{\text{B}},&(t,x)\in\mathcal{T}\times\partial\mathcal{D},\\ \mathcal{I}f=f_{0},&(t,x,v)\in\{t=0\}\times\mathcal{D}\times\Omega,\\ \mathcal{I}\rho=\rho_{0},&(t,x)\in\{t=0\}\times\mathcal{D},\\ \mathcal{I}\phi=\phi_{0},&(t,x)\in\{t=0\}\times\mathcal{D},\end{cases} \tag{3.24}\]
where \(F_{\text{B}},f_{0},P_{\text{B}},\rho_{0},\Phi_{\text{B}},\phi_{0}\) are given functions.
First, we parametrize the three functions \(\rho\), \(f\) and \(\phi\) using DNNs, denoted by \(\eta\) for brevity. More precisely, we set:
\[\begin{split}\rho_{\eta}^{\text{NN}}(t,x)&:=\log\left(1+\exp\left(\tilde{\rho}_{\eta}^{\text{NN}}(t,x)\right)\right)\approx\rho(t,x),\\ f_{\eta}^{\text{NN}}(t,x,v)&:=\log\left(1+\exp\left(\tilde{f}_{\eta}^{\text{NN}}(t,x,v)\right)\right)\approx f(t,x,v),\end{split} \tag{3.25}\]
which preserve the non-negativity of \(\rho\) and \(f\). The function \(\phi\) is parameterized as:
\[\phi_{\eta}^{\text{NN}}(t,x)\approx\phi(t,x). \tag{3.26}\]
Here \(\tilde{\rho}_{\eta}^{\text{NN}}\), \(\tilde{f}_{\eta}^{\text{NN}}\), and \(\phi_{\eta}^{\text{NN}}\) are all fully-connected neural networks. Next, we introduce the least squares of the
residual of the mass conservation based model (3.22) as a new APNN loss:
\[\begin{split}\mathcal{R}^{\varepsilon}_{\rm APNN}&=\frac{\kappa_{1}^{\rm macro}}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|\partial_{t}\rho_{\eta}^{\rm NN}+\partial_{x}\langle vf_{\eta}^{\rm NN}\rangle\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\kappa_{1}^{\rm kinetic}}{|\mathcal{T}\times\mathcal{D}\times\Omega|}\int_{\mathcal{T}}\int_{\mathcal{D}}\int_{\Omega}\left|\varepsilon\partial_{t}f_{\eta}^{\rm NN}+\varepsilon v\partial_{x}f_{\eta}^{\rm NN}-\mathcal{L}f_{\eta}^{\rm NN}\right|^{2}\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\kappa_{1}^{\rm Poisson}}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|-\partial_{x}^{2}\phi_{\eta}^{\rm NN}-\left(\rho_{\eta}^{\rm NN}-h\right)\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\kappa_{2}^{\rho}}{|\mathcal{T}\times\partial\mathcal{D}|}\int_{\mathcal{T}}\int_{\partial\mathcal{D}}\left|\mathcal{B}\rho_{\eta}^{\rm NN}-P_{\rm B}\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\kappa_{2}^{f}}{|\mathcal{T}\times\partial\mathcal{D}\times\Omega|}\int_{\mathcal{T}}\int_{\partial\mathcal{D}}\int_{\Omega}\left|\mathcal{B}f_{\eta}^{\rm NN}-F_{\rm B}\right|^{2}\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\kappa_{2}^{\phi}}{|\mathcal{T}\times\partial\mathcal{D}|}\int_{\mathcal{T}}\int_{\partial\mathcal{D}}\left|\mathcal{B}\phi_{\eta}^{\rm NN}-\Phi_{\rm B}\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{\kappa_{3}^{\rho}}{|\mathcal{D}|}\int_{\mathcal{D}}\left|\mathcal{I}\rho_{\eta}^{\rm NN}-\rho_{0}\right|^{2}\mathrm{d}x+\frac{\kappa_{3}^{f}}{|\mathcal{D}\times\Omega|}\int_{\mathcal{D}}\int_{\Omega}\left|\mathcal{I}f_{\eta}^{\rm NN}-f_{0}\right|^{2}\mathrm{d}v\,\mathrm{d}x\\ &+\frac{\kappa_{3}^{\phi}}{|\mathcal{D}|}\int_{\mathcal{D}}\left|\mathcal{I}\phi_{\eta}^{\rm NN}-\phi_{0}\right|^{2}\mathrm{d}x\\ &+\frac{\kappa_{4}}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|\langle f_{\eta}^{\rm NN}\rangle-\rho_{\eta}^{\rm NN}\right|^{2}\mathrm{d}x\,\mathrm{d}t.\end{split} \tag{3.27}\]
For ease of description, the loss \(\mathcal{R}^{\varepsilon}_{\rm APNN}\) is abbreviated as:
\[\mathcal{R}^{\varepsilon}_{\rm APNN}:=\kappa_{1}\mathcal{R}^{\varepsilon}_{\rm residual}+\kappa_{2}\mathcal{R}^{\varepsilon}_{\rm bc}+\kappa_{3}\mathcal{R}^{\varepsilon}_{\rm ic}. \tag{3.28}\]
Finally, we show the AP property of the loss (3.27), focusing on the residual term in the loss:
\[\begin{split}\mathcal{R}^{\varepsilon}_{\rm residual}&=\frac{1}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|\partial_{t}\rho_{\eta}^{\rm NN}+\partial_{x}\langle vf_{\eta}^{\rm NN}\rangle\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{1}{|\mathcal{T}\times\mathcal{D}\times\Omega|}\int_{\mathcal{T}}\int_{\mathcal{D}}\int_{\Omega}\left|\varepsilon\partial_{t}f_{\eta}^{\rm NN}+\varepsilon v\partial_{x}f_{\eta}^{\rm NN}-\mathcal{L}f_{\eta}^{\rm NN}\right|^{2}\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\\ &+\frac{1}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|-\partial_{x}^{2}\phi_{\eta}^{\rm NN}-\left(\rho_{\eta}^{\rm NN}-h\right)\right|^{2}\mathrm{d}x\,\mathrm{d}t.\end{split} \tag{3.29}\]
Taking \(\varepsilon\to 0\), formally it leads to:
\[\begin{split}\mathcal{R}^{\varepsilon}_{\rm residual}&=\frac{1}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|\partial_{t}\rho_{\eta}^{\rm NN}+\partial_{x}\langle vf_{\eta}^{\rm NN}\rangle\right|^{2}\mathrm{d}x\,\mathrm{d}t\\ &+\frac{1}{|\mathcal{T}\times\mathcal{D}\times\Omega|}\int_{\mathcal{T}}\int_{\mathcal{D}}\int_{\Omega}\left|\mathcal{L}f_{\eta}^{\rm NN}\right|^{2}\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\\ &+\frac{1}{|\mathcal{T}\times\mathcal{D}|}\int_{\mathcal{T}}\int_{\mathcal{D}}\left|-\partial_{x}^{2}\phi_{\eta}^{\rm NN}-\left(\rho_{\eta}^{\rm NN}-h\right)\right|^{2}\mathrm{d}x\,\mathrm{d}t.\end{split} \tag{3.30}\]
This is the least square residual of the system:
\[\begin{cases}\mathcal{L}f=0,\\ \partial_{t}\rho+\partial_{x}\langle vf\rangle=0,\\ -\partial_{x}^{2}\phi=\rho-h.\end{cases} \tag{3.31}\]
The first equation \(\mathcal{L}f=0\) implies \(f\in\mathcal{N}(\mathcal{L})\), which gives \(f=\rho\mathcal{M}\). Plugging \(f=\rho\mathcal{M}\) into the second equation and combining with the Poisson equation, one arrives at the limit system (2.25). Therefore, the AP property of the new APNN loss is established.
Broadly, the mass conservation based APNN method has several advantageous characteristics. First, the macroscopic quantity of interest is judiciously incorporated via the mass conservation law, free from restrictive assumptions. Second, the explicit Maxwellian form is unnecessary, which promises wider applicability, since not all kinetic-type equations possess explicit Maxwellians. Moreover, our new method is applicable to both equilibrium and non-equilibrium initial data.
**Remark 3.1**.: _For the mass conservation based APNN method, the mass conservation condition \(\rho=\langle f\rangle\) is imposed on the loss as a soft constraint. This is different from the micro-macro decomposition based APNN method, in which the equivalent conservation condition \(\langle g\rangle=0\) is built into the DNNs as a hard constraint. As reported in [20], the constraint type used for the conservation mechanism significantly affects the training accuracy of the DNNs. We will examine these two treatments of the conservation conditions in the numerical experiment section._
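To make the distinction in Remark 3.1 concrete, here is a small sketch (ours; the helper name and tensor layout are assumptions) of the soft-constraint penalty, i.e., the \(\kappa_{4}\) term of (3.27); the hard-constraint counterpart was sketched after (3.13):

```python
# Soft constraint of the mass conservation method: the discrepancy between
# rho^NN and the quadrature approximation of <f^NN> enters the loss.
import torch

def conservation_penalty(f_vals, rho_vals, v_weights):
    """f_vals: (batch, Q) values of f^NN at the Q velocity quadrature nodes,
    rho_vals: (batch,) values of rho^NN at the same (t, x) points."""
    moment = (f_vals * v_weights).sum(dim=1)  # quadrature for <f^NN>(t, x)
    return ((moment - rho_vals) ** 2).mean()
```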
### Empirical loss functions
Since the loss functions for APNNs and PINNs are defined by integrals, in practice one needs to define an empirical loss as a Monte Carlo approximation of the integral loss. This involves randomly selecting small sets of sample points (known as batches) to estimate the high-dimensional integrals. For the velocity-dependent operators \(\langle\cdot\rangle\) and \(\Pi(\cdot)\), the Gauss-Legendre quadrature provides additional accuracy. In detail, let \(\left\{(w_{i},v_{i}^{\prime})\right\}_{i=1}^{n}\) be the quadrature weights and nodes, where \(\left\{v_{i}^{\prime}\right\}_{i=1}^{n}\) are the roots of the Legendre polynomial of degree \(n\) and \(\left\{w_{i}\right\}_{i=1}^{n}\) are the corresponding weights. Then an integral such as \(\int_{-1}^{1}f(v)\mathrm{d}v\) is approximated by the weighted sum \(\sum_{i=1}^{n}w_{i}f\left(v_{i}^{\prime}\right)\).
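For concreteness, a minimal NumPy sketch of this quadrature rule (our illustration) is:

```python
# Gauss-Legendre approximation of int_{-1}^{1} f(v) dv by sum_i w_i f(v'_i).
import numpy as np

n = 16
nodes, weights = np.polynomial.legendre.leggauss(n)  # roots v'_i and weights w_i

f = lambda v: np.exp(-v**2 / 2.0) / np.sqrt(2.0 * np.pi)  # an example integrand
print(np.sum(weights * f(nodes)))  # ~0.6827, the exact integral over [-1, 1]
```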
We start with the formulation of the empirical loss for the vanilla PINN loss (3.3), defined as:
\[\begin{split}\mathcal{R}^{\varepsilon}_{\mathrm{PINN}}&=\frac{\mu_{1}^{\mathrm{Vlasov}}}{L_{1}}\sum_{l=1}^{L_{1}}\left|\varepsilon\partial_{t}f_{\gamma}^{\mathrm{NN}}(t_{l}^{r},x_{l}^{r},v_{l}^{r})+\varepsilon v_{l}^{r}\partial_{x}f_{\gamma}^{\mathrm{NN}}(t_{l}^{r},x_{l}^{r},v_{l}^{r})-\mathcal{L}f_{\gamma}^{\mathrm{NN}}(t_{l}^{r},x_{l}^{r},v_{l}^{r})\right|^{2}\\ &+\frac{\mu_{1}^{\mathrm{Poisson}}}{L_{1}}\sum_{l=1}^{L_{1}}\left|-\partial_{x}^{2}\phi_{\gamma}^{\mathrm{NN}}(t_{l}^{r},x_{l}^{r})-(\rho_{\gamma}^{\mathrm{NN}}(t_{l}^{r},x_{l}^{r})-h(x_{l}^{r}))\right|^{2}\\ &+\frac{\mu_{2}^{f}}{L_{2}}\sum_{l=1}^{L_{2}}\left|\mathcal{B}f_{\gamma}^{\mathrm{NN}}(t_{l}^{b},x_{l}^{b},v_{l}^{b})-F_{\mathrm{B}}(t_{l}^{b},x_{l}^{b},v_{l}^{b})\right|^{2}+\frac{\mu_{2}^{\phi}}{L_{2}}\sum_{l=1}^{L_{2}}\left|\mathcal{B}\phi_{\gamma}^{\mathrm{NN}}(t_{l}^{b},x_{l}^{b})-\Phi_{\mathrm{B}}(t_{l}^{b},x_{l}^{b})\right|^{2}\\ &+\frac{\mu_{3}^{f}}{L_{3}}\sum_{l=1}^{L_{3}}\left|\mathcal{I}f_{\gamma}^{\mathrm{NN}}(t_{l}^{k},x_{l}^{k},v_{l}^{k})-f_{0}(t_{l}^{k},x_{l}^{k},v_{l}^{k})\right|^{2}+\frac{\mu_{3}^{\phi}}{L_{3}}\sum_{l=1}^{L_{3}}\left|\mathcal{I}\phi_{\gamma}^{\mathrm{NN}}(t_{l}^{k},x_{l}^{k})-\phi_{0}(t_{l}^{k},x_{l}^{k})\right|^{2}\end{split} \tag{3.32}\]
where \(\left\{(t_{l}^{r},x_{l}^{r},v_{l}^{r})\right\}_{l=1}^{L_{1}}\in\mathcal{T}\times\mathcal{D}\times\Omega\) is a batch of sample data for the residual, \(\left\{(t_{l}^{b},x_{l}^{b},v_{l}^{b})\right\}_{l=1}^{L_{2}}\in\mathcal{T}\times\partial\mathcal{D}\times\Omega\) is a batch of sample data for the boundary conditions, and \(\left\{(t_{l}^{k},x_{l}^{k},v_{l}^{k})\right\}_{l=1}^{L_{3}}\in\{0\}\times\mathcal{D}\times\Omega\) is a batch of sample data for the initial conditions.
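The three batches can be drawn as in the following sketch (ours; the domain bounds \(T\), \(x_{l}\), \(x_{r}\), and \(v_{\max}\) are placeholders, not values used in the paper):

```python
# Uniform sampling of the residual, boundary, and initial batches of (3.32).
import torch

def sample_batches(L1, L2, L3, T=0.1, xl=0.0, xr=1.0, vmax=8.0):
    def uniform(n, lo, hi):
        return lo + (hi - lo) * torch.rand(n)

    # residual points in T x D x Omega
    residual = torch.stack([uniform(L1, 0.0, T),
                            uniform(L1, xl, xr),
                            uniform(L1, -vmax, vmax)], dim=1)
    # boundary points: x pinned to either end of D
    side = torch.where(torch.rand(L2) < 0.5,
                       torch.full((L2,), xl), torch.full((L2,), xr))
    boundary = torch.stack([uniform(L2, 0.0, T), side,
                            uniform(L2, -vmax, vmax)], dim=1)
    # initial points: t = 0
    initial = torch.stack([torch.zeros(L3),
                           uniform(L3, xl, xr),
                           uniform(L3, -vmax, vmax)], dim=1)
    return residual, boundary, initial

res, bc, ic = sample_batches(1000, 200, 200)
```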
Similarly, the empirical loss function for the micro-macro decomposition based APNN method (3.14) is defined as:
\[\begin{split}\mathcal{R}^{e}_{\text{APNN}}=&\frac{\lambda_{1}^{\text{macro}}}{N_{1}}\sum_{i=1}^{N_{1}}\left|\partial_{t}\rho_{\theta}^{\text{NN}}(t_{i}^{r},x_{i}^{r})+\partial_{x}\langle v\rho_{\theta}^{\text{NN}}\mathcal{M}_{\theta}^{\text{NN}}\rangle(t_{i}^{r},x_{i}^{r})+\varepsilon\partial_{x}\langle vg_{\theta}^{\text{NN}}\rangle(t_{i}^{r},x_{i}^{r})\right|^{2}\\ &+\frac{\lambda_{1}^{\text{micro}}}{N_{1}}\sum_{i=1}^{N_{1}}\Big|\varepsilon\partial_{t}g_{\theta}^{\text{NN}}(t_{i}^{r},x_{i}^{r},v_{i}^{r})+\varepsilon\big(v_{i}^{r}\partial_{x}g_{\theta}^{\text{NN}}(t_{i}^{r},x_{i}^{r},v_{i}^{r})-\partial_{x}\langle vg_{\theta}^{\text{NN}}\rangle(t_{i}^{r},x_{i}^{r})\,\mathcal{M}_{\theta}^{\text{NN}}(t_{i}^{r},x_{i}^{r},v_{i}^{r})\big)\\ &\quad-\big[\mathcal{L}g_{\theta}^{\text{NN}}(t_{i}^{r},x_{i}^{r},v_{i}^{r})-v_{i}^{r}\partial_{x}\big(\rho_{\theta}^{\text{NN}}(t_{i}^{r},x_{i}^{r})\mathcal{M}_{\theta}^{\text{NN}}(t_{i}^{r},x_{i}^{r},v_{i}^{r})\big)\\ &\quad+\partial_{x}\langle v\rho_{\theta}^{\text{NN}}\mathcal{M}_{\theta}^{\text{NN}}\rangle(t_{i}^{r},x_{i}^{r})\,\mathcal{M}_{\theta}^{\text{NN}}(t_{i}^{r},x_{i}^{r},v_{i}^{r})-\rho_{\theta}^{\text{NN}}(t_{i}^{r},x_{i}^{r})\,\partial_{t}\mathcal{M}_{\theta}^{\text{NN}}(t_{i}^{r},x_{i}^{r},v_{i}^{r})\big]\Big|^{2}\\ &+\frac{\lambda_{1}^{\text{Poisson}}}{N_{1}}\sum_{i=1}^{N_{1}}\left|-\partial_{x}^{2}\phi_{\theta}^{\text{NN}}(t_{i}^{r},x_{i}^{r})-(\rho_{\theta}^{\text{NN}}(t_{i}^{r},x_{i}^{r})-h(x_{i}^{r}))\right|^{2}\\ &+\frac{\lambda_{2}^{f}}{N_{2}}\sum_{i=1}^{N_{2}}\left|\mathcal{B}\left(\rho_{\theta}^{\text{NN}}(t_{i}^{b},x_{i}^{b})\mathcal{M}_{\theta}^{\text{NN}}(t_{i}^{b},x_{i}^{b},v_{i}^{b})+\varepsilon g_{\theta}^{\text{NN}}(t_{i}^{b},x_{i}^{b},v_{i}^{b})\right)-F_{\text{B}}(t_{i}^{b},x_{i}^{b},v_{i}^{b})\right|^{2}\\ &+\frac{\lambda_{2}^{\phi}}{N_{2}}\sum_{i=1}^{N_{2}}\left|\mathcal{B}\phi_{\theta}^{\text{NN}}(t_{i}^{b},x_{i}^{b})-\Phi_{\text{B}}(t_{i}^{b},x_{i}^{b})\right|^{2}\\ &+\frac{\lambda_{3}^{f}}{N_{3}}\sum_{i=1}^{N_{3}}\left|\mathcal{I}\left(\rho_{\theta}^{\text{NN}}(t_{i}^{k},x_{i}^{k})\mathcal{M}_{\theta}^{\text{NN}}(t_{i}^{k},x_{i}^{k},v_{i}^{k})+\varepsilon g_{\theta}^{\text{NN}}(t_{i}^{k},x_{i}^{k},v_{i}^{k})\right)-f_{0}(t_{i}^{k},x_{i}^{k},v_{i}^{k})\right|^{2}\\ &+\frac{\lambda_{3}^{\phi}}{N_{3}}\sum_{i=1}^{N_{3}}\left|\mathcal{I}\phi_{\theta}^{\text{NN}}(t_{i}^{k},x_{i}^{k})-\phi_{0}(t_{i}^{k},x_{i}^{k})\right|^{2}\end{split} \tag{3.33}\]
where \(\{(t_{i}^{r},x_{i}^{r},v_{i}^{r})\}_{i=1}^{N_{1}}\in\mathcal{T}\times\mathcal{D}\times\Omega\) is a batch of sample data for the residual, \(\{(t_{i}^{b},x_{i}^{b},v_{i}^{b})\}_{i=1}^{N_{2}}\in\mathcal{T}\times\partial\mathcal{D}\times\Omega\) is a batch of sample data for boundary conditions, and \(\{(t_{i}^{k},x_{i}^{k},v_{i}^{k})\}_{i=1}^{N_{3}}\in\{0\}\times\mathcal{D}\times\Omega\) is a batch of sample data for initial conditions.
The empirical loss function for the mass conservation based APNN method (3.27) is analogously defined as:
\[\begin{split}\mathcal{R}^{e}_{\text{APNN}}=&\frac{\kappa_{1}^{\text{macro}}}{M_{1}}\sum_{j=1}^{M_{1}}\left|\partial_{t}\rho_{\eta}^{\text{NN}}(t_{j}^{r},x_{j}^{r})+\partial_{x}\langle vf_{\eta}^{\text{NN}}\rangle(t_{j}^{r},x_{j}^{r})\right|^{2}\\ &+\frac{\kappa_{1}^{\text{kinetic}}}{M_{1}}\sum_{j=1}^{M_{1}}\left|\varepsilon\partial_{t}f_{\eta}^{\text{NN}}(t_{j}^{r},x_{j}^{r},v_{j}^{r})+\varepsilon v_{j}^{r}\partial_{x}f_{\eta}^{\text{NN}}(t_{j}^{r},x_{j}^{r},v_{j}^{r})-\mathcal{L}f_{\eta}^{\text{NN}}(t_{j}^{r},x_{j}^{r},v_{j}^{r})\right|^{2}\\ &+\frac{\kappa_{1}^{\text{Poisson}}}{M_{1}}\sum_{j=1}^{M_{1}}\left|-\partial_{x}^{2}\phi_{\eta}^{\text{NN}}(t_{j}^{r},x_{j}^{r})-(\rho_{\eta}^{\text{NN}}(t_{j}^{r},x_{j}^{r})-h(x_{j}^{r}))\right|^{2}\\ &+\frac{\kappa_{2}^{\rho}}{M_{2}}\sum_{j=1}^{M_{2}}\left|\mathcal{B}\rho_{\eta}^{\text{NN}}(t_{j}^{b},x_{j}^{b})-P_{\text{B}}(t_{j}^{b},x_{j}^{b})\right|^{2}+\frac{\kappa_{2}^{f}}{M_{2}}\sum_{j=1}^{M_{2}}\left|\mathcal{B}f_{\eta}^{\text{NN}}(t_{j}^{b},x_{j}^{b},v_{j}^{b})-F_{\text{B}}(t_{j}^{b},x_{j}^{b},v_{j}^{b})\right|^{2}\\ &+\frac{\kappa_{2}^{\phi}}{M_{2}}\sum_{j=1}^{M_{2}}\left|\mathcal{B}\phi_{\eta}^{\text{NN}}(t_{j}^{b},x_{j}^{b})-\Phi_{\text{B}}(t_{j}^{b},x_{j}^{b})\right|^{2}\\ &+\frac{\kappa_{3}^{\rho}}{M_{3}}\sum_{j=1}^{M_{3}}\left|\mathcal{I}\rho_{\eta}^{\text{NN}}(t_{j}^{k},x_{j}^{k})-\rho_{0}(t_{j}^{k},x_{j}^{k})\right|^{2}+\frac{\kappa_{3}^{f}}{M_{3}}\sum_{j=1}^{M_{3}}\left|\mathcal{I}f_{\eta}^{\text{NN}}(t_{j}^{k},x_{j}^{k},v_{j}^{k})-f_{0}(t_{j}^{k},x_{j}^{k},v_{j}^{k})\right|^{2}\\ &+\frac{\kappa_{3}^{\phi}}{M_{3}}\sum_{j=1}^{M_{3}}\left|\mathcal{I}\phi_{\eta}^{\text{NN}}(t_{j}^{k},x_{j}^{k})-\phi_{0}(t_{j}^{k},x_{j}^{k})\right|^{2}\\ &+\frac{\kappa_{4}}{M_{4}}\sum_{j=1}^{M_{4}}\left|\left\langle f_{\eta}^{\text{NN}}\right\rangle(t_{j}^{c},x_{j}^{c})-\rho_{\eta}^{\text{NN}}(t_{j}^{c},x_{j}^{c})\right|^{2}\end{split} \tag{3.34}\]
where \(\{(t_{j}^{r},x_{j}^{r},v_{j}^{r})\}_{j=1}^{M_{1}}\in\mathcal{T}\times\mathcal{D}\times\Omega\) is a batch of sample data for the residual, \(\{(t_{j}^{b},x_{j}^{b},v_{j}^{b})\}_{j=1}^{M_{2}}\in\mathcal{T}\times\partial\mathcal{D}\times\Omega\) is a batch of sample data for boundary conditions, \(\{(t_{j}^{k},x_{j}^{k},v_{j}^{k})\}_{j=1}^{M_{3}}\in\{0\}\times\mathcal{D}\times\Omega\) is a batch of sample data for initial conditions, and \(\{(t_{j}^{c},x_{j}^{c})\}_{j=1}^{M_{4}}\in\mathcal{T}\times\mathcal{D}\) is a batch of sample data for the mass conservation.
With the empirical losses formed, one can generate the training set and apply the optimization procedure to approximate the solution of the VPFP system.
## 4 Numerical examples
In this section, extensive numerical results are presented for several problems chosen from kinetic regimes (\(\varepsilon\approx O(1)\)) to high-field regimes (\(\varepsilon\to 0\)), in order to verify the two proposed APNN methods and compare their performances.
Some settings for the numerical experiments are stated as follows. The Adam version of the gradient descent algorithm is used for optimization of the loss functions. Neural network parameters \(\theta\), \(\eta\) and \(\gamma\) are initialized by the Xavier initialization. Fully-connected networks with 5 hidden layers and tanh activation function are used. The number of Gauss-quadrature points is set to be 32 for integration calculation. All hyper-parameters have been carefully tuned for optimal performance.
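As a minimal sketch of this setup (the network widths follow the figure captions below; the learning rate is an assumption, since it is not specified in the text):

```python
import torch
import torch.nn as nn

def make_net(sizes):
    """Fully-connected tanh network with 5 hidden layers,
    e.g. sizes = [3, 128, 128, 128, 128, 128, 1]."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.Tanh())
    net = nn.Sequential(*layers)
    for m in net.modules():
        if isinstance(m, nn.Linear):
            nn.init.xavier_normal_(m.weight)  # Xavier init (normal variant assumed)
            nn.init.zeros_(m.bias)
    return net

rho_net = make_net([3, 128, 128, 128, 128, 128, 1])
optimizer = torch.optim.Adam(rho_net.parameters(), lr=1e-3)  # lr assumed
```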
The reference solutions are obtained by standard finite difference methods. We check the relative \(\ell^{2}\) errors of several macroscopic quantities, such as the density \(\rho(t,x)\) and the electric field \(E(t,x)=-\partial_{x}\phi(t,x)\), between the DNN results and the reference solutions. Root mean square errors (RMSE for brevity) are also provided when necessary. Taking \(\rho\) as an example, these two errors are defined as:
\[\text{RMSE}(\rho):=\sqrt{\frac{1}{N}\sum_{j=1}^{N}\left|\rho_{\theta,j}^{\text{NN} }-\rho_{j}^{\text{ref}}\right|^{2}},\quad\ell^{2}(\rho):=\sqrt{\frac{\sum_{j} \left|\rho_{\theta,j}^{\text{NN}}-\rho_{j}^{\text{ref}}\right|^{2}}{\sum_{j} \left|\rho_{j}^{\text{ref}}\right|^{2}}}, \tag{4.1}\]
where \(N\) is the total number of mesh points.
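Both metrics translate directly into code; a small NumPy sketch:

```python
import numpy as np

def rmse(rho_nn, rho_ref):
    # Root mean square error over the N mesh points.
    return np.sqrt(np.mean(np.abs(rho_nn - rho_ref) ** 2))

def rel_l2(rho_nn, rho_ref):
    # Relative l2 error, normalized by the reference solution.
    return np.sqrt(np.sum(np.abs(rho_nn - rho_ref) ** 2)
                   / np.sum(np.abs(rho_ref) ** 2))
```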
### Problem I: Landau damping
In this part, the Landau damping case is exhibited to verify the AP property of the two proposed APNN loss functions from kinetic regimes to high-field regimes and to compare their performances. The failure of the vanilla PINN method in the high-field regime is also demonstrated. The initial condition near the equilibrium takes the form:
\[f_{0}(x,v)=\frac{1}{\sqrt{2\pi}}\rho_{0}(x)\exp\left(-\frac{v^{2}}{2}\right), \quad(x,v)\in[0,2\pi/k]\times\mathbb{R}, \tag{4.2}\]
where the initial density \(\rho_{0}\) is:
\[\rho_{0}(x)=1+\alpha\cos(kx). \tag{4.3}\]
The wave number \(k\) is set to be \(0.5\) and the amplitude of the perturbation \(\alpha\) is set as \(0.05\). The computational domain is \([0,2\pi/k]\times[v_{\text{min}},v_{\text{max}}],-v_{\text{min}}=v_{\text{max }}=6\). The initial electric field \(\phi^{0}\) is the solution to the Poisson equation with \(h(x)=1\). It is solved analytically as:
\[\phi_{0}(x)=\frac{\alpha}{k^{2}}\cos(kx). \tag{4.4}\]
Periodic boundary conditions are applied in the \(x\) domain.
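For reference, the initial data (4.2)-(4.4) with \(\alpha=0.05\) and \(k=0.5\) can be coded directly:

```python
import numpy as np

alpha, k = 0.05, 0.5

def rho0(x):
    # Initial density (4.3).
    return 1.0 + alpha * np.cos(k * x)

def phi0(x):
    # Initial electric potential (4.4), solving the Poisson equation with h = 1.
    return alpha / k**2 * np.cos(k * x)

def f0(x, v):
    # Near-equilibrium initial distribution (4.2).
    return rho0(x) / np.sqrt(2.0 * np.pi) * np.exp(-v**2 / 2.0)
```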
The detailed initial conditions (ICs) and boundary conditions (BCs) are specified for the APNN approaches. For the micro-macro decomposition based APNN method, the ICs for \(\rho_{0},g_{0},\phi_{0}\) are given by:
\[\rho_{0}(x)=1+\alpha\cos(kx),\quad\phi_{0}(x)=\frac{\alpha}{k^{2}}\cos(kx), \quad g_{0}(x,v)=\frac{1}{\varepsilon}\left\{f_{0}(x,v)-\rho_{0}(x)\mathcal{ M}_{0}(x,v)\right\}, \tag{4.5}\]
where \(\mathcal{M}_{0}\) is the Maxwellian as:
\[\mathcal{M}_{0}(x,v)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{(v+\partial_{x} \phi_{0}(x))^{2}}{2}\right). \tag{4.6}\]
Periodic BCs on \(\rho,g\) are directly imposed based on the periodic BC for \(f\). For the mass conservation based APNN method and the vanilla PINN method, the IC and periodic BC on \(f\) carry over directly. To improve the numerical performance, exact periodic BCs are enforced via a Fourier-basis ansatz: we apply a transform \(\mathcal{T}:x\rightarrow\{\sin(kjx),\cos(kjx)\}_{j=1}^{n}\) before the first layer of the neural networks and set \(n=1\).
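A minimal sketch of this input transform, assuming a PyTorch implementation (the module name is ours); any network composed with it is exactly \(2\pi/k\)-periodic in \(x\) by construction:

```python
import torch
import torch.nn as nn

class PeriodicEmbedding(nn.Module):
    """Map x -> {sin(k j x), cos(k j x)}_{j=1..n}; the text sets n = 1."""
    def __init__(self, k, n=1):
        super().__init__()
        self.k, self.n = k, n

    def forward(self, x):
        # x: tensor of shape (batch, 1); output: (batch, 2n).
        feats = [torch.sin(self.k * j * x) for j in range(1, self.n + 1)]
        feats += [torch.cos(self.k * j * x) for j in range(1, self.n + 1)]
        return torch.cat(feats, dim=-1)
```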
Here, we present the performance of the two APNN models in multiscale regimes, which are characterized by the parameter \(\varepsilon\). For each APNN model, we plot the density \(\rho\) and the electric field \(E\) as functions of \(x\) at different times, as well as the time evolution of the electric energy \(\|E(t)\|_{L^{2}}\). Figure 1 and Figure 2 illustrate that the two proposed APNN models achieve good agreement with the reference solutions across kinetic, intermediate, and high-field regimes. These numerical results confirm the AP property of the two proposed APNN methods and support our analysis. As a comparison, we plot the results of the vanilla PINN method in Figure 3 and Figure 4. The vanilla PINN method exhibits slightly poorer performance in kinetic regimes (\(\varepsilon=1.0,0.5\)) and fails to capture the asymptotic limit regime. Overall, the two proposed APNN models improve on the vanilla PINN method and can reliably reproduce solutions for multiscale plasma physics systems.
Moreover, in the kinetic regime, we observe different performance of the two proposed APNN methods at different time scales. The mass conservation based APNN method can capture the correct electric energy evolution over a long duration, up to \(t=5.0\) (see right column in Figure 2). However, the micro-macro decomposition based APNN method can only accurately characterize the dynamic behaviors over a short duration, up to \(t=1.0\) (see errors reported in Table 1, Table 2, Table 3). A possible reason is that the variable order of the non-equilibrium part \(g\) may influence the training of DNNs [32]. As time increases, the magnitude of \(g\) is significantly reduced. This may encumber the DNN training, thereby inducing the inaccuracy of the micro-macro decomposition based APNN method over a long time duration. This weakness could be alleviated by employing self-adaptive PINNs [35, 42]. One promising strategy is to apply a time-dependent weight to the non-equilibrium part \(g\) [35]. Another is to allocate a time-dependent number of collocation points for the non-equilibrium part \(g\) [42]. These self-adaptive strategies could assist in precisely characterizing \(g\) with variable order, thus improving the long-time accuracy of the micro-macro decomposition based APNN method for the VPFP system. In contrast, the mass conservation based APNN method gets rid of \(g\) and introduces the density \(\rho\). The density \(\rho\) and the distribution \(f\) remain of comparable \(O(1)\) magnitude, which broadens its applicability over long times.
For the VPFP system, it is noteworthy that the hard or soft constraint type of the mass conservation conditions does not significantly influence the accuracy of the two proposed APNN methods. In every numerical case tabulated, the conservation condition (\(\langle g\rangle=0\)) is treated as a hard constraint in the micro-macro decomposition based APNN method. We also examine the soft constraint (\(\langle f\rangle=\rho\)), constructed in the mass conservation based APNN method, which performs similarly to its hard-constraint counterpart (see Figure 1 and Figure 2). Thus, the APNN methods achieve comparable performance with different constraint types.
Figure 1: Multiscale linear Landau damping solved by the micro-macro decomposition based APNN method. Density \(\rho\) (left column) and electric field \(E\) (middle column) as functions of space \(x\) at \(t=0.0,0.5,1.0\); electric energy (right column) as a function of time \(t\) up to \(t=1.0\) with the kinetic and intermediate regimes, and \(t=5.0\) with the high-field regime. The electric energy is plotted in log scale. Neural networks are \([3,128,128,128,128,128,1]\) for \(\rho,\phi\) and \([4,256,256,256,256,256,1]\) for \(g\). Batch size is 512 in domain and 256 for initial condition. Penalty \(\lambda_{1}=300\), \(\lambda_{3}=1\) for kinetic regime; penalty \(\lambda_{1}=0.5\), \(\lambda_{3}=1\) for intermediate regime; and penalty \(\lambda_{1}=0.5\), \(\lambda_{3}=1\) for high-field regime.
Figure 2: Multiscale linear Landau damping solved by the mass conservation based APNN method. Density \(\rho\) (left column) and electric field \(E\) (middle column) as functions of space \(x\) at \(t=0.0,0.5,1.0\); electric energy (right column) as a function of time \(t\) up to \(t=5.0\). Neural networks are \([3,128,128,128,128,1]\) for \(\rho,\phi\) and \([4,256,256,256,256,256,1]\) for \(f\). Batch size is \(512\) in domain and \(256\) for initial condition and \(256\) for conservation condition \(\rho=\langle f\rangle\). Penalty \(\kappa_{1}=120\), \(\kappa_{3}=\kappa_{4}=1\) for kinetic regime; penalty \(\kappa_{1}=150\), \(\kappa_{3}=\kappa_{4}=1\) for intermediate regime; and penalty \(\kappa_{1}=500\), \(\kappa_{3}=\kappa_{4}=1\) for high-field regime.
Table 2: Errors of linear Landau damping with intermediate regime (\(\varepsilon=0.5\)). Relative \(\ell^{2}\) error and RMSE of (a) density \(\rho\), (b) electric field \(E\) and (c) electric energy for the micro-macro decomposition based APNN method (MM for short), the mass conservation based APNN method (MC for short) and the vanilla PINN method (PINN for short).
Figure 3: Multiscale linear Landau damping solved by the vanilla PINN method with kinetic and intermediate regimes. Density \(\rho\) (left column) and electric field \(E\) (middle column) as functions of space \(x\) for \(t=0.0,0.5,1.0\); electric energy (right column) as a function of time \(t\) up to \(t=1.0\). Neural networks are \([3,128,128,128,128,128,128,1]\) for \(\phi\) and \([4,256,256,256,256,256,1]\) for \(f\). Batch size is 512 in domain and 256 for initial condition. Penalty \(\mu_{1}=30\), \(\mu_{3}=1\) for kinetic regime; and penalty \(\mu_{1}=50\), \(\mu_{3}=1\) for intermediate regime.
Figure 4: Linear Landau damping solved by vanilla PINN method with high-field regime (\(\varepsilon=0.01\)). Neural networks and batch size are the same as kinetic regime case. Penalty \(\mu_{1}=100\), \(\mu_{3}=1\).
### Problem II: The bump-on-tail case
In this part, we test whether the proposed APNN methods remain effective when the initial data is non-equilibrium. Consider the "double peak" initial data:
\[f_{0}(x,v)=\frac{1}{\sqrt{2\pi}}\left(\frac{9}{10}\exp\left(-\frac{v^{2}}{2} \right)+\frac{2}{10}\exp\left(-4(v-4.5)^{2}\right)\right)(1+\alpha\cos(kx)),\]
where \(\alpha=0.05,k=0.5\) and \((x,v)\in[0,2\pi/k]\times[-v_{\max},v_{\max}]\) with \(v_{\max}=8\). The ICs for \(\rho,\phi\) are the same as the Landau damping case. The periodic BCs are applied.
Here, we demonstrate the performance of the two proposed models from kinetic regimes to high-field regimes. The micro-macro decomposition based APNN method exhibits inaccuracy across all regimes, as depicted in Figure 5, Figure 6 and Figure 7. As in the Landau damping case, this inaccuracy is attributable to the severe variations in the magnitude of \(g\), which occur when the dynamics start from non-equilibrium initial data. As time progresses, these variations lead to a drastic change in the order of \(g\), adversely impacting the performance of the neural network function \(g\) and resulting in inaccurate learning outcomes [32]. In contrast, the mass conservation based APNN method maintains the outputs of the neural networks for \(\rho\) and \(f\) at a consistent order of \(O(1)\), thereby ensuring the good learning performance shown in Figure 8. As illustrated by this case, the mass conservation based APNN method adapts to non-equilibrium initial data, which underlines its flexibility and robustness.
Table 3: Errors of linear Landau damping with high-field regime (\(\varepsilon=0.01\)). Relative \(\ell^{2}\) error and RMSE of (a) density \(\rho\), (b) electric field \(E\) and (c) electric energy for the micro-macro decomposition based APNN method (MM for short), the mass conservation based APNN method (MC for short) and the vanilla PINN method (PINN for short).
Figure 5: The bump-on-tail case solved by the micro-macro decomposition based APNN method with kinetic regime (\(\varepsilon=1.0\)). Density \(\rho\) as a function of space \(x\) for \(t=0.0,0.5,1.0\) (top). Electric field \(E\) as a function of space \(x\) for \(t=0.0,0.5,1.0\) (bottom). Neural networks are \([3,128,128,128,128,1]\) for \(\rho,\phi\) and \([4,256,256,256,256,256,1]\) for \(g\). Batch size is \(512\) in domain and \(256\) for initial condition. Penalty \(\lambda_{1}=50\) and \(\lambda_{3}=1\). Errors: Relative \(\ell^{2}\) error of \(\rho\) is \(1.93\times 10^{-3}\) at \(t=0.5\) and \(3.01\times 10^{-3}\) at \(t=1.0\). Relative \(\ell^{2}\) error of \(E\) is \(6.28\times 10^{-2}\) at \(t=0.5\) and \(1.58\times 10^{-1}\) at \(t=1.0\).
Figure 6: The bump-on-tail case solved by the micro-macro decomposition based APNN method with intermediate regime (\(\varepsilon=0.3\)). Density \(\rho\) as a function of space \(x\) for \(t=0.0,0.5,1.0\) (top). Electric field \(E\) as a function of space \(x\) for \(t=0.0,0.5,1.0\) (bottom). Neural networks and batch size are the same as kinetic regime case. Penalty \(\lambda_{1}=1\) and \(\lambda_{3}=1000\). Errors: Relative \(\ell^{2}\) error of \(\rho\) is \(4.76\times 10^{-3}\) at \(t=0.5\) and \(9.58\times 10^{-3}\) at \(t=1.0\). Relative \(\ell^{2}\) error of \(E\) is \(1.44\times 10^{-1}\) at \(t=0.5\) and \(7.86\times 10^{-1}\) at \(t=1.0\).
### Problem III: A Riemann problem
In this part, a one-dimensional Riemann problem is used to examine the performance of the two proposed APNN methods in the presence of discontinuities. The piecewise constant initial condition is defined as:
\[\begin{cases}(\rho_{l},h_{l})=(1/8,1/2),&0\leq x<1/4,\\ (\rho_{m},h_{m})=(1/2,1/8),&1/4\leq x<3/4,\\ (\rho_{r},h_{r})=(1/8,1/2),&3/4\leq x\leq 1.\end{cases} \tag{4.7}\]
The exact initial condition of \(\phi\) is solved from the Poisson equation (2.1b) as:
\[\begin{cases}\phi_{l}=\frac{3}{16}x^{2}-\frac{3}{256},&0\leq x<1/4,\\ \phi_{m}=-\frac{3}{16}(x-1/2)^{2}+\frac{3}{256},&1/4\leq x<3/4,\\ \phi_{r}=\frac{3}{16}(x-1)^{2}-\frac{3}{256},&3/4\leq x\leq 1.\end{cases} \tag{4.8}\]
The corresponding first-order derivative \(\partial_{x}\phi\) is directly derived. Set \(f\) initially as:
\[f_{0}(x,v)=\frac{\rho_{0}}{\sqrt{2\pi}}\exp\left(-\frac{(v+\partial_{x}\phi_{0 }(x))^{2}}{2}\right). \tag{4.9}\]
Again, a periodic BC in the \(x\) direction is applied.
We check the ability of the two proposed APNN methods to handle discontinuities. The macroscopic variables \(\rho,\phi\) and the flux \(j(t,x)=\int_{\mathbb{R}}vf(t,x,v)\mathrm{d}v\) are plotted in the high-field case (\(\varepsilon=0.001\)). Figure 9 and Figure 10 show that the proposed APNN methods can capture the sharp transition at singularities of the solution. High-frequency information such as sharp transitions is challenging for neural networks to learn [46]. Both APNN methods overcome this difficulty in this case.
We further note that the micro-macro decomposition based APNN method provides more accurate profiles over a shorter time frame, compared to the mass conservation based APNN method. This observation is supported by Table 4 and the bottom rows of Figure 9 and Figure 10. One possible explanation lies in the
Figure 7: The bump-on-tail case solved by the micro-macro decomposition based APNN method with high-field regime (\(\varepsilon=0.001\)). Density \(\rho\) as a function of space \(x\) for \(t=0.0,0.5,1.0\) (top). Electric field \(E\) as a function of space \(x\) for \(t=0.0,0.5,1.0\) (bottom). Neural networks, batch size and penalty are the same as intermediate regime case. Errors: Relative \(\ell^{2}\) error of \(\rho\) is \(2.94\times 10^{-3}\) at \(t=0.5\) and \(2.98\times 10^{-3}\) at \(t=1.0\). Relative \(\ell^{2}\) error of \(E\) is \(2.10\times 10^{-1}\) at \(t=0.5\) and \(3.63\times 10^{-1}\) at \(t=1.0\).
Figure 8: The bump-on-tail case solved by the mass conservation based APNN method. Density \(\rho\) (left column) and electric field \(E\) (right column) as functions of space \(x\) at \(t=0.0,0.5,1.0\). Neural networks are \([3,128,128,128,128,128,128,1]\) for \(\rho,\phi\) and \([4,256,256,256,256,256,1]\) for \(f\). Batch size is \(512\) in domain, \(256\) for initial condition and \(256\) for conservation condition \(\rho=\langle f\rangle\). Penalty \(\kappa_{1}=50\), \(\kappa_{3}=\kappa_{4}=1\) for each regimes. Errors: \((a)\) Relative \(\ell^{2}\) error of \(\rho\) is \(1.51\times 10^{-3}\) at \(t=0.5\) and \(1.32\times 10^{-3}\) at \(t=1.0\). Relative \(\ell^{2}\) error of \(E\) is \(4.90\times 10^{-2}\) at \(t=0.5\) and \(6.53\times 10^{-2}\) at \(t=1.0\). \((b)\) Relative \(\ell^{2}\) error of \(\rho\) is \(1.02\times 10^{-3}\) at \(t=0.5\) and \(5.83\times 10^{-4}\) at \(t=1.0\). Relative \(\ell^{2}\) error of \(E\) is \(3.67\times 10^{-2}\) at \(t=0.5\) and \(3.22\times 10^{-2}\) at \(t=1.0\). \((c)\) Relative \(\ell^{2}\) error of \(\rho\) is \(1.93\times 10^{-3}\) at \(t=0.5\) and \(3.01\times 10^{-3}\) at \(t=1.0\). Relative \(\ell^{2}\) error of \(E\) is \(6.28\times 10^{-2}\) at \(t=0.5\) and \(1.58\times 10^{-1}\) at \(t=1.0\).
explicit representation of the non-equilibrium component \(g\) in the micro-macro decomposition based APNN method. This explicit representation enhances the DNNs' ability to accurately capture the non-equilibrium part of the distribution. The non-equilibrium part has a strong influence on the convergence of the governing VPFP system [32]. As a result, the DNN can provide more detailed and insightful information. However, it is important to highlight that this improvement in accuracy is not universal: it holds only when the computed time is relatively short and \(\rho\) and \(g\) are comparable in magnitude.
\begin{table}
\begin{tabular}{c c c} \hline \hline \(\ell^{2}(\rho)\) & \(t=0.0\) & \(t=0.2\) \\ \hline MM & \(9.55\times 10^{-2}\) & \(7.60\times 10^{-2}\) \\ MC & \(7.36\times 10^{-2}\) & \(1.34\times 10^{-2}\) \\ \hline \hline \end{tabular}
\begin{tabular}{c c c} \hline \hline RMSE(\(\rho\)) & \(t=0.0\) & \(t=0.2\) \\ \hline MM & \(3.46\times 10^{-2}\) & \(2.61\times 10^{-2}\) \\ MC & \(2.67\times 10^{-2}\) & \(4.59\times 10^{-2}\) \\ \hline \hline \end{tabular}
\begin{tabular}{c c c} \hline \hline \(\ell^{2}(E)\) & \(t=0.0\) & \(t=0.2\) \\ \hline MM & \(1.40\times 10^{-3}\) & \(1.58\times 10^{-3}\) \\ MC & \(8.32\times 10^{-4}\) & \(2.01\times 10^{-3}\) \\ \hline \hline \end{tabular}
\begin{tabular}{c c c} \hline \hline \(\ell^{2}(\text{flux})\) & \(t=0.0\) & \(t=0.2\) \\ \hline MM & \(1.64\times 10^{-1}\) & \(1.46\times 10^{-1}\) \\ MC & \(1.76\times 10^{-1}\) & \(2.34\times 10^{-1}\) \\ \hline \hline \end{tabular}
\begin{tabular}{c c c} \hline \hline RMSE(flux) & \(t=0.0\) & \(t=0.2\) \\ \hline MM & \(3.23\times 10^{-3}\) & \(2.60\times 10^{-3}\) \\ MC & \(3.46\times 10^{-3}\) & \(4.17\times 10^{-3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Errors of the Riemann problem with high-field regime (\(\varepsilon=0.001\)). Relative \(\ell^{2}\) error and RMSE of density \(\rho\), electric field \(E\) and flux for the micro-macro decomposition based APNN method (MM for short) and the mass conservation based APNN method (MC for short).
### Problem IV: Mixing regimes
The proposed APNN methods are now tested in the case with mixing regimes. In these regimes, the scale parameter \(\varepsilon\) fluctuates across several orders of magnitude in space. The parameter \(\varepsilon\) is constructed as:
\[\varepsilon(x)=\begin{cases}\varepsilon_{0}+\frac{1}{2}\left(\tanh(5-10x)+ \tanh(5+10x)\right),&-1\leq x\leq 0.3,\\ \varepsilon_{0},&0.3<x\leq 1,\end{cases} \tag{4.10}\]
where \(\varepsilon_{0}\) is set as \(0.001\). Hence both the kinetic and high-field regimes are under consideration. The initial conditions for \(\rho,f\) are given by:
\[\rho_{0}(x)=\frac{\sqrt{2\pi}}{6}(2+\sin(\pi x)),\quad f_{0}(x,v)=\frac{\rho_ {0}(x)}{\sqrt{2\pi}}\exp(-\frac{(v+\partial_{x}\phi_{0}(x))^{2}}{2}). \tag{4.11}\]
The initial electric field \(\phi_{0}(x)\) is analytically solved from the Poisson equation (2.1b) with \(h(x)=\sqrt{2\pi}/3\). A periodic BC in the \(x\) direction is imposed.
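For reference, the mixing-regime scale function (4.10) and the initial density in (4.11) translate directly into code; a NumPy sketch:

```python
import numpy as np

def eps_mix(x, eps0=1e-3):
    """Scale parameter of Eq. (4.10): O(1) (kinetic) around x = 0,
    dropping to eps0 (high-field) elsewhere, with a jump at x = 0.3."""
    smooth = eps0 + 0.5 * (np.tanh(5 - 10 * x) + np.tanh(5 + 10 * x))
    return np.where(x <= 0.3, smooth, eps0)

def rho0(x):
    # Initial density of Eq. (4.11).
    return np.sqrt(2.0 * np.pi) / 6.0 * (2.0 + np.sin(np.pi * x))
```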
We report the capacity of the proposed APNN methods to resolve the case with mixing regimes. Figure 11 plots the scale parameter \(\varepsilon\) with a discontinuity at \(x=0.3\). The small \(\varepsilon\) induces a sharp transition in the solution that is hard for DNN-based methods to learn, as explained in the Riemann problem. Figure 12 and Table 5 show that both proposed APNN methods capture the profile of the density \(\rho\) quite well, except at \(x=0.3\), where the solution has a weak discontinuity. This inferior performance is ascribed to the inherent difficulty of learning high-frequency information with neural networks.
Figure 10: The Riemann problem by the mass conservation based APNN method in the high-field regime (\(\varepsilon=0.001\)). Density \(\rho\) as a function of space \(x\) at \(t=0.0,0.2\) (left column). Electric field \(E\) as a function of space \(x\) at \(t=0.0,0.2\) (middle column). Flux as a function of space \(x\) at \(t=0.0,0.2\) (right column). Neural networks are \([3,128,128,128,128,128,1]\) for \(\rho,\phi\) and \([4,256,256,256,256,256,1]\) for \(f\). Batch size is \(512\) in domain, \(512\) for initial condition and \(256\) for conservation condition \(\rho=\langle f\rangle\). Penalty \(\kappa_{3}^{\rho}=3\), \(\kappa_{3}^{f}=1000\) and the rest of penalties equal to \(1\).
Figure 11: The scale parameter \(\varepsilon\) as a function of space \(x\) with several orders of magnitude.
Figure 12: The mixing regimes problem solved by the micro-macro decomposition based and mass conservation based APNN methods. Density \(\rho\) as a function of space \(x\) at \(t=0.0,0.1,0.2\). (\(a\)) Neural networks are \([3,128,128,128,128,128,128,1]\) for \(\rho,\phi\) and \([4,256,256,256,256,1]\) for \(g\). Batch size is \(512\) in domain and \(256\) for initial condition. Penalty \(\lambda_{1}=0.1\) and \(\lambda_{3}=1\). (\(b\)) Neural networks are \([3,128,128,128,128,1]\) for \(\rho,\phi\) and \([4,256,256,256,256,1]\) for \(f\). Batch size is \(512\) in domain, \(256\) for initial condition and \(256\) for conservation condition \(\rho=\langle f\rangle\). Penalty \(\kappa_{1}=\kappa_{3}=\kappa_{4}=1\).
### Problem V: The gravitational case
This section illustrates the efficacy of the proposed APNN methods for the VPFP system with gravitational force. Compared to the previous cases under repulsive electrostatic interaction, the attractive gravitational force induces different behavior. The existence of a unique weak solution locally in time has been proved in [36] for the limit equation with gravitational force.
The initial and boundary conditions applied are the same as in the Landau damping case. However, \(\phi_{0}\) is now the analytical solution to the equation \(\partial_{x}\phi_{0}(x)=\rho_{0}(x)-h(x)\) due to the gravitational force. As shown in Figure 13, the solutions obtained from both APNN methods exhibit good agreement with the exact solution. We conclude that both APNN methods correctly capture the dynamics of the VPFP system with gravitational force.
Figure 13: The gravitational case solved by the micro-macro decomposition based and mass conservation based APNN methods with high-field regime (\(\varepsilon=0.001\)). Density \(\rho\) (left column) and electric field \(E\) (right column) as functions of space \(x\) at \(t=0.0,0.2,0.5\). (\(a\)) Neural networks are \([3,128,128,128,128,128,1]\) for \(\rho,\phi\) and \([4,256,256,256,256,256,1]\) for \(g\). Batch size is \(512\) in domain and \(256\) for initial condition. Penalty \(\lambda_{1}=\lambda_{3}=1\). (\(b\)) Neural networks are \([3,128,128,128,128,1]\) for \(\rho,\phi\) and \([4,256,256,256,256,256,1]\) for \(f\). Batch size is \(512\) in domain, \(256\) for initial condition and \(256\) for conservation condition \(\rho=\langle f\rangle\). Penalty \(\kappa_{1}=50\); and \(\kappa_{3}=\kappa_{4}=1\). Errors: (\(a\)) Relative \(\ell^{2}\) error of \(\rho\) is \(2.72\times 10^{-4}\) at \(t=0.2\) and \(9.78\times 10^{-4}\) at \(t=0.5\). Relative \(\ell^{2}\) error of \(E\) is \(3.00\times 10^{-3}\) at \(t=0.2\) and \(1.77\times 10^{-2}\) at \(t=0.5\). (\(b\)) Relative \(\ell^{2}\) error of \(\rho\) is \(5.82\times 10^{-4}\) at \(t=0.2\) and \(8.75\times 10^{-4}\) at \(t=0.5\). Relative \(\ell^{2}\) error of \(E\) is \(1.52\times 10^{-2}\) at \(t=0.2\) and \(1.94\times 10^{-2}\) at \(t=0.5\).
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(\ell^{2}(\rho)\) & \(t=0.0\) & \(t=0.1\) & \(t=0.2\) \\ \hline MM & \(9.23\times 10^{-4}\) & \(1.36\times 10^{-2}\) & \(3.67\times 10^{-2}\) \\ MC & \(1.85\times 10^{-3}\) & \(2.06\times 10^{-2}\) & \(4.40\times 10^{-2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Error of the mixing regimes problem. Relative \(\ell^{2}\) error of \(\rho\) at \(t=0.0,0.1,0.2\) for the micro-macro decomposition based method (MM for short) and the mass conservation based method (MC for short).
### Problem VI: Uncertainty quantification (UQ) problems
This section highlights the potential of the proposed APNN methods for high-dimensional problems. Such problems present major challenges for classical mesh-based numerical schemes due to substantial computational costs. Here, we consider uncertainty quantification (UQ) problems with high-dimensional random input vectors. The initial conditions with random inputs are set as:
\[\rho_{0}(x,\mathbf{z})=\frac{\sqrt{2\pi}}{2}(2+\cos(2\pi x))+0.1\prod_{i=1}^{10 }\sin\left(\pi z_{i}\right),\ \ z=(z_{1},z_{2},\cdots,z_{10})\sim\mathcal{U}\left([-1,1]^{10}\right), \tag{4.12}\]
where \(z_{1},z_{2},\cdots,z_{10}\) are independent random variables following a uniform distribution over \([-1,1]\). The background charge \(h\) is set as:
\[h(x,\mathbf{z})=\sqrt{2\pi}+0.1\prod_{i=1}^{10}\sin\left(\pi z_{i}\right),\ \ z=(z_{1},z_{2},\cdots,z_{10})\sim\mathcal{U}\left([-1,1]^{10}\right). \tag{4.13}\]
Then one analytically solves initial \(\phi_{0}\) by the Poisson equation and defines:
\[f_{0}(x,v,\mathbf{z})=\frac{\rho_{0}}{\sqrt{2\pi}}\exp(-\frac{\left(v+\partial _{x}\phi_{0}\right)^{2}}{2}). \tag{4.14}\]
The computational domain is \((x,v)\in[0,1]\times[-6,6]\). Periodic boundary condition in \(x\) direction is applied. The high-field limit characterized by \(\varepsilon=0.001\) is considered.
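To make the Monte Carlo evaluation reported below concrete, here is a NumPy sketch of the random-input sampling for (4.12)-(4.13); the sample count matches the \(10^{4}\) simulations used in the next paragraph, and the helper names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.uniform(-1.0, 1.0, size=(10_000, 10))  # 10^4 draws of z ~ U([-1,1]^10)

def rho0(x, z):
    # Random initial density of Eq. (4.12); z has shape (..., 10).
    return (np.sqrt(2.0 * np.pi) / 2.0 * (2.0 + np.cos(2.0 * np.pi * x))
            + 0.1 * np.prod(np.sin(np.pi * z), axis=-1))

x = 0.25
expected_rho0 = rho0(x, Z).mean()  # Monte Carlo estimate of E_z[rho0(x, z)]
```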
We present the performance of the two proposed APNN methods in the high-dimensional case. Density \(\rho\) and electric field \(E\) are evaluated at \(t=0.0,0.05,0.1\) by taking the expectation over \(10^{4}\) samples of \((z_{1},\cdots,z_{10})\). Figure 14 and Figure 15 exhibit good agreement between the APNN approximations and the exact solutions. This agreement implies the potential of APNN methods to resolve high-dimensional and multiscale problems.
## 5 Conclusion
In this work, we developed two numerical methods based on PINNs with tailored loss functions suitable for solving the multiscale uncertain Vlasov-Poisson-Fokker-Planck system, including the high-field regime. Our work incorporates the Asymptotic-Preserving (AP) mechanism into the loss function. Besides the micro-macro decomposition based Asymptotic-Preserving neural network (APNN) method, we formulate another APNN method that enforces the mass conservation law. This method exhibits high adaptability for long time durations and non-equilibrium initial data. Diverse numerical examples are conducted across varied regimes to validate and compare the performance of both APNN methods. The numerical examples demonstrate the effectiveness of the two proposed APNN methods for approximating multiscale VPFP systems. While the micro-macro decomposition based APNN method shows its accuracy in discontinuity cases, the mass conservation based method shows broader applicability for cases with long durations or non-equilibrium initial data.
## Acknowledgement
Shi Jin is partially supported by the Strategic Priority Research Program of Chinese Academy of Sciences XDA25010401, the NSFC grant No. 12031013, the Shanghai Municipal Science and Technology Major
Figure 14: UQ problem solved by the micro-macro decomposition based APNN method. Density \(\rho\) (left column) and electric field \(E\) (right column) as functions of space \(x\) at \(t=0.0,0.05,0.1\). Neural networks are \([13,128,128,128,128,128,1]\) for \(\rho,\phi\) and \([14,256,256,256,256,256,1]\) for \(g\). Batch size is \(512\) in domain and \(256\) for initial condition. Penalty \(\lambda_{1}=50\), \(\lambda_{3}=1\) for kinetic regime; and \(\lambda_{1}=1\), \(\lambda_{3}=1\) for high-field regime. Errors: (\(a\)) Relative \(\ell^{2}\) error of \(\rho\) is \(4.71\times 10^{-3}\) at \(t=0.05\) and \(5.00\times 10^{-3}\) at \(t=0.1\). Relative \(\ell^{2}\) error of \(E\) is \(1.30\times 10^{-2}\) at \(t=0.05\) and \(2.28\times 10^{-2}\) at \(t=0.1\). (\(b\)) Relative \(\ell^{2}\) error of \(\rho\) is \(4.67\times 10^{-3}\) at \(t=0.05\) and \(7.86\times 10^{-3}\) at \(t=0.1\). Relative \(\ell^{2}\) error of \(E\) is \(2.11\times 10^{-2}\) at \(t=0.05\) and \(1.77\times 10^{-2}\) at \(t=0.1\).
Figure 15: UQ problem solved by the mass conservation based APNN method. Density \(\rho\) (left column) and electric field \(E\) (right column) as functions of space \(x\) at \(t=0.0,0.05,0.1\). Neural networks are \([13,128,128,128,128,128,1]\) for \(\rho,\phi\) and \([14,256,256,256,256,1]\) for \(f\). Batch size is \(512\) in domain, \(256\) for initial condition and \(256\) for conservation condition \(\rho=\langle f\rangle\). Penalty \(\kappa_{1}=100\), \(\kappa_{3}=\kappa_{4}=1\) for kinetic regime and \(\kappa_{1}=100\), \(\kappa_{3}=\kappa_{4}=1\) for high-field regime. Errors: \((a)\) Relative \(\ell^{2}\) error of \(\rho\) is \(7.83\times 10^{-3}\) at \(t=0.05\) and \(1.01\times 10^{-2}\) at \(t=0.1\). Relative \(\ell^{2}\) error of \(E\) is \(2.58\times 10^{-2}\) at \(t=0.05\) and \(4.02\times 10^{-2}\) at \(t=0.1\). \((b)\) Relative \(\ell^{2}\) error of \(\rho\) is \(7.57\times 10^{-3}\) at \(t=0.05\) and \(1.11\times 10^{-2}\) at \(t=0.1\). Relative \(\ell^{2}\) error of \(E\) is \(2.57\times 10^{-2}\) at \(t=0.05\) and \(3.34\times 10^{-2}\) at \(t=0.1\).
Project (2021SHZDZX0102), and the Fundamental Research Funds for the Central Universities. Zheng Ma is supported by NSFC Grant No. 12031013, No. 92270120 and Foundation of LCP.
|
2310.02361 | Event-Enhanced Multi-Modal Spiking Neural Network for Dynamic Obstacle
Avoidance | Autonomous obstacle avoidance is of vital importance for an intelligent agent
such as a mobile robot to navigate in its environment. Existing
state-of-the-art methods train a spiking neural network (SNN) with deep
reinforcement learning (DRL) to achieve energy-efficient and fast inference
speed in complex/unknown scenes. These methods typically assume that the
environment is static while the obstacles in real-world scenes are often
dynamic. The movement of obstacles increases the complexity of the environment
and poses a great challenge to the existing methods. In this work, we approach
robust dynamic obstacle avoidance twofold. First, we introduce the neuromorphic
vision sensor (i.e., event camera) to provide motion cues complementary to the
traditional Laser depth data for handling dynamic obstacles. Second, we develop
an DRL-based event-enhanced multimodal spiking actor network (EEM-SAN) that
extracts information from motion events data via unsupervised representation
learning and fuses Laser and event camera data with learnable thresholding.
Experiments demonstrate that our EEM-SAN outperforms state-of-the-art obstacle
avoidance methods by a significant margin, especially for dynamic obstacle
avoidance. | Yang Wang, Bo Dong, Yuji Zhang, Yunduo Zhou, Haiyang Mei, Ziqi Wei, Xin Yang | 2023-10-03T18:37:29Z | http://arxiv.org/abs/2310.02361v1 | # Event-Enhanced Multi-Modal Spiking Neural Network for Dynamic Obstacle Avoidance
###### Abstract.
Autonomous obstacle avoidance is of vital importance for an intelligent agent such as a mobile robot to navigate in its environment. Existing state-of-the-art methods train a spiking neural network (SNN) with deep reinforcement learning (DRL) to achieve energy-efficient and fast inference speed in complex/unknown scenes. These methods typically assume that the environment is static while the obstacles in real-world scenes are often _dynamic_. The movement of obstacles increases the complexity of the environment and poses a great challenge to the existing methods. In this work, we approach robust dynamic obstacle avoidance twofold. First, we introduce the neuromorphic vision sensor (_i.e._, event camera) to provide _motion cues_ complementary to the traditional Laser depth data for handling dynamic obstacles. Second, we develop an DRL-based event-enhanced multimodal spiking actor network (EEM-SAN) that extracts information from motion events data via _unsupervised representation learning_ and fuses Laser and event camera data with _learnable thresholding_. Experiments demonstrate that our EEM-SAN outperforms state-of-the-art obstacle avoidance methods by a significant margin, especially for dynamic obstacle avoidance.
Dynamic Vision Sensor (DVS); Spiking Neural Network (SNN); Deep Reinforcement Learning (DRL); dynamic obstacle avoidance.

CCS Concepts: Computing methodologies — Vision for robotics; Reinforcement learning; Spiking neural networks.
power consumption and low latency obstacle avoidance (Beng et al., 2017; Chen et al., 2018; Wang et al., 2019). With recent advances in deep learning, training SNN with RL for obstacle avoidance becomes the mainstream method in the field and achieves promising results. However, existing methods typically assume that the environment is static (Beng et al., 2017; Wang et al., 2019) while in real-world scenes such as malls and streets, the obstacles (_e.g._, pedestrians and vehicles) are not always static but often _dynamic_. The movement of obstacles makes the scene become more complex and leads to a faster relative speed between the agent and the obstacle, which requires the agent to make decisions and take avoidance actions in a shorter time, posing a great challenge to the existing methods.
In this work, we strive to embrace challenges toward robust _dynamic obstacle avoidance_. We approach this twofold. First, given that the neuromorphic event camera called Dynamic Vision Sensor (DVS) (Shen et al., 2016; Wang et al., 2019) can record a _high-frequency_ stream of asynchronous brightness change events with extremely _low latency_ (in the order of \(\mu\)s) and _low power consumption_, we for the first time introduce the two-dimensional DVS event modality into the SNN-based obstacle avoidance framework to provide motion cues complementary to the traditional Laser depth data for handling dynamic obstacles, building an omni perception of scenes, similar to human stereo vision (Shen et al., 2016). Second, we develop a DRL-based event-enhanced multimodal spiking actor network (EEM-SAN) that achieves dynamic obstacle avoidance in a deep reinforcement learning manner. EEM-SAN is built on three key modules: (i) a hybrid spiking variational autoencoder (HSVAE) that extracts information from DVS event data via _unsupervised representation learning_; (ii) a population coding (PC) module that combines population coding (Shen et al., 2016) and Poisson coding (Shen et al., 2016) to decode information from the activity of neurons; and (iii) a middle fuse decision module with learnable thresholding (MFDM-LT) designed for multimodal data fusion.
We perform extensive experiments to demonstrate the efficacy of our method and show that DVS events provide a powerful and complementary cue for dynamic obstacle avoidance (Figure 1). In summary, our contributions are:
* the first solution to solve the challenging dynamic obstacle avoidance problem using a deep-reinforcement-learning-based spiking neural network with robot state and both Laser and DVS data as input, action decision as output;
* a novel hybrid spiking variational autoencoder that decouples the representation learning of DVS event data from the whole reinforcement learning and greatly facilitates the training process;
* a new middle fuse decision module with learnable thresholding to robustly integrate Laser and DVS data.
## 2. Background and Related Work
_Leaky Integrate-and-Fire (LIF) spiking neuron model._ A spiking neural network (SNN) is a bio-plausible neural network that simulates biological information processing: neurons exchange information through spikes (or action potentials). Many different mathematical spiking neuron models have been developed, spanning from the simplest Integrate-and-Fire (IF) (Wang et al., 2019) to the sophisticated Spike Response Model (SRM) (Shen et al., 2016). Our approach leverages the _Leaky Integrate-and-Fire (LIF) model_ (Shen et al., 2016), a simplified variant of the SRM model. We can define an \(n_{l}\)-layer feedforward SNN architecture with LIF neurons. Given \(N^{l}\) incoming spike trains at layer \(l\), \(s_{j}^{l}(t)\), the SNN forward propagation is mathematically defined as:
\[\begin{split} v_{i}^{l+1}(t)&=\sum_{j=1}^{N^{l}}w_{ij}s_{j}^{l}(t)+v_{i}^{l+1}(t-1)f_{d}\left(s_{i}^{l+1}(t-1)\right)+b_{i}^{l+1},\\ s_{i}^{l+1}(t)&=f_{s}\left(v_{i}^{l+1}(t)\right),\\ f_{d}(s(t))&=\begin{cases}D&s(t)=0\\ 0&s(t)=1\end{cases},\end{split} \tag{1}\]
where \(w_{ij}\) is the synaptic weight between the \(j\)-th neuron on the \(l\)-th layer and the \(i\)-th neuron on the layer \(l+1\); \(b_{i}^{l+1}\) is an adjustable bias, and \(D\) is a constant. The operator \(f_{s}(\cdot)\) is a spike function defined as:
\[f_{s}(v):v\to s,\quad s(t):=s(t)+\delta(t-t^{(f+1)}), \tag{2}\] \[t^{(f+1)}=\min\{t:v(t)=\Theta,\,t>t^{(f)}\}, \tag{3}\]
where \(s(t)\) is a sequence of spikes called a spike train, \(\delta\) is the Dirac delta function, whose value is zero everywhere except at zero and whose integral over the entire real line equals one, and \(\Theta\) is the membrane potential threshold, which is static and the same for all neurons in the network (Shen et al., 2016).
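To make Eqs. (1)-(3) concrete, here is a minimal NumPy sketch of one discrete LIF update for a layer of neurons; the constants `D` and `theta` are illustrative placeholders rather than values taken from the paper:

```python
import numpy as np

def lif_step(v_prev, s_prev, w, s_in, b, D=0.75, theta=0.5):
    """One LIF update. v_prev: previous membrane potentials; s_prev: previous
    output spikes (0/1); w: weight matrix; s_in: binary input spikes."""
    decay = D * (1.0 - s_prev)          # f_d: leak factor D, reset to 0 after a spike
    v = w @ s_in + decay * v_prev + b   # integrate weighted input spikes, Eq. (1)
    s = (v >= theta).astype(float)      # f_s: fire once the threshold is crossed
    return v, s
```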
_Dynamic Vision Sensor._ DVS is a bio-inspired sensor that reports per-pixel brightness changes in log scale as a stream of asynchronous events (Shen et al., 2016; Wang et al., 2019). Compared to conventional frame-based cameras, DVS offers a very high dynamic range (140 dB versus 60 dB)
Figure 1. Illustration of obstacle avoidance using different sensors: (a) Laser can help detect large static objects but fails to perceive fast-moving objects; (b) event camera DVS excels to capture moving objects but cannot provide depth information to avoid large textureless obstacles; and (c) combining Laser and DVS achieves robust obstacle avoidance.
and high temporal resolution (in the order of \(\mu\)s). An event, \(e\), encodes three pieces of information: the pixel location \((x,y)\) of the event; the timestamp \(t\), which records the time when the event is triggered; and the polarity \(p\in\{-1,1\}\), which reflects the direction of the brightness change. Formally, a set of events can be defined as
\[\mathcal{E}=\{e_{k}\}_{k=1}^{N}=\{[x_{k},y_{k},t_{k},p_{k}]\}_{k=1}^{N}. \tag{4}\]
In constant lighting conditions, events are triggered by moving edges (_e.g._, object contour and texture boundaries), making the DVS a natural edge extractor. However, this attractive feature also poses a unique challenge since events predominantly stem from edges, making the measured events inherently sparse. Asynchronous and sparse events cannot be effectively handled by CNN-based approaches designed for conventional frames.
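One common workaround, consistent with the 128×128×1 event frames used as encoder input in Section 3, is to accumulate the event stream into a dense frame. Below is a minimal sketch of one such accumulation (summing polarities per pixel); the exact representation fed to a given network is a design choice, and this version is only illustrative:

```python
import numpy as np

def events_to_frame(events, h=128, w=128):
    """Accumulate events {(x, y, t, p)} into a 2-D frame by summing
    polarities per pixel, yielding a dense input for a conv encoder."""
    frame = np.zeros((h, w), dtype=np.float32)
    for x, y, t, p in events:
        frame[int(y), int(x)] += p
    return frame
```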
_DVS-based Robot Control._ Most robotics applications use traditional frame-based cameras as their perception devices. However, frame-based cameras have inherent characteristics such as high data volume, low temporal resolution, and high latency, which weaken their ability to perceive fast-moving objects and greatly limit the robot's manipulation capabilities. In this context, event cameras have attracted the attention of scholars in the field of robotics. By combining event cameras with robot perception and control, a series of breakthroughs in robot control has emerged, overcoming the limitations of traditional frame-based cameras. (Zang et al., 2017) is the first work to use DVS to avoid high-speed moving objects. To further explore the advantages of DVS, Falanga _et al._ (Falanga et al., 2018) conducted experiments on UAVs equipped with three sensors: monocular and stereo frame-based cameras, and DVS. The results showed that the DVS (\(2\sim 4\)_ms_) had significantly lower delay than the monocular (\(26\sim 40\)_ms_) and stereo frame-based cameras (\(17\sim 70\)_ms_) when operating within the perception range of \(2\)_m_. Sanket _et al._ (Sanket et al., 2018) used an artificial neural network (ANN) to segment independent moving objects from event streams and reasoned about their 3D motion to perform evasion tasks. Most of these methods rely on hand-crafted features or priors (_e.g._, Kalman filter (Zang et al., 2017), optical flow (Falanga et al., 2018), and the obstacles (Sanket et al., 2018)) to perform obstacle reasoning and avoidance. Our method differs from the above works in that we combine DRL and SNN with DVS to achieve robot control, enabling continuous autonomous robot navigation and robust yet efficient dynamic obstacle avoidance.
_SNNs for Multimodal-Based Sensing._ SNNs have not been widely explored for multimodal-based sensing (Sanket et al., 2018; Sanket et al., 2018). A few attempts have been made to combine the image and audio modalities. Liu _et al._ (Liu et al., 2018) developed a weighted-sum-based attention scheme to fuse image and audio modalities, where the weights for each modality are dynamically decided. In the same vein, Jia _et al._ (Jia et al., 2018) proposed an MR-SNN algorithm to fuse the same two modalities using a fusion mask. MR-SNN learns a Motif mask for each modality and generates a fusion mask by averaging the two learned masks. Instead of learning in a supervised manner, Rathi _et al._ (Rathi et al., 2018) proposed an unsupervised multimodal learning method, which combines the image and audio modalities by learning cross-modal connections enabled by the Spike Timing Dependent Plasticity (STDP) algorithm. All these methods assess their effectiveness with a simple MNIST classification task. We validate our multimodal-based SNN approach on a more complex practical problem (_i.e._, obstacle avoidance), under both static and dynamic conditions.
_SNNs for Robot Control._ SNNs get substantial attention from the robot control community due to their high energy efficiency when deployed on neuromorphic hardware. A thread of work leverages SNNs for robot obstacle avoidance tasks (Tang et al., 2018; Sanket et al., 2018; Sanket et al., 2018). Tang _et al._ (Tang et al., 2018) proposed a hybrid framework, SDDPG, for mapless navigation tasks. The SDDPG framework consists of an SNN-based actor network and a CNN-based critic network, where the two networks are trained together. Ding _et al._ (Ding et al., 2018) proposed a bio-inspired dynamic spiking threshold scheme to enhance SDDPG's homeostasis in obstacle avoidance tasks under normal and degraded conditions. Later, Tang _et al._ (Tang et al., 2018) extended their approach to continuous control tasks and proposed the PopSAN method. An essential contribution introduced by PopSAN is the population-coding scheme, which effectively addresses the high-dimensional state problem presented in continuous control tasks. Recently, Zhang _et al._ (Zhang et al., 2018) combined multiscale dynamic neuron coding and population coding to improve the performance of a spiking actor network. A critical difference between our approach and these methods is that we leverage two modalities for robot control instead of one.
## 3. Methodology
Figure 2(a) illustrates the diagram of our proposed event-enhanced multimodal spiking actor network (EEM-SAN), which is trained with reinforcement learning. First, the DVS sensor mounted on the agent records a high-frequency stream of brightness change events, which is then encoded by a hybrid spiking variational autoencoder (HSVAE) (Figure 2(b)) into a one-dimensional vector (_i.e._, the DVS State). Then, the obtained DVS state, together with the robot state and Laser state, is processed by a population coding (PC) module (Figure 2(c)) that can decode information from the activity of neurons. Finally, a middle fuse decision module with learnable thresholding (MFDM-LT) (Figure 2(d)) is used to fuse the multimodal data and output the obstacle avoidance decision, which is decoded into the actual control action by Algorithm 1.
### Hybrid Spiking Variational Autoencoder
To exploit useful motion cues embedded in DVS event data, a key problem is how to effectively and efficiently learn its feature representations. Variational Autoencoder (VAE) (Goodfellow et al., 2016) is a powerful tool to extract the high-level embeddings of the input data. It typically consists of an encoder and decoder networks and is trained by:
\[Loss=\frac{1}{w\times h}\sum_{i,j}^{w,h}(x_{i,j}-\overline{x_{i,j}})^{2}+\int p(x)\log\frac{p(x)}{q(x)}, \tag{5}\]

where the first term is the mean squared error (MSE) between the original input image \(x\) and the reconstruction \(\overline{x}\), and the second term measures how well the latent vector \(p(x)\) matches a unit Gaussian \(q(x)\) via the Kullback-Leibler (KL) divergence. \(w\) and \(h\) represent the width and height of the event frame, respectively. VAEs have stable learning abilities in generative models and can be applied to various tasks. Recently, a series of works have attempted to use SNNs to create VAEs, with validation on classification datasets (Sanket et al., 2018; Sanket et al., 2018; Sanket et al., 2018; Sanket et al., 2018). To take full advantage of the event-based nature of the continuous DVS stream and its rich temporal features, we design a novel hybrid spiking variational autoencoder (HSVAE) that contains
an SNN encoder and ANN decoder. The architecture details of HSVAE are shown in Table 1. Unlike traditional spiking neural networks that reset the membrane potential, our method records all SNN-related states during one episode of simulated environment interaction. We record the current value, membrane potential and spike value of every neuron during one episode for the complete backpropagation chain. During deployment, we only need to record the relevant states at the last moment. We verified that SNNs trained in this way can integrate more temporal information to make better decisions. A single SNN layer of our HSVAE is shown in Figure 2(b) where refractory states are omitted for clarity. As illustrated, a stream of events recorded by the event camera mounted on a mobile robot is fed into SNN that can encode the spatio-temporal features of the input data into a latent state \(z\). In Figure 2(b), \(P\) is pre-synaptic potential, \(C\) is current value and \(V\) is the membrane potential of the spiking neuron. The training process of HSVAE is presented in Algorithm 2.
\begin{table}
\begin{tabular}{|c|c|c||c|} \hline
**Layer** & **Kernel** & **Output** & **Layer Type** \\ \hline
input & – & 128\(\times\)128\(\times\)1 & DVS128 \\
1 & 16c4p1s2 & 64\(\times\)64\(\times\)16 & \\
2 & 32c4p1s2 & 32\(\times\)32\(\times\)32 & SNN LIF Encoder \\
3 & 16c4p1s2 & 16\(\times\)16\(\times\)16 & \\
4 & 1c4p1s2 & 8\(\times\)8\(\times\)1 & \\
5 & – & 64 & Flatten() \\
6 & – & 64 & \(\mu=\) FC(\(V^{t}\)) \\
7 & – & 64 & \(\sigma=\) FC(\(V^{t}\)) \\
8 & – & 8\(\times\)8\(\times\)1 & UnFlatten() \\
9 & 16c4p1s2 & 16\(\times\)16\(\times\)16 & \\
10 & 32c4p1s2 & 32\(\times\)32\(\times\)32 & ANN Decoder \\
11 & 16c4p1s2 & 64\(\times\)64\(\times\)16 & \\
12 & 1c4p1s2 & 128\(\times\)128\(\times\)1 & Event Frame \\ \hline
\end{tabular}
* Notation: XcYpZsS represents X convolution filters (Y\(\times\)Y) with padding Z and stride S.
\end{table}
Table 1. Architecture of our Hybrid Spiking VAE (HSVAE).
Figure 2. Overview of EEM-SAN (a) and its three main components: (b) a Hybrid Spiking Variational Autoencoder (HSVAE) module, (c) a Population Coding (PC) module, and (d) a Middle Fuse Decision Module with Learnable Thresholding (MFDM-LT).
```
Require: Robot State, Laser State, event-frame deque D_E = (E_1, ..., E_5)
Require: Learnable encoding means mu and standard deviations sigma for all populations
Require: Middle Fuse Module MF, total timesteps T, min/max wheel speeds v_min, v_max
Ensure: Left and right wheel speeds v_L, v_R
  DVS State = HSVAE(D_E)                          // spiking encoder of HSVAE
  O = [Robot State, Laser State, DVS State]       // multimodal observation
  X = PC(O, mu, sigma)                            // spikes from population coding
  for t = 1, ..., T do
      S_L^(t)(0) = X^(t)(O_0 ... O_L)             // laser spikes at timestep t
      S_D^(t)(0) = X^(t)(O_D ... O_cmd)           // DVS spikes at timestep t
      S_M^(t)(0) = MF(S_L^(t)(0), S_D^(t)(0))     // fused spikes
      for k = 1, ..., K do                        // update LIF neurons in layer k
          c^(t)(k) = d_c * c^(t-1)(k) + W^(k) s^(t)(k-1) + b^(k)
          v^(t)(k) = d_v * v^(t-1)(k) * (1 - s^(t-1)(k)) + c^(t)(k)
          s^(t)(k) = Threshold(v^(t)(k))
      end for
      SpikeCount^(t) = SpikeCount^(t-1) + s^(t)(K)
  end for
  Action = SpikeCount^(T) / T
  v_L = Action[0] * (v_max - v_min) + v_min
  v_R = Action[1] * (v_max - v_min) + v_min
```
**Algorithm 1** Forward propagation through EEM-SAN
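For concreteness, the inner LIF update of Algorithm 1 and the final rate decoding can be sketched in a few lines of PyTorch; the layer size, random input spikes, and variable names below are illustrative stand-ins rather than the released implementation.

```python
import torch

def lif_step(w, b, s_in, c_prev, v_prev, s_prev, v_th, d_c=0.5, d_v=0.75):
    """One LIF update for a fully connected SNN layer, following Algorithm 1.

    c_prev, v_prev, s_prev: this layer's current, membrane potential, and
    spikes at timestep t-1; s_in: spikes arriving from the previous layer.
    """
    c = d_c * c_prev + s_in @ w.T + b        # current integration
    v = d_v * v_prev * (1.0 - s_prev) + c    # soft reset via the (1 - s) gate
    s = (v > v_th).float()                   # hard threshold (forward pass)
    return c, v, s

# toy usage: one 20 -> 64 LIF layer unrolled over T = 5 timesteps
T, n_in, n_out = 5, 20, 64
w, b = torch.randn(n_out, n_in) * 0.1, torch.zeros(n_out)
c, v, s = (torch.zeros(n_out) for _ in range(3))
spike_count = torch.zeros(n_out)
for t in range(T):
    x_t = (torch.rand(n_in) < 0.3).float()   # stand-in input spikes
    c, v, s = lif_step(w, b, x_t, c, v, s, v_th=0.5)
    spike_count += s
action = spike_count / T                     # rate decoding, as in Algorithm 1
```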
### Population Coding
Neurons in the brain often use population coding, and it is difficult to decode correct information from the activity of a single neuron (Sang et al., 2017). Hence, some researchers have begun to use populations of neurons to encode information into spike trains fed into SNNs, instead of simple frequency encoding (_e.g._, Poisson coding) (Sang et al., 2017; Wang et al., 2018). To encode each dimension of the state, we create a population of 10 neurons, where each neuron has a Gaussian receptive field with two parameters: a mean and a standard deviation. These parameters are learned with surrogate backpropagation. Since SAN adopts Poisson encoding, we combine the population coding method with Poisson coding, as in the PopSAN method, in our experimental comparison. For a fair comparison, our method adopts the same population and Poisson coding. Formally, the population coding function can be formulated as:
\[\begin{cases}A_{P_{i,j}}=\exp\left(-\frac{1}{2}\left(\frac{s_{i}-\mu_{i,j}}{\sigma_{i,j}}\right)^{2}\right)\\ \mathbf{A}_{P}=[A_{P_{1,1}},\cdots,A_{P_{i,j}},\cdots,A_{P_{N,J}}]\end{cases}, \tag{6}\]
\[P(O_{k,t}=r)=C_{R}^{r}\,A_{P_{k}}^{r}\left(1-A_{P_{k}}\right)^{R-r}, \tag{7}\]
where \(i\) is the index of the input state (\(i=1,\ldots,N\)), \(j\) is the index of neurons in a population (\(j=1,\ldots,J\)), and \(\mathbf{A}_{P}\) is the stimulation strength after population coding, used for drawing the binary random number (Wang et al., 2018).
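A minimal NumPy sketch of Eqs. 6-7 may help: each of the N state dimensions is encoded by a population of J Gaussian receptive fields, and the resulting stimulation strengths drive the spike draw at every timestep. For simplicity, the binomial-style draw of Eq. 7 is reduced here to independent per-timestep Bernoulli draws with success probability \(A_{P}\); the population size and example state are assumptions on our part.

```python
import numpy as np

def population_encode(state, mu, sigma):
    """Eq. 6: Gaussian receptive-field activations.

    state: (N,) observation; mu, sigma: (N, J) learnable population means
    and standard deviations. Returns stimulation strengths A_P, shape (N, J).
    """
    return np.exp(-0.5 * ((state[:, None] - mu) / sigma) ** 2)

def draw_spikes(a_p, T, rng):
    """Binary spike trains over T timesteps with P(spike) = A_P."""
    return (rng.random((T,) + a_p.shape) < a_p).astype(np.float32)

rng = np.random.default_rng(0)
N, J, T = 4, 10, 5                      # 4 state dims, 10 neurons per population
mu = np.linspace(-1, 1, J)[None, :].repeat(N, axis=0)
sigma = np.full((N, J), 0.5)
state = rng.uniform(-1, 1, size=N)
spikes = draw_spikes(population_encode(state, mu, sigma), T, rng)  # (T, N, J)
```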
### Middle Fuse Decision Module with Learnable Thresholding
The middle fuse decision module consists of a middle fuse (MF) module and a decision module (DM). In MF, the two modalities are first transformed into one-dimensional vectors of length 20 by two fully connected layers composed of LIF neurons, and the two resulting vectors are then fused via element-wise addition. DM contains four fully connected SNN layers, and its forward propagation process is presented in Algorithm 1. Since the threshold function of SNNs is non-differentiable, many methods focus on how to learn and tune it (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Among them, the approximate backpropagation method is widely used due to its efficiency and flexibility. Here, we adopt the STBP (Sang et al., 2017) algorithm, which uses the rectangular function to approximate the gradient of a spike as follows:
\[z(V)=\begin{cases}1&\text{if }|V-V_{th}|<0.5\\ 0&\text{otherwise}\end{cases}, \tag{8}\]
where \(z\) is the pseudo-gradient, \(V\) is the membrane potential, and \(V_{th}\) is the threshold.
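This pseudo-gradient is conveniently expressed as a custom autograd function; the following PyTorch sketch passes the hard spike forward and the rectangular window of Eq. 8 backward (the threshold value is illustrative).

```python
import torch

class RectSpike(torch.autograd.Function):
    """Heaviside spike forward; STBP rectangular pseudo-gradient backward (Eq. 8)."""

    @staticmethod
    def forward(ctx, v, v_th):
        ctx.save_for_backward(v)
        ctx.v_th = v_th
        return (v > v_th).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        z = (torch.abs(v - ctx.v_th) < 0.5).float()  # rectangular window z(V)
        return grad_out * z, None

v = torch.randn(8, requires_grad=True)
s = RectSpike.apply(v, 0.5)
s.sum().backward()   # v.grad is 1 inside the window, 0 outside
```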
Several dynamic thresholding schemes have been developed in recent years to make neuronal thresholds learnable and varied throughout the network. However, most existing methods require strict prerequisites or are not based on gradient descent, which results in a higher computational load or makes them difficult to migrate to other SNNs (Wang et al., 2018; Wang et al., 2018). A few attempts have been made to integrate learnable thresholds with existing back-propagation-based training algorithms on classification tasks (Wang et al., 2018; Wang et al., 2018). Following this path, we further explore the performance of learnable thresholds under multimodal reinforcement learning tasks. We use the learnable thresholding mechanism in our MFDM. This mechanism is endowed with parameter optimization capability through STBP. By doing so, thresholds can differ across the network rather than being fixed globally, which means a neuron's response depends not only on its internal state but also on the threshold level. The key idea of learnable thresholding is to find the comprehensive gradient of the loss function, after which the weights \(W\) and the neurons' thresholds \(H\) are simultaneously updated until convergence. From Eqs. 1 and 8, the partial derivative of the loss function with respect to the threshold can be calculated as follows:
\[\frac{\partial L}{\partial H^{n}}=\sum_{t=1}^{T}\frac{\partial L}{\partial s^{t,n}}\frac{\partial s^{t,n}}{\partial H^{n}}. \tag{9}\]
To ensure that the threshold remains within an appropriate range, we introduce a new parameter \(r\) that defines \(H\) through the hyperbolic tangent, formulated as \(H=\tanh\left(r\right)\). With this, Eq. 9 can be expressed as:
\[\begin{split}\frac{\partial L}{\partial r^{n}}&=\sum_{t=1}^{T}\frac{\partial L}{\partial s^{t,n}}\frac{\partial s^{t,n}}{\partial H^{n}}\frac{\partial H^{n}}{\partial r^{n}}\\ &=-\sum_{t=1}^{T}\frac{\partial L}{\partial s^{t,n}}\,f^{\prime}\left(v^{t,n}-\tanh(r^{n})\right)\left(1-\tanh^{2}(r^{n})\right).\end{split} \tag{10}\]
Improved in this way, the threshold \(H\) of the neurons can be iteratively trained using backpropagation and is guaranteed to lie in the range (-1, 1). In this context, we apply the learnable thresholding scheme to MFDM and let neurons in the same layer share the same threshold to reduce the number of learnable parameters.
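Under the reparameterization \(H=\tanh(r)\), the threshold becomes an ordinary trainable parameter. The sketch below combines Eqs. 8-10 into a straight-through spike function whose surrogate gradient flows to both the membrane potential and the layer-shared threshold; the layer sizes mirror nothing specific, and the decay constants follow the values stated later in Section 4.1, but all names are illustrative.

```python
import torch
import torch.nn as nn

def rect_spike(v, h):
    """Hard spike forward; rectangular surrogate gradient w.r.t. both
    v (Eq. 8) and the threshold h (Eqs. 9-10) in the backward pass."""
    z = ((v - h).abs() < 0.5).float()        # window; no gradient of its own
    surrogate = z * (v - h)                  # d/dv = z, d/dh = -z
    return (v > h).float() + surrogate - surrogate.detach()

class LIFLayerLT(nn.Module):
    """Fully connected LIF layer with a learnable, layer-shared threshold."""

    def __init__(self, n_in, n_out, d_c=0.5, d_v=0.75):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        self.r = nn.Parameter(torch.zeros(1))  # H = tanh(r) stays in (-1, 1)
        self.d_c, self.d_v = d_c, d_v

    def forward(self, x_t, c, v, s):
        h = torch.tanh(self.r)                 # differentiable threshold
        c = self.d_c * c + self.fc(x_t)
        v = self.d_v * v * (1.0 - s) + c
        return c, v, rect_spike(v, h)

layer = LIFLayerLT(20, 64)
c, v, s = torch.zeros(64), torch.zeros(64), torch.zeros(64)
c, v, s = layer(torch.rand(20), c, v, s)
s.sum().backward()                             # populates layer.r.grad (Eq. 10)
```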
## 4. Experiments and Evaluation
We evaluate the obstacle avoidance capability of the proposed EEM-SAN using the success rate (SR) as the metric. SR is the percentage of successful passes among 200 trials, where a successful pass is a trial in which the robot reaches the destination without touching any static or dynamic obstacle within 1000 steps. Our evaluation baseline model and test environment are modified variants of SAN (Song et al., 2018) and its original simulated test environment, respectively.
### Experimental Settings
**Implementation Details.** We integrated the EEM-SAN with the deep reinforcement learning algorithm DDPG (Krizhevsky et al., 2014). We repeated each experiment three times to obtain the mean success rate. The frequencies of the Laser sensor and the DVS are set to 20 Hz and 100 Hz, respectively. The number of timesteps of MFDM-LT is set to 5. The number of groups in the population coding is set to 10. The neuron current decay constant and the voltage decay constant are set to 0.5 and 0.75, respectively. During the training process, we set the batch size to 256 and the learning rate to 0.0001.
**Simulator Setup.** Our experiments are based on the Gazebo simulator (Gazebo, 2019). Both training and testing use the robot operating system (ROS) as middleware. The testing environment is set to be different from the training environment to better validate the generalization capability of a method. Following the existing methods SAN (Song et al., 2018), PopSAN (Song et al., 2018), and BDETT (Bahdan et al., 2018), we developed and tested our method in the same environment and with the same random seeds. Our testing environments are made very challenging to better validate the robustness of an obstacle avoidance method, in the following aspects: (i) **densely packed and highly dynamic obstacles** (eleven dynamic obstacles, all set to move faster than the maximum speed of the robot); (ii) **faster robot speed** (the maximum speed of the robot is twice that in SAN); and (iii) **narrow traversal passages and more densely organized static obstacles**. We experimentally demonstrate in Table 2 that the existing SOTA method SAN (Song et al., 2018) suffers a significant accuracy drop (_i.e._, from 97.8% to 58.0%) when transferred from common scenes to our challenging scenes. More details about the training and testing environments can be found in the Appendix and in SAN (Song et al., 2018).
### Evaluation
We extensively compare the effectiveness of our proposed EEM-SAN against three state-of-the-art methods, SAN (Song et al., 2018), PopSAN (Song et al., 2018), and BDETT (Bahdan et al., 2018), across two different test maps (Bahdan et al., 2018; Song et al., 2018) under both dynamic and static conditions. The experimental results are reported in Table 2 and Figure 3.
_Dynamic Conditions._ To ensure the diversity and difficulty of the scene, we set up eleven dynamic obstacles that reciprocate linearly along different trajectories at different speeds, with the speed of all moving obstacles set slightly higher than the maximum speed of the robot. From the results in Table 2, we can see that our EEM-SAN outperforms all competing methods by a significant margin. For example, compared with the state-of-the-art method BDETT (Bahdan et al., 2018), our method improves _SR_ by 10.8% and 11.8% on the two testing maps, respectively. This clearly demonstrates the effectiveness of our method for dynamic obstacle avoidance.
_Static Conditions._ Although our method was originally designed for dynamic obstacle avoidance, we also tested its performance under static conditions to assess its robustness. From Table 2, we observe that: (1) our method still performs the best among all compared methods for static obstacle avoidance; (2) when the maximum robot speed is reduced from 1.0 m/s to 0.5 m/s, all methods improve significantly, because a slowly moving robot has more time to make a decision and take action, which greatly decreases the difficulty of obstacle avoidance; under this condition, our method achieves very robust avoidance, _i.e._, the success rate _SR_ reaches 1.000; (3) compared to SAN (Song et al., 2018), PopSAN (Song et al., 2018), which is based on SAN but equipped with population coding instead of frequency coding, performs better, especially for the robot with a maximum speed of 1.0 m/s; this indicates that population coding is more suitable for handling rapidly changing scenes than frequency coding; and (4) comparing the results of BDETT (Bahdan et al., 2018) and PopSAN (Song et al., 2018), an obvious performance improvement is found under dynamic conditions but not under static conditions, which reveals that the dynamic thresholding scheme developed in BDETT (Bahdan et al., 2018) benefits much from dynamic scenes but little from static ones. By contrast, our method significantly improves obstacle avoidance performance under both dynamic and static conditions. Furthermore, Figure 3 visualizes an example clearly showing that the existing methods often fail around the right-angle turn, while our method achieves robust obstacle avoidance.
### Ablation Study
To analyze our EEM-SAN, we investigate (a) the importance of DVS event cues; (b) the effectiveness of HSVAE; (c) the effectiveness of MFDM-LT; and (d) the efficiency advantage of SNN over ANN. Tables 3, 4, 5 and Figures 4, 5 summarize our findings.
_Learned Thresholds in MFDM-LT._ As shown in Figure 4, the learned threshold of the DVS branch converges to a higher value than that of the Laser branch under different initial training threshold settings. The reason behind this may be that (i) the DVS modality generates a larger amount of data, and thus a higher threshold is needed to filter out the useless spikes, and (ii) a lower threshold in the Laser layer enables more depth information to pass through and dominate the action decision during the start-up phase of the robot, when many noisy DVS events are generated that interfere with decision making. Another interesting finding is that the threshold values in the decision layers increase as the network goes deeper. This phenomenon can be explained as follows: the modality-fused data can easily fire neurons in shallow layers so as to maintain sufficient information, which is then strictly filtered by deep-layer neurons to extract the most informative data for the final action decision. Furthermore, when we fix the learnable thresholds in MFDM-LT during the training process (Figure 5, MFDM), we find that the performance degrades dramatically. This provides strong evidence for the effectiveness of the learnable threshold scheme in our MFDM-LT.
_Efficiency Advantage of SNN over ANN._ Both the HSVAE and MFDM-LT modules in our EEM-SAN are implemented with SNNs rather than ANNs. To show the advantage of this choice, we compare the computational efficiency of our fully-SNN-based EEM-SAN against its ANN variants in terms of addition and multiplication FLOPs in Table 5. We observe that, compared with the fully-SNN-based architecture (Table 5 (i)), many more computations are required when adopting the ANN-based HSVAE (ii), the ANN-based MFDM-LT (iii), or the fully-ANN-based network (iv). For example, the ANN version of EEM-SAN (iv) is 4.63 and 11.66 times more expensive than the SNN version (i) in terms of addition and multiplication operations, respectively. Prior works [14; 17; 48] have demonstrated that the computational complexity of a network is positively correlated with inference speed and energy consumption, especially when the network is implemented on a neuromorphic device [3; 35; 43]. Therefore, our fully-SNN-based EEM-SAN can achieve much faster inference with much lower energy consumption than its ANN counterpart.
### Limitations
In this work, we make the first attempt to connect multi-sensor representation and fully-SNN-based DRL towards robust and efficient robot control in extreme navigation scenarios with both static and fast-moving dynamic obstacles. Testing our method on different robot platforms (_e.g._, unmanned aerial vehicles) in more challenging real scenes (_e.g._, subterranean environments) would be promising future work but is beyond the focus of this paper. Besides, our EEM-SAN has only been tested in a realistic simulator and has not been deployed on a real robot due to the unavailability of neuromorphic hardware. To conduct such an engineering verification, we are actively seeking permission from the Intel Neuromorphic Research Community (INRC) to use the neuromorphic chip Loihi. We will perform this interesting verification once the hardware becomes available.
## 5. Conclusion
In this paper, we presented an event-enhanced multimodal spiking actor network (EEM-SAN) for autonomous navigation and dependable obstacle avoidance. Our solution is the first to introduce the Dynamic Vision Sensor (DVS) to provide motion cues that complement the traditional Laser depth data for handling dynamic obstacles. EEM-SAN consists of two main modules: a hybrid spiking variational autoencoder (HSVAE) which encodes the DVS event data through unsupervised representation learning, and a middle fuse decision module with learnable thresholding (MFDM-LT) designed for multimodal data fusion. Through extensive validation and ablation studies, we demonstrate the value of DVS event cues, as well as the effectiveness and robustness of our EEM-SAN. In the future, we plan to deploy our method on neuromorphic devices to maximize its advantages in terms of computational efficiency and energy consumption.
###### Acknowledgements.
This work was supported in part by National Key Research and Development Program of China (2022ZD0210500), the National Natural Science Foundation of China under Grants 61972067/U21A20491, and the Distinguished Young Scholars Funding of Dalian (No. 2022RJ01). Ziqi Wei was supported by the open funding of State Key Laboratory of Structural Analysis for Industrial Equipment.
Figure 4. Comparison between learned thresholds in different MFDM-LT layers under different initial training thresholds.
Figure 5. Performance comparison between MFDM variants with and without learnable threshold scheme under different initial training thresholds settings. |
2308.01813 | Deep Neural Networks Fused with Textures for Image Classification | Fine-grained image classification (FGIC) is a challenging task in computer
vision due to small visual differences among sub-categories, but
large intra-class variations. Deep learning methods have achieved remarkable
success in solving FGIC. In this paper, we propose a fusion approach to address
FGIC by combining global texture with local patch-based information. The first
pipeline extracts deep features from various fixed-size non-overlapping patches
and encodes features by sequential modelling using the long short-term memory
(LSTM). Another path computes image-level textures at multiple scales using the
local binary patterns (LBP). The advantages of both streams are integrated to
represent an efficient feature vector for image classification. The method is
tested on eight datasets representing the human faces, skin lesions, food
dishes, marine lives, etc. using four standard backbone CNNs. Our method has
attained better classification accuracy over existing methods with notable
margins. | Asish Bera, Debotosh Bhattacharjee, Mita Nasipuri | 2023-08-03T15:21:08Z | http://arxiv.org/abs/2308.01813v2 | # Deep Neural Networks Fused with Textures for Image Classification
###### Abstract
Fine-grained image classification (FGIC) is a challenging task in computer vision due to small visual differences among sub-categories but large intra-class variations. Deep learning methods have achieved remarkable success in solving FGIC. In this paper, we propose a fusion approach to address FGIC by combining global texture with local patch-based information. The first pipeline extracts deep features from various fixed-size non-overlapping patches and encodes the features by sequential modeling using the long short-term memory (LSTM). Another path computes image-level textures at multiple scales using the local binary patterns (LBP). The advantages of both streams are integrated to represent an efficient feature vector for image classification. The method is tested on eight datasets representing human faces, skin lesions, food dishes, marine lives, etc., using four standard backbone CNNs. Our method has attained better classification accuracy than existing methods with notable margins.
**Keywords:** Convolutional Neural Networks; Face Recognition; Food Classification; Hand Shape;
Local Binary Patterns; Marine Life; Palmprint; Random Erasing Data Augmentation; Skin Lesions.
## 1 Introduction
Fine-grained image classification (FGIC) has been a challenging problem in computer vision over the past decades [1]. It discriminates subtle visual variations among various sub-categories of objects like human faces, flowers, foods, etc. Convolutional neural networks (CNNs) have achieved high performance in FGIC. CNNs represent an object's shape, texture, and
other correlated information in the feature space. In addition to the global image-level description, object-part relations and local patch information have shown their efficacy in mining finer details to solve FGIC. Many works have been devised leveraging attention mechanisms [2], [3]; context encoding [4], [5]; erasing data augmentation [6], [7]; and others. Many works avoid bounding-box annotations and localize essential image regions using weakly supervised part selection [8]. Thus, defining region-based descriptors is a key aspect of enhancing FGIC performance.
In another direction, local binary patterns (LBP) [9] have achieved significant success in describing textural features of human faces and other image categories [10]. LBP is a non-parametric texture descriptor extracted from a grayscale image. It encodes the differences between a pixel and its neighborhood pixels localized in a rectangular grid, _e.g._, 3\(\times\)3. Here, both textural and deep features are fused to formulate an efficient feature vector for image classification. This work proposes a method, namely **D**eep (Neural) **N**etworks fused with **T**extures (DNT), to explore its aptness for FGIC. The first path extracts a deep feature map using a base CNN. Then, the high-level deep feature map is pooled through a set of non-overlapping patches. Next, a global average pooling (GAP) layer is applied to summarize the features, followed by patch encoding using the long short-term memory (LSTM). The other path computes the histograms of LBPs as local texture-based feature descriptors. Finally, these two sets of features are mixed prior to a classification layer. We have experimented on eight small-scale image datasets (1k-15k) representing wide variations in object shape, color, background, texture, etc. The datasets include human faces with age variations [11], [12]; hand shapes/palmprints [13]; skin lesions [14]; natural objects like flowers and underwater sea-lives; and food dishes of India [15] and Thailand [16]. This paper is an improvement of our earlier published work [17]. This extended work contains new results on two more datasets, representing hand shape and skin lesions, in addition to the results of the previous work. The contributions of this paper are as follows:
* The deep features and local binary patterns are fused for image recognition.
* The method achieves satisfactory accuracy on eight image datasets representing the human faces, hand, skin lesions, food dishes, and natural object categories.
The rest of this paper is organized as follows: Section 2 summarizes related works, and Section 3 describes the proposed method. The experimental results are discussed in Section 4, followed by the conclusion in Section 5.
## 2 Related Works
Recognition of human faces, food items, and other objects (_e.g._, flowers, marine lives, etc.) is a challenging FGIC task. Apart from the global feature descriptor rendered from a full image, patch descriptors have attained remarkable progress using deep learning. Part-based methods focusing on local descriptions and semantic correlations have been integrated [3]. In this direction, multi-scale region proposals and fixed-size patches have attracted much research attention. In [5], multi-scale region features are encoded via LSTM units. In [4], Mask R-CNN is employed to localize discriminative regions. Several approaches have explored attention mechanisms to improve FGIC performance, including food recognition [3]. A few methods have proposed an ensemble of various CNNs, or a fusion of two or more sub-networks, for performance
gain [15]. Recently, vision transformers (ViT) have embedded non-overlapping patches with multi-head self-attention modules [18]. The performance can be boosted by random erasing data augmentation for multi-scale patch-based feature representation [7]. Classification of various food dishes is discussed in [19], [20], [21]. Researchers have presented comprehensive studies on FGIC [1] and food recognition [22]. The Forward Step-wise Uncertainty-Aware Model Selection describes a deep-learning-based ensemble method for food-dish classification [23]. Underwater object detection, segmentation, and classification is a challenging research area in computer vision. Studies on marine-life detection using deep learning techniques are summarized in [24], [25]. Also, early diagnosis of skin cancer from skin lesions using the hybrid models CNN-ANN and CNN-RF is presented in [14]. A survey in this direction is given in [26]. Several benchmark public datasets have been developed by the International Skin Imaging Collaboration (ISIC) for the detection and classification of skin cancer, melanoma, and lesions using dermoscopy images [27], [28], [29]. Marine-life classification using transfer learning based on pre-trained CNNs is described in [30]. A video dataset on underwater marine animals of six categories is proposed with baseline results in [31].
Before the deep learning era, various local shape and texture descriptors, including LBP, geometric properties, shape profiles, bags of words, the Scale Invariant Feature Transform (SIFT), colors, and many more, were described in the literature [32], [33], [13], [34], [26], [14], [35], [36]. Most of these conventional feature descriptors were used for recognizing human faces, emotions, hand shapes, palmprints, skin lesions, and other biometric modalities and object categories. A significant number of works on face recognition have computed local textures using the LBP family and others. A classical grayscale, rotation-invariant, and uniform LBP over circular neighborhoods was introduced in [9]. Recently, a learned 2D co-occurrence LBP has been described to attain scale invariance for image recognition [10]. Deep architectures built upon the LBP have been developed for textural feature extraction for face recognition [37]. An empirical model with local binary convolution layers is explored in [38]. Weighted LBP puts more emphasis on regions that are more influenced by aging effects [39]. Multi-scale LBP and SIFT descriptors are used for multi-feature discriminant analysis in [40]. The VGG-Face model is used for extracting features for face recognition in [41]. The journey of LBP over the past two decades is sketched in [33]. With this brief study, this paper explores a combination of deep features using a CNN and texture features using LBP for classifying images of diverse categories.
## 3 Proposed Method: Deep Networks fused with Textures
The proposed DNT is a two-stream deep model (Fig. 1). The first stream emphasizes local features via patches and an LSTM, while the second combines multiple LBPs. Lastly, both paths are fused.
### Convolutional Feature Representation
An input color image with its class label, \(I_{l}\in\mathbb{R}^{h\times w\times 3}\), is fed into a base CNN, such as DenseNet-121. A CNN, say \(N\), extracts a high-level feature map \(F\in\mathbb{R}^{h\times w\times c}\), where \(h\), \(w\), and \(c\) denote the height, width, and number of channels, respectively. Simply, we denote \(N(I_{l},\theta)\to F\) to compute deep features, where image \(I_{l}\) is provided with its class label \(l\), and \(\theta\) represents the learnable parameters of \(N\). The feature map \(F\) from the last convolutional layer of the base \(N\) is extracted to develop the proposed model by including other functional modules.
### Patch Encoding
The region proposals (\(D\)) are generated as non-overlapping uniform (same-size) patches from \(I_{l}\). The resulting number of regions is \(e=(h\times w)/a^{2}\), where \(a\times a\) is the spatial size of a rectangular patch \(d\). A set \(D=\{d_{1},d_{2},\ldots,d_{e}\,|\,I_{l}\}\) of \(e\) patches is pooled from the feature map \(F\), which is spatially upsampled to size \(h^{\prime}\times w^{\prime}\times c\) prior to pooling. The patches represent fine details and local contexts, which are important for subtle discrimination in FGIC. Bilinear pooling is applied to compute features from every patch of size \(h_{1}\times w_{1}\times c\). Next, global average pooling (GAP) is applied to summarize the mean features of \(D\). It downsamples the spatial dimension at the patch level to \(1\times 1\times c\). The resulting feature map is \(F_{1}\). To learn the effectiveness of the patches, a single-layer fully-gated LSTM [42] is applied to learn long-term dependencies via its hidden states. The encoded feature vector is denoted as \(F_{2}\in\mathbb{R}^{v\times 1}\), defined in Eq. 1.
\[F=N(I_{l},\theta);\ \ F_{1}=N\Big{(}D,GAP(F),\theta_{1}\Big{)};\ \ F_{2}=N\Big{(}D,LSTM(F_{1}),\theta_{2}\Big{)} \tag{1}\]
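As a sketch of this patch-encoding path, the PyTorch snippet below upsamples a backbone feature map to 48\(\times\)48, performs GAP inside each of the \(4\times 4\) non-overlapping patches, and feeds the resulting 16 patch vectors to an LSTM, mirroring the implementation details given later in Section 4; the bilinear pooling step is folded into the average pooling here, and all names are illustrative rather than the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    """Sketch of Eq. 1: upsample the CNN feature map, GAP per patch (F1),
    then encode the patch sequence with a single-layer LSTM to obtain F2."""

    def __init__(self, channels=1024, grid=4, hidden=1024, up=48):
        super().__init__()
        self.grid, self.up = grid, up
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)

    def forward(self, fmap):                       # fmap: (B, c, h, w)
        fmap = F.interpolate(fmap, size=(self.up, self.up),
                             mode="bilinear", align_corners=False)
        pooled = F.adaptive_avg_pool2d(fmap, self.grid)  # (B, c, grid, grid)
        seq = pooled.flatten(2).transpose(1, 2)    # (B, e, c), e = grid**2
        _, (h_n, _) = self.lstm(seq)               # last hidden state
        return h_n[-1]                             # F2: (B, hidden)

fmap = torch.randn(2, 1024, 7, 7)                  # e.g. DenseNet-121 output
f2 = PatchEncoder()(fmap)                          # (2, 1024)
```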
### Textures Representation using Local Binary Patterns
The local binary pattern (\(LBP\)) is a monotonic grayscale-invariant local descriptor that computes spatial textures. The histogram of \(LBP\) labels is considered as a feature vector. Here, the uniform value of \(LBP_{P,R}\) is extracted as a texture descriptor at the global image level. \(P\) defines the total number of sampled neighbors, and \(R\) represents the radius of the circular neighborhood.
\[LBP_{P,R}=\sum_{i=0}^{P-1}q(p_{i}-p_{c})\cdot 2^{i};\qquad q(p_{i}-p_{c})=\begin{cases}1,&\text{if }(p_{i}-p_{c})\geq 0\\ 0,&\text{otherwise}\end{cases} \tag{2}\]
Figure 1: Proposed method (DNT) fuses deep features and texture descriptors using local binary patterns (LBP) for fine-grained image classification.
Here, \(p_{c}\) denotes the grayscale value of the center pixel of a local window, \(p_{i}\) represents the value of the corresponding neighbor pixel of \(p_{c}\), and \(q(\cdot)\) is an indicator function. The histograms of multiple neighborhoods are combined to improve the effectiveness of the texture patterns. The descriptor \(F_{3}\) is defined as
\[F_{3}=\mathop{\Big{\|}}\limits_{i,j}LBP_{P_{i},R_{j}}(I);\quad F_{final}=N(F_{2}\,\Big{\|}\,F_{3},\theta_{f});\quad\bar{l}=\textit{softmax}(F_{final});\quad\bar{l}\in\mathbb{R}^{Y\times 1} \tag{3}\]
where \(\Big{\|}\) denotes the concatenation operator. The neighborhood spatial structures with the combinations \(P=8,16\) and \(R=1,2\) are considered, as shown in the top row of Fig. 2. The dimension of the combined image-level texture vector is \(4\times 256=1024\). However, other higher values can also be computed according to Eqs. 2-3.
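The multi-neighborhood descriptor of Eq. 3 can be sketched with scikit-image as below; each LBP map is summarized by a 256-bin histogram so that the four (P, R) settings concatenate to the stated \(4\times 256=1024\)-dimensional vector, though the exact LBP variant and binning used here are assumptions on our part.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_descriptor(gray, settings=((8, 1), (8, 2), (16, 1), (16, 2)), bins=256):
    """Concatenated LBP histograms over several (P, R) neighborhoods (Eq. 3)."""
    feats = []
    for p, r in settings:
        codes = local_binary_pattern(gray, P=p, R=r, method="uniform")
        hist, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
        feats.append(hist)
    return np.concatenate(feats)            # F3: shape (4 * bins,) = (1024,)

gray = np.random.randint(0, 256, (224, 224)).astype(np.uint8)  # stand-in image
f3 = lbp_descriptor(gray)                   # shape (1024,)
```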
#### 3.3.1 Fusion
Finally, \(F_{2}\) and \(F_{3}\) are concatenated to produce a mixed feature vector \(F_{final}\), which is fed to a _softmax_ layer to generate an output probability vector, where each predicted class label \(\bar{l}\) corresponds to an actual label \(l\in Y\) from a set of classes \(Y\).
### Random Region Erasing Image Augmentation
Several image augmentation methods are used, _e.g._, translation, rotation, scaling, random erasing [6], [7], etc. Here, random erasing at the global image level is applied along with general data augmentations. It randomly selects a rectangular region \(I_{E}\) in \(I\) and erases the pixels inside \(I_{E}\). The height and width of \(I_{E}\) are randomly chosen on-the-fly within the scale range [0.2, 0.8], and the pixels are erased with the value 127. Examples are shown in the bottom row of Fig. 2.
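A minimal NumPy version of this random region erasing, with the scale range [0.2, 0.8] and the fill value 127 from the text (function and argument names are ours):

```python
import numpy as np

def random_erase(img, scale=(0.2, 0.8), fill=127, rng=None):
    """Erase a random rectangle I_E of `img` (H, W, C) with a constant value."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    eh = int(h * rng.uniform(*scale))        # erased-region height
    ew = int(w * rng.uniform(*scale))        # erased-region width
    top = rng.integers(0, h - eh + 1)
    left = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[top:top + eh, left:left + ew] = fill
    return out

img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
aug = random_erase(img)
```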
Figure 2: Top-row: LBP of various neighborhoods (P, R): (8,1), (8,2), (16,1), and (16,2). Bottom-row: Random erasing data augmentation on flower and celebrity-face images.
## 4 Experimental Results and Discussion
First, the datasets are summarized, followed by the implementation details. The performance evaluation and comparison with state-of-the-art methods are discussed next.
### Dataset Summary
The proposed DNT is evaluated on eight datasets representing human faces, hand shapes, food dishes, flowers, marine lives, and skin lesions. A well-known age-invariant human face dataset, FG-Net, contains 1002 images of 82 persons with ages from 0 to 69 years [12]. A touch-less hand database, called REgim Sfax Tunisia (REST), is mainly used for palmprint (biometric) recognition using local texture and shape descriptors [13]. A subset of the REST dataset containing at least 5 left-hand images per person (2 images for testing and the remaining 3 or more for training per class) of 179 individuals is used in our work. The datasets comprising 80 Indian dishes [15] and 50 Thai dishes [16] are tested. The remaining 4 datasets, _i.e._, celebrity faces 1, flowers 2, marine animals 3, and skin lesions 4, are collected from the Kaggle repository. The images are randomly divided into train-test sets, detailed in Table 1. Dataset samples are shown in Figs. 3-4. The top-1 accuracy (%) is evaluated for assessment, and the model parameters are reported in millions (M).
Footnote 1: [https://www.kaggle.com/datasets/vishesh1412/celebrity-face-image-dataset](https://www.kaggle.com/datasets/vishesh1412/celebrity-face-image-dataset)
Footnote 2: [https://www.kaggle.com/datasets/alxmanze/flowers-recognition](https://www.kaggle.com/datasets/alxmanze/flowers-recognition)
Footnote 3: [https://www.kaggle.com/datasets/venerzan09/sea-animals-image-dataset](https://www.kaggle.com/datasets/venerzan09/sea-animals-image-dataset)
Footnote 4: [https://www.kaggle.com/datasets/nodoubitrome/skin-cancer9-classessic](https://www.kaggle.com/datasets/nodoubitrome/skin-cancer9-classessic)
### Implementation
The DenseNet-121, DenseNet-201, ResNet-50, and MobileNet-v2 backbone CNNs are used for deep feature extraction and are fine-tuned on the target datasets. Pre-trained ImageNet weights are used to initialize the base CNNs with an input image size of 256\(\times\)256. Random region erasing, rotation (\(\pm\)25 degrees), scaling (\(\pm\)0.25), and cropping to a 224\(\times\)224 image size are applied for data augmentation. The output feature map (_e.g._, 7\(\times\)7\(\times\)c) is upsampled to 48\(\times\)48\(\times\)c for pooling of \(4\times 4\) patches, and the number of output channels (c) varies according to the CNN architecture, _e.g._, \(c=1024\) for DenseNet-121. The uniform patch size is 12\(\times\)12 pixels, generating 16 patches. The feature size of the LSTM's hidden layer is 1024, and it is concatenated with LBP features of the same size. The final feature vector (\(c=2048\)) is fed to the _softmax_ layer for classification. Batch normalization and a drop-out rate of 0.2 are applied to ease over-fitting. The Stochastic Gradient Descent (SGD) optimizer is used to optimize the categorical cross-entropy loss with an initial learning rate of 10\({}^{-3}\), divided by 10 after 100 epochs. The DNT model is trained for 200 epochs with a mini-batch size of 8 using an 8 GB Tesla M10 GPU, and is scripted in Python.
Figure 3: Dataset samples are shown column-wise: human faces of FG-Net and celebrity, hand shape, and ISIC skin lesions.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
Dataset & Class & Train & Test & DenseNet121 & ResNet50 & DenseNet201 & MobileNetv2 \\ \hline
FG-Net & 82 & 827 & 175 & 52.38 & 48.80 & 57.73 & 52.38 \\
Celebrity & 17 & 1190 & 510 & 94.24 & 89.28 & 95.04 & 92.85 \\
Indian Food & 80 & 2400 & 1600 & 72.18 & 68.87 & 73.31 & 69.62 \\
Thai Food & 50 & 14172 & 1600 & 92.31 & 90.18 & 92.50 & 89.93 \\
Flower & 6 & 2972 & 1400 & 96.85 & 95.78 & 96.71 & 96.07 \\
Sea-life & 18 & 5823 & 3636 & 90.36 & 89.10 & 91.05 & 90.09 \\
REST-Left Hand & 179 & 616 & 358 & 79.73 & 81.25 & 80.96 & 77.27 \\
ISIC Skin Cancer & 7 & 1491 & 675 & 78.86 & 77.23 & 79.45 & 73.06 \\ \hline
\multicolumn{4}{|c|}{Model Parameters (Millions)} & 10.2 & 29.0 & 23.5 & 6.1 \\ \hline
\end{tabular}
\end{table}
Table 1: Dataset summary and test results using 3\(\times\)3 patches and 2\(\times\)256 LBP
Figure 4: Dataset samples are shown column-wise: food-dishes of India and Thailand, natural objects representing flower and marine-lives.
### Result Analysis and Performance Comparison
The test results with \(3\times 3\) patches and two LBP structures, _i.e._, (8, 1) and (8, 2), giving a total of 512 texture features, are given in Table 1. The feature size of the LSTM's hidden unit is 512, and after concatenation with the histograms of LBP, the size of the final feature map is 1024. The last row reports the parameters (in millions) of the various models. The accuracy (%) is satisfactory, except for age-invariant face recognition (AIFR) on FG-Net. Many existing methods have experimented on the FG-Net dataset for AIFR by following a leave-one-person-out strategy [41], [43]. In our set-up, the FG-Net test set includes at least one unseen image per person, obtained by splitting the 1002 samples into train-test (83:17) sets. Here, we have tested this challenging dataset for FGIC rather than AIFR. Hence, DNT is not directly comparable with existing methods. However, DNT attains better results (Table 1 and Table 3) than NTCA (48.96%) [44] and other works on AIFR [11].
The REST dataset [13] has primarily been tested for palmprint-based biometric identification using traditional textures and, more recently, CNNs. In this work, we have tested the left-hand images of the REST dataset for classification using full hand shapes and class labels only, _i.e._, without determining any region of interest for the palmprint or any additional pre-processing stage. The proposed DNT has achieved 85.79% top-1 accuracy using DenseNet-201. The precision is 90.0% and the recall is 86.0%. In contrast, the SIFT descriptors in [13] reported an 80.83% palmprint identification success rate with samples from 150 persons. Although our DNT is not directly comparable with this existing work, the gain of our method on this dataset is significant. The ISIC 2017 dataset, comprising 2000 lesion images, was classified into 3 categories, namely melanoma, seborrheic keratosis, and nevus, in [29], with a classification accuracy of 85.7% using the Lesion Indexing Network (LIN). However, our method is not directly comparable with LIN [29]. Our DNT has classified 7 skin diseases, namely actinic keratosis, basal cell carcinoma, melanoma, nevus, pigmented benign keratosis, squamous cell carcinoma, and vascular lesion. The accuracy is 81.10% using the DenseNet-201 backbone. The confusion matrices shown in Fig. 5 represent the performance of DNT using the ResNet-50 and DenseNet-201 base CNNs.
FoodNet presents the classification of 50 Indian food dishes [15] and achieves 73.50% accuracy using an ensemble method. It consists of 100 images per class, and 80% of the images per class are used for training. We have used 80 similar dishes with 50 images per class, following a 60:40 train-test ratio. DNT achieves 80.75% and 74.75% accuracy using DenseNet-201 and ResNet-50, respectively (Table 3). We have also tested on ThaiFood-50 [16], where the reported accuracy is 80.42%; in [45], the accuracy is 83.07% using ResNet-50. In contrast, DNT attains 95.18% using DenseNet-201 and 91.93% using ResNet-50 (Table 3).
The performance on the other datasets is also high. However, to the best of our knowledge, no significant results have been reported on datasets such as sea-life. We have reported results on these datasets in the context of FGIC for further research.
The overall classification performance of various CNNs using \(3\times 3\) patches and \(2\times 256\) LBP on Indian food dishes and celebrity faces is shown in Fig. 6. It is evident that the accuracy improvement is very small after 100 epochs. ResNet-50 and Xception are comparatively heavier models than the DenseNet family in terms of model parameters, whereas MobileNet-v2 is a lightweight model, yet very efficient for FGIC.
Next, experiments on Indian food and flower show an accuracy gain using \(4\times 4\) patches while the other components are unaltered. This test comprises 16 patches, 512 LBP features, and 512 LSTM features. The results imply that more patches improve accuracy (Table 2).
We have also increased the number of the LSTM's hidden states and the number of patches. This test is carried out with the fused features of 1024 textures (LBP) and 1024 LSTM units encoded from 16 patches. The results are reported in Table 3. Clearly, DenseNet-201 performs the best among the four base CNNs, while the other backbones produce satisfactory results.
The significance of the major components of DNT is tested, and the results are given in Table 4. In particular, the benefits of random erasing over general image augmentation, the number of patches, the textures (LBP), the LSTM, and their further increment in the feature space are
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
Dataset & DenseNet-121 & ResNet-50 & DenseNet-201 & MobileNet-v2 \\ \hline
FG-Net & 54.74 & 49.40 & **55.95** & 53.57 \\
Celebrity & 95.43 & 92.06 & **95.83** & 90.87 \\
Indian Food & 78.18 & 74.75 & **80.75** & 76.31 \\
Thai Food & 94.00 & 91.93 & **95.18** & 92.75 \\
Flower & 97.50 & 97.14 & **98.00** & 97.21 \\
Sea-life & 92.50 & 92.51 & **94.51** & 92.34 \\
REST-Left Hand & 83.52 & 83.23 & **85.79** & 80.14 \\
ISIC Skin Cancer & 80.50 & 78.42 & **81.10** & 77.52 \\ \hline
Param (M) & 15.8 & 36.5 & 30.8 & 12.1 \\ \hline
\end{tabular}
\end{table}
Table 3: Performance of DNT using 16 patches and 1024 LBP
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
Dataset & DenseNet-121 & ResNet-50 & DenseNet-201 & MobileNet-v2 \\ \hline
Flower & 97.10 & 94.78 & 96.50 & 95.71 \\
Indian Food & 72.13 & 71.43 & 76.06 & 72.94 \\ \hline
Param (M) & 10.3 & 28.8 & 23.3 & 6.0 \\ \hline
\end{tabular}
\end{table}
Table 2: Performance of DNT using 16 patches and 512 LBP
Figure 5: Confusion matrix on ISIC skin cancer dataset using DNT with \(4\times 4\) patches and \(2\times 1024\) LBP based on ResNet-50 (left) and DenseNet-201 (right) backbone CNNs.
investigated for performance improvement on two datasets using DenseNet-121 (DN-121). The ablation results justify the usefulness of the essential components of the proposed DNT.
## 5 Conclusion
In this paper, we have presented a new approach to image classification by fusing deep features with local textures at the image level. The performance is evaluated using four base CNNs on eight diverse FGIC datasets. We have achieved better results on these datasets compared to the aforementioned existing works. In addition to conventional image augmentation, random region erasing data augmentation improves accuracy. The ablation study reflects the usefulness of the important components. In the future, we plan to develop a new model to improve the performance further and to explore other fusion strategies for wider applicability to diverse datasets.
## Acknowledgement
We thank the reviewers for helping to improve this paper. We thank the repositories for the datasets used in this work. We are thankful to BITS Pilani, Pilani Campus, Rajasthan, India, for providing the necessary infrastructure and a Research Initiation Grant to carry out this work.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
DenseNet-121 (DN-121) base CNN with key modules & Indian Food & Sea-life & Param \\ \hline
DN-121 + common image augmentation & 63.37 & 86.24 & 7.0 \\ \hline
DN-121 + common + random erasing image augmentation & 67.25 & 88.43 & 7.0 \\ \hline
DNT (DN-121) with 9 patches, and LBP (addition) & 71.49 & 89.23 & 10.2 \\ \hline
**DNT** (DN-121) with 16 patches, and without LBP & 74.56 & 90.28 & 15.5 \\ \hline
**DNT** (DN-121) with 16 patches, and LBP (concatenation) & **78.18** & **92.50** & 15.5 \\ \hline
\end{tabular}
\end{table}
Table 4: Ablation study on proposed DNT using DenseNet-121 (DN-121)
Figure 6: Test accuracy of various CNNs using \(3\times 3\) patches and \(2\times 256\) LBP on Indian food-80 and celebrity faces datasets. |
Figure 6: Test accuracy of various CNNs using \(3\times 3\) patches and \(2\times 256\) LBP on Indian food-80 and celebrity faces datasets. |
2306.02108 | Random matrix theory and the loss surfaces of neural networks | Neural network models are one of the most successful approaches to machine
learning, enjoying an enormous amount of development and research over recent
years and finding concrete real-world applications in almost any conceivable
area of science, engineering and modern life in general. The theoretical
understanding of neural networks trails significantly behind their practical
success and the engineering heuristics that have grown up around them. Random
matrix theory provides a rich framework of tools with which aspects of neural
network phenomenology can be explored theoretically. In this thesis, we
establish significant extensions of prior work using random matrix theory to
understand and describe the loss surfaces of large neural networks,
particularly generalising to different architectures. Informed by the
historical applications of random matrix theory in physics and elsewhere, we
establish the presence of local random matrix universality in real neural
networks and then utilise this as a modeling assumption to derive powerful and
novel results about the Hessians of neural network loss surfaces and their
spectra. In addition to these major contributions, we make use of random matrix
models for neural network loss surfaces to shed light on modern neural network
training approaches and even to derive a novel and effective variant of a
popular optimisation algorithm.
Overall, this thesis provides important contributions to cement the place of
random matrix theory in the theoretical study of modern neural networks,
reveals some of the limits of existing approaches and begins the study of an
entirely new role for random matrix theory in the theory of deep learning with
important experimental discoveries and novel theoretical results based on local
random matrix universality. | Nicholas P Baskerville | 2023-06-03T13:16:17Z | http://arxiv.org/abs/2306.02108v1 | # Random matrix theory and the loss surfaces of neural networks
###### Abstract
A dissertation submitted to the University of Bristol in accordance with the requirements for award of the degree of Doctor of Philosophy in the Faculty of Science, School of Mathematics. |
2307.00344 | Sparse-Input Neural Network using Group Concave Regularization | Simultaneous feature selection and non-linear function estimation are
challenging, especially in high-dimensional settings where the number of
variables exceeds the available sample size in modeling. In this article, we
investigate the problem of feature selection in neural networks. Although the
group LASSO has been utilized to select variables for learning with neural
networks, it tends to select unimportant variables into the model to compensate
for its over-shrinkage. To overcome this limitation, we propose a framework of
sparse-input neural networks using group concave regularization for feature
selection in both low-dimensional and high-dimensional settings. The main idea
is to apply a proper concave penalty to the $l_2$ norm of weights from all
outgoing connections of each input node, and thus obtain a neural net that only
uses a small subset of the original variables. In addition, we develop an
effective algorithm based on backward path-wise optimization to yield stable
solution paths, in order to tackle the challenge of complex optimization
landscapes. Our extensive simulation studies and real data examples demonstrate
satisfactory finite sample performances of the proposed estimator, in feature
selection and prediction for modeling continuous, binary, and time-to-event
outcomes. | Bin Luo, Susan Halabi | 2023-07-01T13:47:09Z | http://arxiv.org/abs/2307.00344v1 | # Sparse-Input Neural Network using Group Concave Regularization
###### Abstract
Simultaneous feature selection and non-linear function estimation are challenging, especially in high-dimensional settings where the number of variables exceeds the available sample size in modeling. In this article, we investigate the problem of feature selection in neural networks. Although the group LASSO has been utilized to select variables for learning with neural networks, it tends to select unimportant variables into the model to compensate for its over-shrinkage. To overcome this limitation, we propose a framework of sparse-input neural networks using group concave regularization for feature selection in both low-dimensional and high-dimensional settings. The main idea is to apply a proper concave penalty to the \(l_{2}\) norm of weights from all outgoing connections of each input node, and thus obtain a neural net that only uses a small subset of the original variables. In addition, we develop an effective algorithm based on backward path-wise optimization to yield stable solution paths, in order to tackle the challenge of complex optimization landscapes. Our extensive simulation studies and real data examples demonstrate satisfactory finite sample performances of the proposed estimator, in feature selection and prediction for modeling continuous, binary, and time-to-event outcomes.
**Keywords:** Neural networks, Feature selection, High dimensionality, LASSO, Concave penalty
## 1 Introduction
In the past decade, advancements in molecular, imaging and other laboratory tests have led to a growing interest in high-dimensional data analysis (HDDA). High-dimensional data refers to a dataset that contains a large number of observed variables relative to the small sample size, which presents a significant challenge in building accurate and interpretable models. For example, in bioinformatics, hundreds of thousands of RNA expressions, Genome-Wide Association Study (GWAS) data, and microarray data are used to understand the biology of disease, with only hundreds of patients involved (Visscher et al., 2012; Hertz et al., 2016; Kim and Halabi, 2016; Beltran et al., 2017). To address the curse of dimensionality, feature selection has become a critical step in HDDA. By identifying the most
representative features to characterize the biology of the disease or the outcome, feature selection approaches can increase the model interpretability and improve the generalization of the model.
There are various methods for feature selection, including filter methods (Koller and Sahami, 1996; Guyon and Elisseeff, 2003; Gu et al., 2012), wrapper methods (Kohavi and John, 1997; Inza et al., 2004; Tang et al., 2014), and embedded methods (Tibshirani, 1996; Zou, 2006; Fan and Li, 2001; Zhang et al., 2010). Among them, penalized regression methods have become very popular in HDDA since the introduction of the least absolute shrinkage and selection operator (LASSO) (Tibshirani, 1996). Penalized regression methods can perform simultaneous parameter estimation and feature selection by shrinking some of the parameter coefficients to exact zeros. While LASSO has been widely used to obtain sparse estimates in machine learning and statistics, it tends to select unimportant variables to compensate for the over-shrinkage of relevant variables (Zou, 2006). To address the bias and inconsistent feature selection of LASSO, several methods have been proposed, including the adaptive LASSO (Zou, 2006), the minimax concave penalty (MCP) (Zhang et al., 2010), and the smoothly clipped absolute deviation (SCAD) (Fan and Li, 2001).
However, most of these penalized methods assume linearity in the relationship between the variables and the outcomes, while the actual functional form of the relationship may not be available in many applications. Some additive non-parametric extensions have been proposed to resolve this problem (Lin and Zhang, 2006; Ravikumar et al., 2009; Meier et al., 2009), but their models rely on sums of univariate or low-dimensional functions and may not be able to capture the complex interactions between multiple covariates. Yamada et al. (2014) propose the HSIC-LASSO approach that leverages kernel learning for feature selection while uncovering non-linear feature interactions. However, it suffers from quadratic scaling in computational complexity with respect to the number of observations.
Neural networks are powerful tools for modeling complex relationships in a wide range of applications, from image (Krizhevsky et al., 2017; He et al., 2016) and speech recognition (Graves et al., 2013; Chan et al., 2016) to natural language processing (Young et al., 2018; Devlin et al., 2018) and financial forecasting (Fischer and Krauss, 2018). Their state-of-the-art performance has been achieved through powerful computational resources and the use of large sample sizes. Despite that, high-dimensional data can still lead to overfitting and poor generalization performance for neural networks (Liu et al., 2017). Recently, there have been novel developments in using regularized neural networks for feature selection or HDDA. One line of research focuses on regularized neural networks that employ the group LASSO technique to promote sparsity among input nodes (Liu et al., 2017; Scardapane et al., 2017; Feng and Simon, 2017). These methods consider all outgoing connections from a single input neuron as a group and apply the LASSO penalty to the \(l_{2}\) norm of the weight vector of each group. Other LASSO-regularized neural networks for feature selection can be found in the work of Li et al. (2016) and Lemhadri et al. (2021). However, regularized neural networks incorporating LASSO suffer from a tendency to over-shrink the non-zero weights of relevant variables and to include many false positives in the selected model. The adaptive LASSO was employed to alleviate this problem (Dinh and Ho, 2020), yet their results are limited to continuous outcomes and assume that the conditional mean function is exactly a neural network. The work of Yamada et al. (2020) bypassed the \(l_{1}\) regularization by introducing stochastic gates to the input layer of neural networks. They considered an \(l_{0}\)-like regularization based on a continuous relaxation of the Bernoulli distribution. Their method, however, requires a cutoff value for selecting variables with weak signals, and the stochastic gates are unable to completely exclude the non-selected variables during the model training and prediction stages.
In this paper, we propose a novel framework for sparse-input neural networks using group concave regularization to overcome the limitations of existing feature selection methods. Although concave penalties like MCP and SCAD have been shown to perform well, both theoretically and numerically, for feature selection and prediction, they have not received the same level of attention as LASSO in the context of machine learning. Our proposed framework aims to draw attention to the underutilized potential of concave penalties for feature selection in neural networks by providing a comprehensive approach for simultaneous feature selection and function estimation in both low-dimensional and high-dimensional settings. In particular, our proposed method considers all outgoing connections from a single input neuron as a group and applies a proper concave penalty to the \(l_{2}\) norm of the weights of each group. By shrinking all the weights of certain groups to exact zeros, it obtains a neural net that uses only a small subset of variables. In addition, we develop an effective algorithm based on backward path-wise optimization that yields stable solution paths, tackling the challenge of complex optimization landscapes. Our simulation studies and real data examples demonstrate the satisfactory finite sample performance of the group concave regularization, which outperforms existing methods in terms of feature selection and prediction accuracy for modeling continuous, binary, and time-to-event outcomes.
The rest of this article is organized as follows. In Section 2, we formulate the problem of feature selection for a generic non-parametric model and introduce our proposed method. The implementation of the method, including the composite gradient descent algorithm and the backward path-wise optimization, is presented in Section 3. In Section 4, we conduct extensive simulation studies to demonstrate the performance of the proposed method. The application of the method to real-world datasets is presented in Section 5. Lastly, in Section 6, we discuss the results and their implications.
## 2 Method
### Problem setup
Let \(X\in\mathbb{R}^{d}\) be a \(d\)-dimensional random vector and \(Y\) be a response variable. We assume that the conditional distribution \(P_{Y|X}\) depends on \(X\) only through \(f(X_{S})\) for some function \(f\in\mathcal{F}\) and a subset of variables \(S\subseteq\{1,\cdots,d\}\). We are interested in identifying the true set \(S\) of significant variables and estimating the function \(f\), so that we can predict \(Y\) based on the selected variables \(X_{S}\).
At the population level, we aim to minimize the loss
\[\min_{f\in\mathcal{F},\,S}\mathbb{E}_{X,Y}\,\ell(f(X_{S}),Y)\]
where \(\ell\) is a loss function tailored to a specific problem. In practical settings, the distribution of \((X,Y)\) is often unknown, and instead only an independent and identically distributed (i.i.d.) random sample of size \(n\) is available, consisting of pairs of observations \(\left(X_{i},Y_{i}\right)_{i=1}^{n}\). Additionally, if the number of variables \(d\) is large, an exhaustive search over all possible
subsets \(S\) becomes computationally infeasible. Furthermore, we do not assume any specific form of the unknown function \(f\) and aim to approximate \(f\) nonparametrically using neural networks. Thus, our goal is to develop an efficient method that can simultaneously select a variable subset \(S\) and approximate the solution \(f\) for any given class of functions using a sparse-input neural network.
### Proposed framework
We consider function estimators based on feedforward neural networks. Let \(\mathcal{F}_{n}\) be a class of feedforward neural networks \(f_{\mathbf{w}}:\mathbb{R}^{d}\mapsto\mathbb{R}\) with parameter vector \(\mathbf{w}\). The architecture of a multi-layer perceptron (MLP) can be expressed as a composition of a series of functions
\[f_{\mathbf{w}}(x)=L_{D}\circ\sigma\circ L_{D-1}\circ\sigma\circ\cdots\circ \sigma\circ L_{1}\circ\sigma\circ L_{0}(x),x\in\mathbb{R}^{d},\]
where \(\circ\) denotes function composition and \(\sigma(x)\) is an activation function defined for each component of \(x\). Additionally,
\[L_{i}(x)=\mathbf{W}_{i}x+b_{i},\quad i=0,1,\ldots,D,\]
where \(\mathbf{W}_{i}\in\mathbb{R}^{d_{i+1}\times d_{i}}\) is a weight matrix, \(D\) is the number of hidden layers, \(d_{i}\) is the width, defined as the number of neurons of the \(i\)-th layer, with \(d_{0}=d\), and \(b_{i}\in\mathbb{R}^{d_{i+1}}\) is the bias vector in the \(i\)-th linear transformation \(L_{i}\). Note that the vector \(\mathbf{w}\in\mathbb{R}^{P}\) is the column-vector concatenation of all parameters in \(\{\mathbf{W}_{i},b_{i}:i=0,1,\ldots,D\}\). We define the empirical loss of \(f_{\mathbf{w}}\) as
\[\mathcal{L}_{n}(\mathbf{w})=\frac{1}{n}\sum_{i=1}^{n}\ell(f_{\mathbf{w}}(X_{i} ),Y_{i}).\]
The ideal scenario is to have a sparse-input neural network \(f_{\mathbf{w}}\) that only takes signals from the important variables, meaning that \(\mathbf{W}_{0,j}=\mathbf{0}\) for \(j\notin S\), where \(\mathbf{W}_{0,j}\) denotes the \(j\)th column vector of \(\mathbf{W}_{0}\). In order to minimize the empirical loss \(\mathcal{L}_{n}(\mathbf{w})\) while inducing sparsity in \(\mathbf{W}_{0}\), we propose to train the neural network by minimizing the following group regularized empirical loss
\[\hat{\mathbf{w}}=\operatorname*{argmin}_{\mathbf{w}\in\mathbb{R}^{P}}\left\{ \mathcal{L}_{n}(\mathbf{w})+\sum_{j=1}^{d}\rho_{\lambda}(\|\mathbf{W}_{0,j}\| _{2})+\alpha\|\mathbf{w}\|_{2}^{2}\right\}, \tag{1}\]
where \(\|\cdot\|_{2}\) denotes the Euclidean norm of a vector.
The objective function in Eq. (1) comprises three components:
1. \(\mathcal{L}_{n}(\mathbf{w})\) is the empirical loss function such as the mean squared error loss for regression tasks, the cross-entropy loss for classification tasks, and the negative log partial likelihood for proportional hazards models. Further details can be found in Appendix A.
2. \(\rho_{\lambda}\) is a concave penalty function parameterized by \(\lambda\geq 0\). To simultaneously select variables and learn the neural network, we group the outgoing connections from each single input neuron that corresponds to each variable. The concave penalty function \(\rho_{\lambda}\) is designed to shrink the weight vectors of specific groups to exact zeros, resulting in a neural network that utilizes only a small subset of the original variables.
3. \(\alpha\|\mathbf{w}\|_{2}^{2}\), where \(\alpha>0\), represents the ridge regularization term used to prevent overfitting in neural networks. Note that feature selection, employing \(\rho_{\lambda}\), depends exclusively on the magnitudes of weights in the input layer. However, it is possible to diminish the influence of \(\rho_{\lambda}\) by reducing all weights in the input layer while simultaneously allowing larger weights in other layers, without affecting the network's output. The ridge regularization addresses this issue by promoting smaller, well-balanced weights, thereby improving model stability and mitigating overfitting.
Note that when the number of hidden layers \(D=0\), the function \(f_{\mathbf{w}}\) reduces to a linear function, and the optimization problem in Eq. (1) becomes the framework of elastic net (Zou and Hastie, 2005), SCAD-\(L_{2}\) (Zeng and Xie, 2014), and Mnet (Huang et al., 2016), with the choice of \(\rho_{\lambda}\) to be LASSO, SCAD, and MCP, respectively.
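To make the objective concrete, the sketch below evaluates Eq. (1) for the `SparseInputMLP` above; `rho` stands for any vectorized group penalty (such as those defined in the next subsection). Note that in the actual algorithm the group penalty is handled by a thresholding step rather than back-propagation.

```python
def group_concave_objective(model, X, Y, rho, lam, alpha, loss_fn):
    """Eq. (1): L_n(w) + sum_j rho_lam(||W_{0,j}||_2) + alpha * ||w||_2^2."""
    group_norms = model.input_weight().norm(dim=0)  # ||W_{0,j}||_2, j = 1..d
    penalty = rho(group_norms, lam).sum()
    ridge = alpha * sum((w ** 2).sum() for w in model.parameters())
    return loss_fn(model(X), Y) + penalty + ridge
```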
### Concave regularization
There are several commonly used penalty functions that encourage sparsity in the solution, such as LASSO (Tibshirani, 1996), SCAD (Fan and Li, 2001), and MCP (Zhang et al., 2010). When applied to the \(l_{2}\)-norm of the coefficients associated with each group of variables, these penalty functions give rise to group regularization methods, including group LASSO (GLASSO) (Yuan and Lin, 2006), group SCAD (GSCAD) (Guo et al., 2015), and group MCP (GMCP) (Huang et al., 2012). Specifically, LASSO, SCAD, and MCP are defined as follows.
* **LASSO** \[\rho_{\lambda}(t)=\lambda|t|.\]
* **SCAD** \[\rho_{\lambda}(t)=\begin{cases}\lambda|t|&\text{for }|t|\leq\lambda,\\ -\frac{t^{2}-2a\lambda|t|+\lambda^{2}}{2(a-1)}&\text{for }\lambda<|t|\leq a \lambda,\\ \frac{(a+1)\lambda^{2}}{2}&\text{for }|t|>a\lambda,\end{cases}\] where \(a>2\) is fixed.
* **MCP** \[\rho_{\lambda}(t)=\lambda\int_{0}^{|t|}\left(1-\frac{z}{\lambda a}\right)_{+}dz,\] where \(a>0\) is fixed.
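A sketch of these penalties as vectorized PyTorch functions follows; the default values \(a=3.7\) for SCAD and \(a=3\) for MCP are common choices from the literature, not values prescribed by this paper.

```python
import torch

def lasso(t, lam):
    return lam * t.abs()

def scad(t, lam, a=3.7):
    t = t.abs()
    mid = (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    large = torch.full_like(t, (a + 1) * lam ** 2 / 2)
    return torch.where(t <= lam, lam * t,
                       torch.where(t <= a * lam, mid, large))

def mcp(t, lam, a=3.0):
    t = t.abs()
    # closed form of lam * integral_0^{|t|} (1 - z / (a * lam))_+ dz
    return torch.where(t <= a * lam, lam * t - t ** 2 / (2 * a),
                       torch.full_like(t, a * lam ** 2 / 2))
```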
It has been demonstrated, both theoretically and numerically, that the concave regularization methods of SCAD and MCP exhibit strong performance in terms of feature selection and prediction (Fan and Li, 2001; Zhang et al., 2010). Unlike the convex penalty LASSO, which tends to over-regularize large terms and provide inconsistent feature selection, concave regularization can reduce LASSO's bias and improve model selection accuracy. The rationale behind the concave penalty lies in the behavior of its derivatives. Specifically, SCAD and MCP initially apply the same level of penalization as LASSO, but gradually reduce the penalization rate until it drops to zero when \(|t|>a\lambda\). Given the benefits of concave penalization, we propose using the group concave regularization in our framework for simultaneous feature selection and function estimation.
## 3 Implementation
### Composite gradient descent
Note that the optimization in Eq. (1) is not a convex optimization problem since both the empirical loss function \(\mathcal{L}_{n}(\mathbf{w})\) and the penalty function \(\rho_{\lambda}\) can be non-convex. To obtain a stationary point, we use the composite gradient descent algorithm (Nesterov, 2013). This algorithm is also adopted in Feng and Simon (2017) and Lemhadri et al. (2021) for sparse-input neural networks based on the LASSO regularization.
Denote \(\bar{\mathcal{L}}_{n,\alpha}(\mathbf{w})=\mathcal{L}_{n}(\mathbf{w})+\alpha\|\mathbf{w}\|_{2}^{2}\) as the smooth component of the objective function in Eq. (1). The composite gradient iteration for epoch \(t\) is given by
\[\mathbf{w}^{t+1}=\operatorname*{argmin}_{\mathbf{w}}\left\{\frac{1}{2}\| \mathbf{w}-\tilde{\mathbf{w}}^{t+1}\|_{2}^{2}+\sum_{j=1}^{d}\rho_{\lambda}(\| \mathbf{W}_{0,j}\|_{2})\right\}, \tag{2}\]
where \(\tilde{\mathbf{w}}^{t+1}=\mathbf{w}^{t}-\gamma\nabla\bar{\mathcal{L}}_{n, \alpha}(\mathbf{w}^{t})\) is the gradient update only for the smooth component \(\bar{\mathcal{L}}_{n,\alpha}(\mathbf{w}^{t})\) that can be computed using the standard back-propagation algorithm. Here \(\gamma>0\) is the learning rate for the update and can be set as a fixed value or determined by employing the backtracking line search method, as described in Nesterov (2013). Let \(A_{j}\) represent the index set of \(\mathbf{W}_{0,j}\) within \(\mathbf{w}\). We define \(A\) as the index set that includes all weights in the input layer, given by \(A=\{\bigcup_{j=1}^{d}A_{j}\}\). By solving Eq. (2), we obtain the iteration form \(\mathbf{w}_{A^{c}}^{t+1}=\tilde{\mathbf{w}}_{A^{c}}^{t+1}\) and
\[\mathbf{w}_{A_{j}}^{t+1}=h(\tilde{\mathbf{w}}_{A_{j}}^{t+1},\lambda),\quad \text{for }j=1,\cdots,d. \tag{3}\]
Here, \(A^{c}\) refers to the complement of the set \(A\), and the function \(h\) represents the thresholding operator, which can be determined by the specific penalty \(\rho_{\lambda}\). By taking \(\rho_{\lambda}\) to be the LASSO, MCP, and SCAD penalty, it can be verified that the GLASSO, GSCAD, and GMCP solutions for the iteration in Eq. (3) have the following form:
* GLASSO \[h_{\text{GLASSO}}(z,\lambda)=S_{\text{g}}(z,\lambda).\]
* GSCAD \[h_{\text{GSCAD}}(z,\lambda)=\begin{cases}S_{\text{g}}(z,\lambda),&\text{if } \|z\|_{2}\leq 2\lambda,\\ \frac{a-1}{a-2}S_{\text{g}}(z,\frac{a\lambda}{a-1}),&\text{if }2\lambda<\|z\|_{2} \leq a\lambda,\\ z,&\text{if }\|z\|_{2}>a\lambda.\end{cases}\]
* GMCP \[h_{\text{GMCP}}(z,\lambda)=\begin{cases}\frac{a}{a-1}S_{\text{g}}(z,\lambda),& \text{if }\|z\|_{2}\leq a\lambda,\\ z,&\text{if }\|z\|_{2}>a\lambda,\end{cases}\]
where \(S_{\text{g}}(z;\lambda)\) is the group soft-thresholding operator defined as
\[S_{\text{g}}(z;\lambda)=\left(1-\frac{\lambda}{\|z\|_{2}}\right)_{+}z.\]
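For reference, below is a sketch of these group thresholding operators in PyTorch, with a step size \(\gamma=1\) absorbed into \(\lambda\); the default values of \(a\) are illustrative.

```python
import torch

def soft_threshold_group(z, lam):
    # S_g(z; lam) = (1 - lam / ||z||_2)_+ * z
    norm = z.norm()
    if norm == 0:
        return z
    return torch.clamp(1 - lam / norm, min=0) * z

def h_glasso(z, lam):
    return soft_threshold_group(z, lam)

def h_gscad(z, lam, a=3.7):
    norm = z.norm()
    if norm <= 2 * lam:
        return soft_threshold_group(z, lam)
    if norm <= a * lam:
        return (a - 1) / (a - 2) * soft_threshold_group(z, a * lam / (a - 1))
    return z

def h_gmcp(z, lam, a=3.0):
    if z.norm() <= a * lam:
        return a / (a - 1) * soft_threshold_group(z, lam)
    return z
```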
Therefore, we can efficiently implement the composite gradient descent by integrating an additional thresholding operation into the input layer. This operation follows the gradient
descent step using the smooth component \(\bar{\mathcal{L}}_{n,\alpha}(\mathbf{w})\). The computation for epoch \(t\) can be summarized as follows:
1. compute gradient of \(\bar{\mathcal{L}}_{n,\alpha}(\mathbf{w}^{t})\) using back-propagation,
2. update \(\tilde{\mathbf{w}}^{t+1}\leftarrow\mathbf{w}^{t}-\gamma\nabla\bar{\mathcal{L} }_{n,\alpha}(\mathbf{w}^{t})\),
3. update \(\mathbf{w}^{t+1}_{A^{c}}\leftarrow\tilde{\mathbf{w}}^{t+1}_{A^{c}}\) and \(\mathbf{w}^{t+1}_{A_{j}}\gets h(\tilde{\mathbf{w}}^{t+1}_{A_{j}},\lambda)\), for \(j=1,\cdots,d\).
The final index set of the selected variables is \(\hat{S}=\{j:\hat{\mathbf{w}}_{A_{j}}\neq\mathbf{0}\}\).
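A minimal sketch of one such epoch, reusing the `SparseInputMLP` and thresholding operators sketched above, might look as follows.

```python
def composite_gradient_epoch(model, X, Y, h, lam, gamma, alpha, loss_fn):
    """One epoch: gradient step on the smooth part, then group thresholding."""
    model.zero_grad()
    smooth = loss_fn(model(X), Y) + alpha * sum(
        (w ** 2).sum() for w in model.parameters()
    )
    smooth.backward()
    with torch.no_grad():
        for w in model.parameters():
            w -= gamma * w.grad              # w_tilde = w - gamma * grad
        W0 = model.input_weight()
        for j in range(W0.shape[1]):         # threshold each group W_{0,j}
            W0[:, j] = h(W0[:, j], lam)

def selected_variables(model):
    norms = model.input_weight().norm(dim=0)
    return [j for j in range(len(norms)) if norms[j] > 0]
```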
### Backward path-wise optimization
We are interested in learning neural networks not only for a specific value of \(\lambda\), but also for a range of \(\lambda\) where the networks vary by the number of included variables. Specifically, we consider a range of \(\lambda\) from \(\lambda_{min}\), where the networks include all or an excessively large number of variables, up to \(\lambda_{max}\), where all variables are excluded and \(\mathbf{W}_{0}\) becomes a zero matrix. Since the objective function is not convex and has multiple local minima, the solution of Eq. (1) with random initialization may not vary continuously for \(\lambda\in[\lambda_{min},\lambda_{max}]\), resulting in a highly unstable path of solutions that are regularized by \(\lambda\).
To address this issue, we consider a path-wise optimization strategy by varying the regularization parameter along a path. In this approach, we use the solution at a particular value of \(\lambda\) as a warm start for the next problem. Regularized linear regression methods (Friedman et al., 2007, 2010; Breheny and Huang, 2011) typically adopt a forward path-wise optimization, starting from a null model with all variables excluded at \(\lambda_{max}\) and working forward with decreasing \(\lambda\)s. However, our numerical studies for sparse-input neural networks showed that starting from a sparse solution as an initial model does not produce a larger model along the path until jumping to the full model at a sufficiently small \(\lambda\). To tackle this problem, we implement a backward path-wise optimization approach, starting from a dense model at the minimum value \(\lambda_{min}\) and solving toward sparse models up to \(\lambda_{max}\) with all variables excluded from the network. This dense-to-sparse warm start approach is also employed in Lemhadri et al. (2021) using LASSO regularization.
To further illustrate the importance of using backward path-wise optimization in regularized neural networks, we investigate variable selection and function estimation of a regression model \(Y=f(X)+\epsilon\), where \(f(X)=\log(|X_{1}|+0.1)+X_{1}X_{2}+X_{2}+\exp(X_{3}+X_{4})\) with 4 informative and 16 nuisance variables, and each \(X_{i}\) and \(\epsilon\) follow the standard normal distribution. More details of the simulations are presented in Section 4. Figure 1 shows the solution paths of GMCP and GLASSO based on different types of optimization. It is observed that non-pathwise optimization leads to fluctuations or variations in the solution path, whereas forward path-wise optimization tends to remain in the same sparse model until transitioning to a full model with a sufficiently small \(\lambda\). In contrast, backward path-wise optimization using GMCP and GLASSO produces relatively smooth solution paths. It should be noted that GLASSO has a tendency to over-shrink the weight vectors of informative variables and include more variables in the model. In contrast, GMCP is designed to prevent over-shrinkage and offers a smooth transition from the full model to the null model.
In addition to providing stable and smooth solution paths, backward path-wise optimization is advantageous computationally. In particular:
* The consecutive estimates of weights in the path are close, which reduces the rounds of gradient descent needed for each iteration. Therefore, the bulk of the computational cost occurs at \(\lambda_{min}\), and a lower number of iterations for the remaining \(\lambda\)s results in low computational costs.
* We observe that the excluded variables from previous solutions are rarely included in the following solutions. By pruning the inputs of the neural network along the solution path, further reduction in computation complexity can be achieved as the model becomes sparse. Since the computational cost scales with the number of input features, this approach can significantly speed up computation, particularly for high-dimensional data.
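Building on the epoch sketch above, a backward path over a \(\lambda\) grid with warm starts could be organized as follows; the epoch counts are illustrative, and pruning of excluded inputs along the path is omitted for brevity.

```python
def backward_pathwise(model, X, Y, h, lambdas, gamma, alpha, loss_fn,
                      epochs_first=5000, epochs_rest=200):
    """Warm-started solution path from dense (lambda_min) to sparse (lambda_max)."""
    path = {}
    for k, lam in enumerate(sorted(lambdas)):        # increasing lambda
        epochs = epochs_first if k == 0 else epochs_rest
        for _ in range(epochs):
            composite_gradient_epoch(model, X, Y, h, lam, gamma, alpha, loss_fn)
        path[lam] = selected_variables(model)        # warm start carries over
        if not path[lam]:                            # null model reached
            break
    return path
```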
### Tuning parameter selection
Two tuning parameters are required in our proposed framework: the group penalty coefficient \(\lambda\) and the ridge penalty coefficient \(\alpha\). The former controls the number of selected variables and yields sparser models for larger values of \(\lambda\), while the latter imposes a penalty on the size of the network weights to prevent overfitting.
In all numerical studies presented in this paper, we adopted a 20% holdout validation set from the training data. The model was trained using the remaining data, and the optimal values for \(\lambda\) and \(\alpha\) were selected from a fine grid of values based on their performance on the validation set.
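A sketch of this tuning procedure is shown below; `train_fn` and `score_fn` are placeholders for model fitting and the validation metric, and in practice the \(\lambda\) grid would be warm-started within each \(\alpha\) as described above.

```python
import itertools
import torch

def tune(make_model, train_fn, X, Y, lambdas, alphas, score_fn, seed=0):
    """Pick (lambda, alpha) by performance on a 20% holdout validation set."""
    torch.manual_seed(seed)
    n = len(X)
    idx = torch.randperm(n)
    val, tr = idx[: n // 5], idx[n // 5:]
    best_score, best_params = -float("inf"), None
    for lam, alpha in itertools.product(lambdas, alphas):
        model = train_fn(make_model(), X[tr], Y[tr], lam, alpha)
        score = score_fn(model, X[val], Y[val])
        if score > best_score:
            best_score, best_params = score, (lam, alpha)
    return best_params
```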
Python code and examples for the proposed group concave regularized neural networks are available at [https://github.com/r08in/GCRNN](https://github.com/r08in/GCRNN).
## 4 Simulation Studies
We assess the performance of the proposed regularized neural networks in feature selection and prediction through several simulation settings. The data are generated through the following function:
\[f(X)=\log(|X_{1}|+0.1)+X_{1}X_{2}+X_{2}+\exp(X_{3}+X_{4}),\]
where each component of the covariate vector \(X=(X_{1},\cdots,X_{d})^{T}\in\mathbb{R}^{d}\) is generated from an independent standard normal distribution. Here \(d>4\) and the function \(f(X)\) is sparse in that only the first four variables are relevant to the outcome. We generate \(n\) i.i.d. random samples with continuous outcomes, binary outcomes, and time-to-event outcomes in the following three examples, respectively.
**Example 1** (Regression Model): _The continuous response \(Y\) is generated from a standard regression model with an additive error as follows_
\[Y=f(X)+\epsilon,\]
_where \(\epsilon\) follows a standard normal distribution._
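A sketch of this data-generating process in NumPy is shown below; the function names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(X):
    # f(X) = log(|X_1| + 0.1) + X_1 * X_2 + X_2 + exp(X_3 + X_4)
    return (np.log(np.abs(X[:, 0]) + 0.1) + X[:, 0] * X[:, 1]
            + X[:, 1] + np.exp(X[:, 2] + X[:, 3]))

def make_regression(n=300, d=20):
    X = rng.standard_normal((n, d))
    return X, f(X) + rng.standard_normal(n)   # Y = f(X) + eps, eps ~ N(0, 1)
```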
Figure 1: **Solution path of \(l_{2}\) norm of the weight vector associated with each input node \(\|\mathbf{W}_{0j}\|_{2}\). Top left: Non-pathwise optimization using GMCP. All the neural network weights are initialized by drawing from \(N(0,0.1)\) for each \(\lambda\). Top right: forward path-wise optimization using GMCP. It starts from the null model and computes the solution with decreasing \(\lambda\). Random initialization is used before the selection of the first set of variables. Bottom left: backward path-wise optimization using GMCP. Bottom right: backward path-wise optimization using GLASSO.**
**Example 2**: **(Classification Model)** _The binary response \(Y\in\{0,1\}\) is generated from a Bernoulli distribution with the following conditional probability_
\[P(Y=1|X)=\frac{1}{1+\exp{(-f(X))}}.\]
**Example 3**: **(Proportional Hazards Model)**_The survival time \(T\) follows the proportional hazards model with a hazard function of the form_
\[h(t|X)=h_{0}(t)\exp{(f(X))}, \tag{4}\]
_where \(h_{0}(t)\) is the baseline hazard function. Thus, \(T=H_{0}^{-1}\left(-\log(U)\exp{(-f(X))}\right)\), where \(U\) is a uniform random variable on \([0,1]\), and \(H_{0}\) is the baseline cumulative hazard function defined as \(H_{0}(t)=\int_{0}^{t}h_{0}(u)du\). We considered a Weibull form for \(H_{0}\), with the scale parameter \(=2\) and the shape parameter \(=2\). Among the \(n\) samples, \(\mathcal{C}\times n\) of them are randomly chosen as censored observations with the censoring indicator \(\delta_{i}=0\), and otherwise \(\delta_{i}=1\) for event observations. The censoring rate is \(\mathcal{C}=0\), \(0.2\), or \(0.4\) in our simulation. We define the observed time_
\[Y_{i}=\begin{cases}T_{i}&\text{if }\delta_{i}=1,\\ C_{i}&\text{if }\delta_{i}=0,\end{cases}\]
_where censoring time \(C_{i}\) is drawn from a uniform distribution \((0,T_{i})\)._
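A sketch of generating such survival data, reusing `f` and `rng` from the regression sketch above, is given below; it assumes the common Weibull parametrization \(H_{0}(t)=(t/\text{scale})^{\text{shape}}\), which may differ from the exact form used in the paper.

```python
def make_survival(n=300, d=20, censor_rate=0.2, scale=2.0, shape=2.0):
    X = rng.standard_normal((n, d))
    U = rng.uniform(size=n)
    # T = H_0^{-1}(-log(U) * exp(-f(X))), with H_0(t) = (t / scale)^shape
    T = scale * (-np.log(U) * np.exp(-f(X))) ** (1.0 / shape)
    delta = np.ones(n, dtype=int)
    cens = rng.choice(n, size=int(censor_rate * n), replace=False)
    delta[cens] = 0                            # censored observations
    Y = T.copy()
    Y[cens] = rng.uniform(0, T[cens])          # C_i ~ Uniform(0, T_i)
    return X, Y, delta
```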
For each example, we consider the low and high dimensional settings in the following scenarios:
1. Low dimension (LD): \(d=20\) with \(n=300\) or \(500\).
2. High dimension (HD): \(d=1000\) and \(n=500\).
We perform \(200\) simulations for each scenario. The performance of the trained model in prediction and feature selection is evaluated on \(n\) independently generated random samples using the following measures:
1. Prediction score, which is defined as the \(R^{2}\) score, classification accuracy, and C-index for the regression, classification, and proportional hazards model, respectively.
2. Model size (MS): the average number of selected covariates.
3. False positive rate (FPR): the percentage of selected but unimportant covariates: \[FPR=\frac{|\hat{S}\bigcap S^{c}|}{|S^{c}|}\times 100\%.\]
4. False negative rate (FNR): the percentage of non-selected but important covariates: \[FNR=\frac{|\hat{S}^{c}\bigcap S|}{|S|}\times 100\%.\]
Recall that \(S\) represents the true index set of important variables and \(\hat{S}=\{j:\|\mathbf{W}_{0,j}\|_{2}\neq 0\}\) denotes the index set of selected variables.
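These selection metrics can be computed directly from the index sets, as in the small helper below.

```python
def selection_metrics(S_hat, S, d):
    """Model size, false positive rate (%), and false negative rate (%)."""
    S_hat, S = set(S_hat), set(S)
    S_c = set(range(d)) - S                    # unimportant variables
    ms = len(S_hat)
    fpr = 100.0 * len(S_hat & S_c) / len(S_c)
    fnr = 100.0 * len(S - S_hat) / len(S)
    return ms, fpr, fnr
```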
In our numerical studies, we consider the concave regularization GMCP and GSCAD for our proposed framework. We name the method of regularized neural networks using GLASSO, GMCP, and GSCAD as GLASSONet, GMCPNet, and GSCADNet, respectively. We compare the proposed group concave regularized estimator GMCPNet and GSCADNet with GLASSONet, neural network (NN) without feature selection (\(\lambda=0\)), random (survival) forest (RF), and the STG method proposed in Yamada et al. (2020). We also include the oracle version of NN and RF (Oracle-NN and Oracle-RF) as benchmarks, where true relevant variables are known in advance and used directly in the model fitting process. See Appendix D for the implementation details.
### Results
Table 1 presents a summary of the feature selection performance of the four approaches, namely STG, GLASSONet, GMCPNet, and GSCADNet, across all simulation scenarios. We exclude the results of the STG method for Example 3 as it either selects all variables or none of them for the survival outcome. For both LD and HD settings, GMCPNet and GSCADNet consistently outperform STG and GLASSONet in terms of feature selection. These models exhibit superior performance, achieving model sizes that closely match the true model, along with low false positive rates (FPR) and false negative rates (FNR) in most scenarios. While STG performs well in certain LD settings, it tends to over-select variables in HD scenarios with a large variability in the model size. On the other hand, GLASSONet is prone to selecting more variables, leading to larger model sizes in both LD and HD settings, which aligns with the inherent nature of the LASSO penalty.
Figure 2 displays the distribution of testing prediction scores for the regression, classification, and proportional hazards models (PHM) with a censoring rate of \(\mathcal{C}=0.2\). The complete results of the PHM can be found in Appendix B. GMCPNet and GSCADNet demonstrate comparable performance in both LD and HD settings, achieving similar results to the Oracle-NN and outperforming NN, RF, and even Oracle-RF in most scenarios. STG performs similarly to Oracle-NN in the LD setting of the regression model, but its performance deteriorates in the HD setting and under the other models. Conversely, while GLASSONet outperforms or is comparable to the Oracle-RF method in the LD settings, it suffers from overfitting in the HD settings by including a large number of false positives in the final model.
It is worth pointing out that the Oracle-NN outperforms the Oracle-RF in every scenario, indicating that neural network-based methods can serve as a viable alternative to tree-based methods when the sample size is sufficiently large relative to the number of predictors. Additionally, NN without feature selection performs the worst across all the simulation scenarios, highlighting the importance of feature selection, especially in the high-dimensional space.
Overall, the simulation results demonstrate the superior performance of the concave penalty in terms of feature selection and prediction. The proposed GMCPNet and GSCADNet methods exhibit remarkable capabilities in selecting important variables with low FPR and low FNR, while achieving accurate predictions across various models. These methods show promise for tackling the challenges of feature selection and prediction in high-dimensional data.
## 5 Real Data Example
### Survival Analysis on CALGB-90401 dataset
We utilize the data from the CALGB-90401 study, a double-blinded phase III clinical trial that compares docetaxel and prednisone with or without bevacizumab in men with metastatic castration-resistant prostate cancer (mCRPC) to illustrate the performance of our proposed method. The CALGB-90401 data consists of 498,801 single-nucleotide polymorphisms (SNPs) that are processed from blood samples from patients. We assume a dominant model for SNPs and thus each of the SNPs is considered as a binary variable. Since most SNPs are irrelevant for predicting patient survival, we only consider 181 SNPs that are associated with DNA damage-repair genes, and 444 prioritized SNPs based on an updated literature search (Mateo et al., 2015; Wyatt et al., 2016; Beltran et al., 2011;
\begin{table}
\begin{tabular}{|l|l|l l|l l|l l|} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Method} & \multicolumn{2}{l|}{\(n=300\), \(d=20\)} & \multicolumn{2}{l|}{\(n=500\), \(d=20\)} & \multicolumn{2}{l|}{\(n=500\), \(d=1000\)} \\ & & FPR, FNR & MS (SD) & FPR, FNR & MS (SD) & FPR, FNR & MS (SD) \\ \hline \multirow{4}{*}{Regression} & STG & 7.8, 5.4 & 5.0 (2.0) & 7.2, 2.1 & 5.1 (1.7) & 1.6, 12.1 & 19.2 (28.0) \\ & GLASSONet & 86.7, 4.4 & 17.7 (4.7) & 96.0, 0.6 & 19.3 (2.2) & 24.3, 29.2 & 245.0 (98.7) \\ & GMCPNet & 2.2, 4.5 & 4.2 (1.0) & 2.1, 4.2 & 4.2 (1.0) & 0.0, 5.8 & 4.1 (0.9) \\ & GSCADNet & 2.4, 5.0 & 4.2 (1.1) & 2.0, 3.2 & 4.2 (0.9) & 0.0, 7.1 & 4.1 (1.0) \\ \hline \multirow{4}{*}{Classification} & STG & 25.3, 16.5 & 7.4 (6.9) & 10.1, 11.0 & 5.2 (4.8) & 3.8, 15.6 & 40.9 (183.4) \\ & GLASSONet & 89.2, 1.0 & 18.2 (2.8) & 94.7, 0.2 & 19.1 (2.0) & 16.3, 21.5 & 165.4 (92.9) \\ & GMCPNet & 14.4, 3.9 & 6.2 (3.6) & 9.3, 0.8 & 5.5 (2.6) & 0.3, 16.2 & 6.5 (4.2) \\ & GSCADNet & 11.6, 5.8 & 5.6 (2.9) & 7.0, 1.0 & 5.1 (1.9) & 0.3, 16.8 & 6.8 (5.9) \\ \hline \multirow{4}{*}{Survival (\(\mathcal{C}=0\))} & GLASSONet & 97.2, 0.0 & 19.5 (1.0) & 99.2, 0.0 & 19.9 (0.5) & 18.2, 20.0 & 184.8 (56.2) \\ & GMCPNet & 1.6, 0.4 & 4.2 (0.6) & 0.8, 0.0 & 4.1 (0.4) & 0.0, 1.5 & 4.1 (0.5) \\ & GSCADNet & 1.9, 0.2 & 4.3 (0.6) & 1.2, 0.0 & 4.2 (0.5) & 0.0, 1.6 & 4.1 (0.7) \\ \hline \multirow{4}{*}{Survival (\(\mathcal{C}=0.2\))} & GLASSONet & 98.0, 0.1 & 19.7 (0.8) & 99.6, 0.0 & 19.9 (0.3) & 16.8, 18.0 & 170.6 (49.0) \\ & GMCPNet & 1.9, 0.4 & 4.3 (0.9) & 1.7, 0.0 & 4.3 (1.0) & 0.0, 2.6 & 4.2 (0.9) \\ & GSCADNet & 1.8, 0.2 & 4.3 (0.9) & 1.7, 0.1 & 4.3 (0.8) & 0.0, 3.5 & 4.1 (0.7) \\ \hline \multirow{4}{*}{Survival (\(\mathcal{C}=0.4\))} & GLASSONet & 95.0, 0.0 & 19.2 (1.7) & 98.8, 0.0 & 19.8 (0.5) & 15.2, 19.9 & 154.6 (48.4) \\ & GMCPNet & 5.8, 8.1 & 4.6 (1.5) & 1.2, 0.1 & 4.2 (0.5) & 0.0, 4.2 & 4.1 (1.0) \\ \cline{1-1} & GSCADNet & 4.8, 7.5 & 4.5 (1.3) & 1.7, 0.0 & 4.3 (0.7) & 0.0, 4.9 & 4.2 (1.0) \\ \hline \end{tabular}
\end{table}
Table 1: **Feature selection results of STG, GLASSONet, GMCPNet, and GSCADNet under the regression, classification, and proportional hazards models.** The false positives rate (FPR %), false negatives rate (FNR %), and model size (MS) with standard deviation (SD) in parentheses are displayed.
Figure 2: **Top row**: \(R^{2}\) score of the proposed methods for the regression model outlined in Example 1. **Middle row**: Accuracy of the proposed methods for the classification model outlined in Example 2. **Bottom row**: C-Index of the proposed methods for the survival model outlined in Example 3. The dashed lines represent the median score of the Oracle-NN, used as a benchmark for comparison.
Mosquera et al., 2013; Robinson et al., 2015; Abida et al., 2019; De Laere et al., 2017). We also include the eight clinical variables that have been identified as prognostic markers of overall survival in patients with mCRPC (Halabi et al., 2014): opioid analgesic use (PAIN), ECOG performance status, albumin (ALB), disease site (defined as lymph node only, bone metastases with no visceral involvement, or any visceral metastases), LDH greater than the upper limit of normal (LDH.High), hemoglobin (HGB), PSA, and alkaline phosphatase (ALKPHOS). The final dataset has \(d=635\) variables, \(n=631\) patients, and a censoring rate of \(\mathcal{C}=6.8\%\).
We consider the proportional hazard model in the form of Eq. (4) for our proposed methods to identify clinical variables or SNPs that can predict the primary outcome of overall survival in these patients. To evaluate the feature selection and prediction performance of the methods, we randomly split the dataset 100 times into training sets (n=526) and testing sets (n=105) using a 5:1 allocation ratio. We apply the methods to each of the training sets and calculate the time-dependent area under the receiver operating characteristic curve (tAUC) on the corresponding testing sets. The tAUC assesses the discriminative ability of the predicted model and is computed using the Uno method (Uno et al., 2007). The results of the 100 random splits are presented in Figure 3. Our proposed method, GSCADNet, outperforms the others in survival prediction (left panel). It is worth noting that the NN method, which lacks feature selection, tends to overfit in high-dimensional data and performs poorly. Although these three regularized methods of sparse-input neural networks perform similarly in survival prediction, GLASSONet has a tendency to over-select variables and the proposed GMCPNet and GSCADNet select a relatively smaller set of variables without compromising prediction performance (middle panel). The right panel of Figure 3 demonstrates that GSCADNet successfully selects most of the significant clinical variables and detects some of the important SNPs in predicting overall survival.
Figure 3: **Left**: Boxplots of tAUC from the testing set over 100 random splits. **Middle**: the number of selected variables for GLASSONet, GMCPNet, and GSCADNet. **Right**: Variables selected by GSCADNet with selection proportion \(\geq 10\%\) over 100 random splits.
### Classification on MNIST Dataset
We aim to visualize the selection of variables by considering the classification problem on the MNIST dataset. The MNIST dataset is a well-known benchmark dataset in computer vision, consisting of grayscale images of handwritten digits from 0 to 9. In this study, we focus on the binary classification problem of distinguishing between 7s and 8s in the MNIST dataset. We evaluate our proposed methods GMCPNet and GSCADNet, along with existing methods GLASSONet, STG, NN, and RF, based on their feature selection and classification accuracy.
The MNIST dataset consists of grayscale images with \(28\times 28\) pixels, which gives 784 variables. We randomly select 250 pictures each of 7s and 8s from the MNIST dataset to form a high-dimensional training dataset with \(d=784\) and \(n=500\). Note that the class labels depend only on the pixels in the central area of the images, and thus a good method for feature selection should identify the relevant pixels and classify the images of 7s and 8s. We also corrupt the images with i.i.d. random noise from a standard normal distribution so that the input features are not sparse. The trained models are evaluated on a testing dataset with 2002 images. We repeat the process of random sampling and model fitting 100 times, and the feature (pixel) selection and classification results are shown in Figure 4. We observe that GLASSONet, GMCPNet, and GSCADNet all achieve median accuracies greater than 91% and outperform the other methods. While the heatmaps of feature selection show that GLASSONet, GMCPNet, and GSCADNet consistently select relevant pixels with high frequencies, GLASSONet tends to over-select variables, whereas GMCPNet and GSCADNet choose irrelevant pixels at much lower frequencies (indicated by dark red colors).
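A sketch of constructing this noisy MNIST subset (using `torchvision`; function and variable names are our own) is shown below.

```python
import torch
from torchvision import datasets

def make_noisy_78(n_per_class=250, seed=0):
    """Sample noisy 7s and 8s from MNIST, flattened to d = 784 inputs."""
    g = torch.Generator().manual_seed(seed)
    mnist = datasets.MNIST("data", train=True, download=True)
    X = mnist.data.reshape(-1, 784).float() / 255.0
    y = mnist.targets
    parts = []
    for label in (7, 8):
        idx = (y == label).nonzero(as_tuple=True)[0]
        parts.append(idx[torch.randperm(len(idx), generator=g)[:n_per_class]])
    idx = torch.cat(parts)
    X_noisy = X[idx] + torch.randn(len(idx), 784, generator=g)  # i.i.d. N(0,1)
    labels = (y[idx] == 8).float()                              # 1 = digit 8
    return X_noisy, labels
```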
## 6 Discussion
Among the plethora of feature selection methods, penalized regression has gained significant popularity. However, many of these methods rely on the assumption and application of linear theory, which may not capture the complex relationships between covariates and the outcome of interest. In biomedical research, for instance, researchers often normalize data and employ penalized techniques under a linear model for feature selection. However, relying solely on data transformation risks overlooking intricate biological relationships and fails to address the dynamic nature of on-treatment biomarkers. Moreover, advancements in molecular and imaging technologies have introduced challenges in understanding the non-linear relationships between high-dimensional biomarkers and clinical outcomes. Novel approaches are urgently needed to tackle these complexities, leading to an improved understanding of non-linear relationships and optimizing patient treatment and care.
In this paper, we have proposed a novel framework that utilizes group concave regularization for feature selection and function estimation in complex modeling, specifically designed for sparse-input neural networks. Unlike the convex penalty LASSO, the concave regularization methods such as MCP and SCAD gradually reduce the penalization rate for large terms, preventing over-shrinkage and improving model selection accuracy. Our optimization algorithm, based on the composite gradient descent, is simple to implement, requiring only an additional thresholding operation after the regular gradient descent step on the smooth component. Furthermore, we incorporate backward path-wise optimization to efficiently navigate the optimization landscape across a fine grid of tuning parameters,
Figure 4: **Comparing feature selection and classification performance by STG, GLASSONet, GMCPNet and GSCADNet. Top left**: the image that takes the average of all the images in the training set and shows relevant pixels in grayscale. **Bottom right**: testing accuracy for classification of 7s and 8s in the MNIST dataset using a training dataset with \(d=784\) and \(n=500\). **Other panels**: heatmaps depicting the selection frequencies of each pixel across 100 repetitions for each method. Lighter colors indicate higher selection frequencies, with white the highest and darker colors the lowest.
generating a smooth solution path from dense to sparse models. This path-wise optimization approach improves stability and computational efficiency, potentially enhancing the applicability of our framework for sparse-input neural networks.
The runtime of our proposed method over a solution path of \(\lambda\)s (with a fixed \(\alpha\)) can be comparable to or even shorter than training a single model with a fixed \(\lambda\), such as the NN method without feature selection (\(\lambda=0\)). To illustrate this, we examine the algorithm complexity of the NN method, which can be approximated as \(\mathcal{O}(ndT)\), where \(T\) denotes the number of epochs for learning the neural network. In contrast, training our proposed method over a solution path of \(m\)\(\lambda\)s has a complexity of \(\mathcal{O}(n\bar{d}T^{\prime}m)\), where \(\bar{d}\) represents the averaged number of inputs along the solution path with dimension pruning, and \(T^{\prime}\) is the number of epochs for each \(\lambda\) in the path. In our simulation with the HD scenario (\(d=1000\)), we set \(T=5000\), \(T^{\prime}=200\), and \(m=50\). Assuming the number of inputs decreases equally along the solution path from the full model to the null model, we have \(\bar{d}=d/2=500\). Thus, \(ndT=n\bar{d}T^{\prime}m\) indicates that solving for an entire path of our proposed method requires a similar computation as training a single model. In real applications, especially in high-dimensional scenarios, the dimensionality usually drops quickly along the solution path. Therefore, \(\bar{d}\) can be much smaller than \(d/2\), and thus solving for a whole solution path can be more computationally efficient. It is worth pointing out that we set \(T^{\prime}\) to be small for the first parameter \(\lambda_{\min}\) as well in the HD setting, to avoid overfitting of an initial dense model.
In our numerical studies, parameter tuning is limited to \(\lambda\) and \(\alpha\). However, in real-world applications, it may be necessary to tune additional hyperparameters, such as the learning rate, the number of layers, and the number of nodes in each layer. The computational cost associated with tuning these parameters can be reduced by leveraging parallel computing techniques. Furthermore, when the sample size is moderate and the important variables are sparse, we have observed that a two- or three-layer neural network with a modest number of nodes per layer (e.g., 5 or 10 nodes per layer) is often sufficient for a wide range of datasets.
One limitation of the proposed method arises in ultra-high dimensional scenarios where the number of variables reaches hundreds of millions. Directly applying the proposed sparse-input neural networks in such cases can lead to an exceedingly complex optimization landscape, making it computationally infeasible. To mitigate this limitation, one suggestion is to employ a pre-screening method to reduce the dimensionality to a more manageable size prior to applying the proposed approach.
Another limitation pertains to the proposed group regularized method, which is primarily focused on individual feature selection. This limitation becomes particularly relevant when dealing with covariates exhibiting grouping structures, such as a group of indicator variables representing a multilevel categorical covariate, or scientifically meaningful groups based on prior knowledge. A potential future research direction could involve redefining the groups within the proposed framework. This could be achieved by considering all outgoing connections from a group of input neurons as a single group, enabling group selection and accommodating the presence of grouping structures.
In conclusion, our study exhibits the advantages of employing group concave regularization for sparse-input neural networks. The findings highlight its efficacy in consistently selecting relevant variables and accurately modeling complex non-linear relationships between
covariates and outcomes, across both low and high-dimensional settings. The proposed approach holds the promising potential to enhance modeling strategies and find wide-ranging applications, particularly in diseases characterized by non-linear biomarkers, such as oncology and infectious diseases.
## Acknowledgments

This research was supported in part by the National Institutes of Health Grants R01CA256157, R01CA249279, 1R21CA263950-01A1, the United States Army Medical Research Materiel Command grant Award Number HT9425-23-1-0393, and the Prostate Cancer Foundation Challenge Award.
|
2303.14836 | Illuminati: Towards Explaining Graph Neural Networks for Cybersecurity
Analysis | Graph neural networks (GNNs) have been utilized to create multi-layer graph
models for a number of cybersecurity applications from fraud detection to
software vulnerability analysis. Unfortunately, like traditional neural
networks, GNNs also suffer from a lack of transparency, that is, it is
challenging to interpret the model predictions. Prior works focused on specific
factor explanations for a GNN model. In this work, we have designed and
implemented Illuminati, a comprehensive and accurate explanation framework for
cybersecurity applications using GNN models. Given a graph and a pre-trained
GNN model, Illuminati is able to identify the important nodes, edges, and
attributes that are contributing to the prediction while requiring no prior
knowledge of GNN models. We evaluate Illuminati in two cybersecurity
applications, i.e., code vulnerability detection and smart contract
vulnerability detection. The experiments show that Illuminati achieves more
accurate explanation results than state-of-the-art methods, specifically, 87.6%
of subgraphs identified by Illuminati are able to retain their original
prediction, an improvement of 10.3% over others at 77.3%. Furthermore, the
explanation of Illuminati can be easily understood by the domain experts,
suggesting the significant usefulness for the development of cybersecurity
applications. | Haoyu He, Yuede Ji, H. Howie Huang | 2023-03-26T22:20:17Z | http://arxiv.org/abs/2303.14836v1 | # Illuminat: Towards Explaining Graph Neural Networks for Cybersecurity Analysis
###### Abstract
Graph neural networks (GNNs) have been utilized to create multi-layer graph models for a number of cybersecurity applications from fraud detection to software vulnerability analysis. Unfortunately, like traditional neural networks, GNNs also suffer from a lack of transparency, that is, it is challenging to interpret the model predictions. Prior works focused on specific factor explanations for a GNN model. In this work, we have designed and implemented Illuminati, a comprehensive and accurate explanation framework for cybersecurity applications using GNN models. Given a graph and a pre-trained GNN model, Illuminati is able to identify the important nodes, edges, and attributes that are contributing to the prediction while requiring no prior knowledge of GNN models. We evaluate Illuminati in two cybersecurity applications, i.e., code vulnerability detection and smart contract vulnerability detection. The experiments show that Illuminati achieves more accurate explanation results than state-of-the-art methods, specifically, 87.6% of subgraphs identified by Illuminati are able to retain their original prediction, an improvement of 10.3% over others at 77.3%. Furthermore, the explanation of Illuminati can be easily understood by the domain experts, suggesting the significant usefulness for the development of cybersecurity applications.
## 1 Introduction
A graph is a structured data representation with nodes and edges, where nodes denote entities and edges denote the relationships between them. Graphs have been widely used in cybersecurity applications, such as the code property graph for code vulnerability detection [1], the API-call graph for Android malware detection [2], and the website network for malicious website detection [3].
Graph neural networks (GNNs) are multi-layer neural networks that can learn representative embeddings on structured graph data [4]. Because of that, GNNs have achieved outstanding performance for various cybersecurity applications, such as malicious account detection [5], [6], fraud detection [7], [8], software vulnerability detection [9], [10], [11], memory forensic analysis [12], and binary code analysis [13], [14], [15]. Existing works usually construct graphs from an application and train a GNN model that can learn the node or graph representation. The GNN model can be used for various downstream tasks, e.g., node classification [16], link prediction [17], and graph classification [18]. Taking binary code similarity detection as an example, recent works [13], [15] first transform binary code into an attributed control flow graph. With that graph, they train a GNN model that can represent each graph as an embedding. Finally, they use a similarity function, e.g., cosine similarity, to measure code similarity.
### _Motivation_
When a pre-trained GNN model is deployed in reality, it usually generates many positive alarms that need to be manually verified by cybersecurity analysts to confirm their existence. Unfortunately, existing models usually generate so many alarms that cybersecurity analysts are not able to verify them in a timely manner, which is known as the threat alert fatigue problem [19]. According to a recent study from FireEye, most organizations in the US receive 17,000 alerts per week while only 4% of them are properly investigated [20].
To investigate a generated alarm, cybersecurity analysts usually need to manually figure out why it is predicted as a positive. If such information could be provided automatically, it would greatly help to accelerate the manual investigation process. Unfortunately, like traditional deep neural networks, GNN models lack explainability. There have been efforts towards automatically explaining neural networks, such as convolutional neural networks [21] and recurrent neural networks [22]. However, they cannot be directly applied because GNNs work on graphs, which are irregular data structures. Each node in a graph can have an arbitrary number of neighbors, and their order may be arbitrary as well. Therefore, the traditional explanation methods fail to explain the interaction between node attributes without considering the message passing through edges.
On the other hand, several GNN explanation methods are proposed recently [23], [24], [25], [26], [27]. However, these methods mainly aim to provide an explanation of certain factors from the input graphs. Table I compares the recent works for GNN explanation. In particular, PGM-Explainer [23] and SubgraphX [24] apply a _node-centric_ strategy to identify the important nodes as the explanation result. Such a method ignores the edges, which are critical for the cybersecurity analysts to investigate the alarm. The other three methods, i.e., GNNExplainer [26], PGExplainer [25], and GraphMask [27], apply an _edge-centric_ strategy by identifying the important edges and regarding the constructed subgraph as the explanation result. Though the subgraph includes both important edges and nodes, the nodes identified in this way are usually not the
truly important ones. Besides nodes and edges, only GNNExplainer investigates the important attributes. However, GNNExplainer identifies the important attributes globally, rather than for each individual node or edge.
### _Requirement_
To accurately explain the GNN models, we believe an explanation method should satisfy the following requirements.
**Requirement #1: comprehensive explanation.** We derive comprehensiveness from the notion of completeness in [28]. For GNNs in particular, it refers to covering all the major factors in an input graph, which include nodes, edges, and attributes. The factors in a cybersecurity-based graph are specially constructed from real situations. The information contained in different factors is learned and used by GNNs. Distrust in GNNs persists as long as the decision-making is not clear to the cybersecurity analysts. A comprehensive explanation covering all the major factors is crucial for them to fully understand the GNN behaviors.
**Requirement #2: accurate explanation.** An explanation is accurate if it is able to identify the important factors that contribute to the prediction. For an accurately identified subgraph, its prediction probability should be close to, or even higher than, the original prediction probability. If a prediction error is not precisely addressed, the same error may lead to vulnerability under malicious attacks. An inaccurate explanation would not help diagnose the error but would instead enlarge the vulnerability.
**Requirement #3: no need for prior knowledge of GNN models.** The cybersecurity models are not easily accessible for two major reasons. First, cybersecurity applications require more complex neural network architectures [28]. Not only do the models consist of different types of neural networks, but the GNNs are also adapted differently from basic GNNs. Second, in real scenarios, the users oftentimes are using pre-trained models [29], especially for complex models. The prediction accuracy itself does not alleviate the users' distrust of a model, due to the lack of transparency. Explanation methods without the need for prior knowledge are easier to access and utilize because of their flexibility. Under these constraints, an explanation method with no prior knowledge requirement is preferred by cybersecurity analysts.
### _Contribution_
Motivated by these requirements, we design a comprehensive and accurate GNN explanation method, Illuminati. Given a pre-trained GNN model and a graph as inputs, Illuminati first learns the importance scores of edges and node attributes collectively by using edge masks and attribute masks. Illuminati then aggregates the learned masks and computes the importance scores of nodes. In the end, our method identifies the important subgraph towards the GNN prediction. Attribute masks are applied locally to each attribute of each node so that we can identify the attributes that are important to different nodes. Further, Illuminati does not require prior knowledge of the pre-trained model, which makes it more applicable to cybersecurity applications.
We compared the explanation performance of Illuminati with prior works on public datasets and cybersecurity application datasets. We focused on two cybersecurity applications, i.e., smart contract vulnerability detection and code vulnerability detection. The evaluation is based on the prediction change between the input graph and the explained subgraph. 87.6% of the subgraphs explained by Illuminati retain their original prediction, an improvement of 10.3% over the baseline methods. We then provide case studies for explaining the two real-world applications and a deep analysis of the model behaviors. We believe they can help cybersecurity analysts quickly understand and diagnose the alarms generated by applications using GNN models.
In summary, we make three major contributions.
* **New insight and method**. To the best of our knowledge, this is the first comprehensive, cybersecurity-specialized explanation method for cybersecurity applications using GNN models.
* **Extensive evaluation.** We evaluate the performance of Illuminati quantitatively with two cybersecurity applications. The results show Illuminati outperforms existing explanation methods in terms of not only accuracy but also cybersecurity requirements.
* **Cybersecurity case study.** We demonstrate the practical usage of Illuminati with the case study of cybersecurity applications. We interpret the model behavior from both correct and incorrect predictions through the output of Illuminati, as well as analyze how we can troubleshoot and improve the models.
The main novelty of Illuminati is to jointly consider the contributions of nodes, edges, and attributes. Also, we analyze and prove that explaining node importance is critical for graph classification tasks. Further, we find the node attributes should be explained individually for better comprehensiveness and accuracy.
Illuminati differs from existing works in providing a comprehensive and accurate explanation method specialized for real cybersecurity applications. In particular, a representative related work, GNNExplainer, is a generic method that only explains edges and does not explain node attributes individually.
## 2 Security Cases and Threat Models
### _Case #1: Code Vulnerability_
**Code vulnerability** is the flaw or weakness in the code that can cause risks and be exploited by the attackers to
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & Node & Edge & Attribute & No Prior Knowledge \\ \hline GNNExplainer & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ PGExplainer & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ GraphMask & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ PGM-Explainer & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ SubgraphX & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\
**Illuminati** & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison of different GNN explanation methods (\(\bullet\) = true; \(\bigcirc\) = false; ◐ = incomplete).
conduct unauthorized activities, e.g., stealing data [30]. For example, the straightforward risks of buffer overflow are data loss, software crashes, and arbitrary code execution, which can be exploited by attackers. A program is classified as vulnerable if it contains a vulnerability. The tested CWE dataset has three types of vulnerability: "double free", "use after free", and "NULL pointer dereference".
**Threat model.** The attackers can exploit the detected vulnerabilities to launch malicious actions by using various attack patterns against the software or the system. The attackers can exploit the vulnerability simultaneously through different rewarding approaches, such as hacking tools and remote commands. These attacks may eventually lead to software crashes and data loss and, more profoundly, financial loss and privacy leakage. This impacts both users and developers.
### _Case #2: Smart Contract Vulnerability_
**Smart contract vulnerability** is a coding error that can be exploited by attackers to cause financial loss. A program with such a coding error is classified as vulnerable. Smart contract vulnerability is dangerous because most smart contracts deal with financial assets directly, and the blockchain cannot roll back changes. We study two types of vulnerabilities, i.e., reentrancy vulnerability and infinite loop vulnerability. The reentrancy vulnerability occurs when the contract transfers funds before the balance is updated. The infinite loop occurs when the loop never finishes.
**Threat model.** The attackers can exploit the logical errors to conduct the attack by submitting a transaction to the blockchain. This can cause transaction failures or repeated transactions, which eventually lead to financial loss. For example, the malicious contract can drain funds from the reentrancy-vulnerable contract by recurrent reentrant calls [31]. The DAO attack is one exploitation case to such vulnerability. The attack conducts repeated withdrawals before the balance update. This attack has caused significant money stolen.
## 3 Background
### _Graph Neural Networks_
A **graph neural network (GNN)** \(\Phi\) takes an attributed graph \(G=(\mathcal{V},\mathcal{E})\) and \(\mathcal{X}\) as input, then generates a set of node representations \(\mathcal{Z}\) through hidden layers, where \(\mathcal{V}\) and \(\mathcal{E}\) denote nodes and edges, and \(\mathcal{X}\) denotes attributes.
A GNN \(\Phi\) takes two major operations to compute node representations \(\mathbf{h}\) in each layer [16, 32, 33, 34]. In the \(l\)-th layer, the GNN first computes the neighbor representation \(\mathbf{h}_{\mathcal{N}_{i}}^{(l)}=\texttt{Agg}(\{\mathbf{h}_{j}^{(l-1)}\mid v_{j}\in\mathcal{N}_{i}\})\) for node \(v_{i}\) by aggregating its neighbor nodes' representations from the previous layer. Then, the new node representation is updated from the aggregated representation and its representation from the previous layer: \(\mathbf{h}_{i}^{(l)}=\texttt{Update}(\mathbf{h}_{i}^{(l-1)},\mathbf{h}_{\mathcal{N}_{i}}^{(l)})\). The final representation for node \(v_{i}\) is \(\mathbf{z}_{i}=\mathbf{h}_{i}^{(L)}\) after \(L\) layers of computation. The final node representations are used for different tasks such as graph classification. A generic graph classification model contains a pooling method and fully connected layers after the GNN layers. The pooling method gathers node embeddings into a graph embedding, and the fully connected layers compute the classification.
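As an illustration, a minimal PyTorch sketch of one such layer and the generic graph classification pipeline (dense adjacency, mean aggregation and mean pooling; all names are our own) could look as follows.

```python
import torch
import torch.nn as nn

class MeanAggGNNLayer(nn.Module):
    """One GNN layer: Agg = mean over neighbors, Update = linear + ReLU."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, H, A):
        # A: (n, n) adjacency matrix; average the neighbor embeddings
        deg = A.sum(dim=1, keepdim=True).clamp(min=1)
        H_neigh = (A @ H) / deg                       # h_{N_i}^{(l)}
        return torch.relu(self.lin(torch.cat([H, H_neigh], dim=1)))

def graph_classify(gnn_layers, classifier, H, A):
    for layer in gnn_layers:
        H = layer(H, A)                               # L layers of Agg/Update
    z = H.mean(dim=0)                                 # pooling to graph embedding
    return classifier(z)                              # fully connected layers
```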
In this paper, we design our explanation method based on GNNs with such an architecture, so that our explanation method is more broadly applicable.
### _GNN Explanation_
**GNN explanation** takes an attributed graph and a pre-trained GNN model as input, then identifies the key factors that contribute to the prediction. Specifically, the task for the explanation methods is to identify the nodes, edges and attributes that contribute most to the prediction. For graph classification tasks, given an input graph \(G\) with attributes \(\mathcal{X}\) and a pre-trained GNN model \(\Phi\), the GNN will make the prediction by computing the label \(y\) with the probability \(P_{\Phi}(Y=y\mid G,\mathcal{X})\). The task of explanation methods is to reason why the input graph is classified as \(y\) by \(\Phi\). The explanation offers a set of important factors
Figure 1: Explaining an example code predicted as vulnerable by a pre-trained GNN model with different explanation methods. (a) shows an example source code with “double free” vulnerability, (b) shows the converted Attributed control and data Flow Graph (AFG) and a pre-trained model, and (c) shows the explanation results with the identified important factors colored. Specifically, GNNExplainer identifies important edges and treats the same attributes from different nodes identically, PGM-Explainer identifies important nodes only.
that contribute to the prediction, for example, by retaining important edges [25, 27].
In this paper, we develop the explanation method Illuminati for GNN models in the cybersecurity domain. Existing works only focus on specific factors to explain. Illuminati provides a comprehensive and accurate explanation for all the graph factors, which benefits the development of cybersecurity applications.
**Example with code vulnerability detection.** Figure 1(a) shows an example source code with a "double free" vulnerability, which happens when the second free (line 12) is called after the first free (line 9). Vulnerability detection methods first convert the source code to an attributed graph. For example, we construct the attributed graph from the source code as shown in Figure 1(b) by building the **A**ttributed control and data **F**low **G**raph (AFG) and encoding the syntax attributes for each node. A node denotes a statement, an edge denotes control or data flow between two statements, and the attributes include syntax features, such as which keywords are used in a statement. Using the AFGs and their corresponding labels (benign or vulnerable) as the training dataset, one can train a GNN model for vulnerability detection, e.g., Devign [9].
For the AFG generated from the example source code in Figure 1, nodes 9, 12 and the keyword free should be identified in the final explanation results. Figure 1(c) presents the output from two recent representative works and Illuminati. GNNExplainer estimates the edge importance from the AFG by learning soft continuous edge masks. In this example, GNNExplainer identifies \((4,9)\) and \((5,9)\) as important and considers this subgraph as the explanation result. This is not accurate because node 12 is missed, as none of its edges is considered important. PGM-Explainer samples a local dataset by random attribute perturbation to the AFG. With the perturbed nodes and the prediction change being recorded, a probabilistic graphical model is utilized to identify the important nodes. As a result, nodes 5, 9, and 11 are identified. The explanation from PGM-Explainer misses node 12. Such explanations will confuse a cybersecurity analyst or lead to a wrong conclusion.
## 4 Design Details of Illuminati
### _Overview_
The workflow of Illuminati is shown in Figure 2. Illuminati takes an attributed graph and a pre-trained GNN as input, then generates a key subgraph that contributes to the prediction, with importance scores quantifying the contribution of each factor.
First, Illuminati learns the importance scores for edges and node attributes collectively from the input graph and the pre-trained GNN. The edge masks and attribute masks are initialized by Illuminati. Using the same approach from GNNExplainer, Illuminati applies the masks as learnable parameters to the input graph. Similar to GNN training, the masks are learned iteratively from the feedback of GNN. The importance scores are then calculated from the learned masks. Next, Illuminati estimates the importance scores for nodes from the calculated importance scores for edges and node attributes. For each node, the importance scores from the related edges and attributes are aggregated for the estimation. Finally, an important subgraph is explained by removing the factors with low importance scores under certain constraints, e.g., the size of the subgraph.
Next, we discuss the detailed design of Illuminati. The main notations are summarized in Table II.
### _Objective Function_
An attributed graph contains graph structure and attributes. Our target is to find a subgraph \(G_{s}=(\mathcal{V}_{s},\mathcal{E}_{s})\) and a subset of attributes \(\mathcal{X}_{s}\) that contribute to the GNN prediction. In order to find the important factors, we use mutual information maximization as our objective function [26], which is defined in Equation 1:
\[\max_{G_{s}}MI(Y,(G_{s},\mathcal{X}_{s}))=H(Y)-H(Y\mid G=G_{s},\mathcal{X}=\mathcal{X}_{s}) \tag{1}\]
where \(Y\) is the predicted label for an input graph. The graph structure can be represented by an adjacency matrix \(A\) or an edge list \(\mathcal{E}\), and node attributes are represented by a node attribute matrix. However, a node consists of its connected edges and attributes. It is not possible to directly quantify the importance score for a node. Thus, node explanation is considered after edge and node attribute explanation. Here, \(G_{s}=(\mathcal{V},\mathcal{E}_{s})\).
**Estimation for edges.** The estimation of the objective function is not tractable since there are \(2^{|\mathcal{E}|}\) different subgraphs of \(G\), as each edge can be independently retained or removed. Following the existing works [25, 26], in consideration of relaxation, we adopt the Bernoulli distribution \(P(G_{s})=\prod_{(i,j)\in\mathcal{E}}P((i,j))\) for edge explanation, where \(P((i,j))\) is the probability of the edge \((i,j)\)'s existence. Therefore, our goal for edge explanation is considered as finding the correct \(P(G_{s})\).
**Estimation for attributes.** For basic GNNs, the same node attribute from different nodes shares the same GNN parameters in each layer, while some newly developed GNNs extend the usage of node attributes. For example, GAT [33] takes node attributes to calculate attention coefficients. Besides, the same node attribute performs differently when located in different nodes because of the nonlinear computation in GNNs. Node attributes should therefore be explained individually for a graph. We use the same method as edge estimation for node attribute estimation.
| Notation | Description |
| --- | --- |
| \(G\) | A graph |
| \(\mathcal{V}\) | The set of nodes in graph \(G\) |
| \(\mathcal{E}\) | The set of edges in graph \(G\) |
| \(\mathcal{X}\) | The sets of node attributes in graph \(G\) |
| \(\Phi\) | A GNN model |
| \(P\) | Prediction probability |
| \(m\) | Explanation mask |
| \(\omega\) | Importance score |

TABLE II: List of notations.

The mutual information quantifies the probability change of the GNN prediction with the input limited to \(G_{s}\) and \(\mathcal{X}_{s}\). An edge \((i,j)\) is considered unimportant when removing it does not largely decrease the probability of the prediction. With the pre-trained GNN \(\Phi\) fixed, we rewrite our objective function as minimizing \(H(Y\mid G=G_{s},\mathcal{X}=\mathcal{X}_{s})\), defined in Equation 2, where \(C\) is the number of prediction classes. In this way, we make sure the subgraph \(G_{s}\) and the subset of attributes \(\mathcal{X}_{s}\) achieve the maximum prediction probability.
\[\min_{P(G_{s}),P(\mathcal{X}_{s})}-\sum_{c=1}^{C}\mathbb{1}[y=c]\log P_{\Phi}(Y=y\mid G=G_{s},\mathcal{X}=\mathcal{X}_{s}) \tag{2}\]
### _Edge and Attribute Explanation_
Our goal for edge and attribute explanation is to learn the correct \(P(G_{s})\) and \(P(\mathcal{X}_{s})\). We introduce edge masks \(m^{(\mathcal{E})}\) and node attribute masks \(m^{(\mathcal{X})}\) as our learnable parameters. We take \(P(G_{s})=\sigma(m^{(\mathcal{E})})\) and \(P(\mathcal{X}_{s})=\sigma(m^{(\mathcal{X})})\), where \(\sigma(\cdot)\) denotes the _sigmoid_ function. The objective function can then be approximated as:
\[\min_{m^{(\mathcal{E})},m^{(\mathcal{X})}}-\sum_{c=1}^{C}\mathbb{1}[y=c]\log P_{\Phi}(Y=y\mid G=(\mathcal{V},\mathcal{E}\odot\sigma(m^{(\mathcal{E})})),\mathcal{X}=\mathcal{X}\odot\sigma(m^{(\mathcal{X})})) \tag{3}\]
where \(\odot\) denotes element-wise multiplication. Edge masks learn how much of the message from source nodes should be passed to destination nodes. Node attribute masks learn how much of the node attributes should be used for messages.
For undirected graphs, the edge is bidirectional, and information is passed back and forth. In this paper, we treat all graphs as directed graphs to estimate the message passing precisely. An edge mask for an undirected graph is computed by \(\hat{m}^{(\mathcal{E})}_{(i,j)}=\hat{m}^{(\mathcal{E})}_{(j,i)}=Agg(\{m^{(\mathcal{E})}_{(i,j)},m^{(\mathcal{E})}_{(j,i)}\})\), where \(Agg\) is a user-defined aggregation function. GNNExplainer and PGExplainer treat both directions equally by taking the average of the two directions. From our practical observation, the performance of the explanation can be improved by applying different aggregation functions.
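For concreteness, this direction aggregation can be written as a small PyTorch helper. The sketch below assumes index arrays that pair the two directed copies of each undirected edge; the function and argument names are illustrative, not Illuminati's actual code.

```
import torch

def symmetrize_edge_masks(edge_mask, fwd, rev, agg=torch.maximum):
    """Aggregate the two directions of each undirected edge with a user-defined Agg.

    fwd[k] and rev[k] index the (i, j) and (j, i) copies of undirected edge k;
    averaging the two directions instead would mimic GNNExplainer and PGExplainer.
    """
    merged = agg(edge_mask[fwd], edge_mask[rev])
    out = edge_mask.clone()
    out[fwd], out[rev] = merged, merged
    return out
```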
As Figure 2 suggests, mask training is similar to GNN training. First, we initialize the masks for edges and node attributes, respectively. Next, the masks are used to add weights to the edges and node attributes of the input graph as in Equation 3. Then, the weighted graph is fed into the pre-trained GNN for mask learning. With the feedback from the GNN, the mask values are optimized by minimizing the objective function. The masks are learned iteratively through these steps, and the importance scores are then gathered from the learned masks.
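The following is a minimal PyTorch sketch of this mask-learning loop for a single graph. It assumes a pre-trained graph classifier `gnn` that accepts per-edge weights through an `edge_weight` argument and returns one row of logits per graph; the interface, names, and hyperparameters are illustrative assumptions, not Illuminati's released implementation.

```
import torch

def learn_masks(gnn, x, edge_index, y, epochs=100, lr=0.01):
    # Learnable logits for edge masks m^(E) and per-node attribute masks m^(X).
    edge_mask = torch.randn(edge_index.size(1), requires_grad=True)
    attr_mask = torch.randn_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([edge_mask, attr_mask], lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        # Weight attributes and messages by sigmoid(mask), as in Equation 3.
        logits = gnn(x * torch.sigmoid(attr_mask), edge_index,
                     edge_weight=torch.sigmoid(edge_mask))
        # Negative log-probability of the originally predicted class y.
        loss = -torch.log_softmax(logits, dim=-1)[0, y]
        loss.backward()
        optimizer.step()
    return torch.sigmoid(edge_mask), torch.sigmoid(attr_mask)
```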
**Reparameterization trick.** The importance scores, as weights for mask training, are soft continuous values in \((0,1)\). However, an edge should either exist or not, meaning the edges should be indicated binarily. Using continuous importance scores causes the "introduced evidence" problem [35]: the importance scores add unexpected noise to the input, which does not reflect real-world explanations. Binary importance scores, however, are not differentiable and do not let us estimate the level of importance. Our solution is to reparameterize the importance scores into binary weights on the input graph, while the differentiable importance scores are still retained for importance estimation. Here, we apply the hard concrete distribution [36] as our reparameterization trick. We write the distribution for edges as:
\[s=\sigma((\log u-\log(1-u)+m^{(\mathcal{E})})/\beta),\qquad\epsilon=\min(1,\max(0,s(\zeta-\gamma)+\gamma)) \tag{4}\]
where \(u\sim\mathcal{U}(0,1)\) and \(\beta\) is the temperature. With \(\zeta<0\) and \(\gamma>1\), we stretch the concrete distribution to \((\zeta,\gamma)\); the mass falling in \((\zeta,0]\) and \([1,\gamma)\) ultimately collapses to 0 and 1, respectively, so part of the distribution is squeezed into binary values. Meanwhile, we take \(s=\sigma(m^{(\mathcal{E})}/\beta)\) as the binary concrete distribution for edges, i.e., the importance scores, and approximate the "sub-edges" as \(\mathcal{E}_{s}\approx\mathcal{E}\odot\epsilon\) for edge mask training.
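A direct transcription of Equation 4 might look as follows; the values of \(\beta\), \(\zeta\), and \(\gamma\) shown are illustrative choices rather than the paper's settings.

```
import torch

def hard_concrete(mask_logits, beta=0.5, zeta=-0.1, gamma=1.1):
    """Sample stretched, clamped gates from mask logits, following Equation 4."""
    u = torch.rand_like(mask_logits).clamp(1e-6, 1 - 1e-6)  # u ~ U(0, 1)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + mask_logits) / beta)
    eps = torch.clamp(s * (zeta - gamma) + gamma, min=0.0, max=1.0)
    # Differentiable binary concrete scores, retained for importance estimation.
    scores = torch.sigmoid(mask_logits / beta)
    return eps, scores
```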
### _Node Explanation_
With learned edge masks and node attribute masks, we need to quantify the importance scores for nodes. Inspired by the Bernoulli distribution over the graph structure, the contribution of a node \(v_{i}\) is quantified by:
\[\omega_{v_{i}}=\prod_{(i,j)\in\mathcal{E}_{i}^{+}}P((i,j))^{1/|\mathcal{E}_{i}^{+}|}\prod_{t\in\mathbf{x}_{i}}P(t)^{1/|\mathbf{x}_{i}|} \tag{5}\]
Here, the contribution of a node \(v_{i}\) is quantified from the importance scores of its outgoing edges \(\mathcal{E}_{i}^{+}\) and node attributes \(\mathbf{x}_{i}\). The contribution from edges should be normalized because a node connects an arbitrary number of edges: we multiply the importance scores of the connected edges and extract the \(|\mathcal{E}_{i}^{+}|\)-th root of the product. For node \(v_{i}\), we can define the importance score for outgoing edges as \(\omega_{\mathcal{E}_{i}^{+}}=\prod_{(i,j)\in\mathcal{E}_{i}^{+}}P((i,j))^{1/|\mathcal{E}_{i}^{+}|}\), and the importance score for node attributes as \(\omega_{\mathbf{x}_{i}}=\prod_{t\in\mathbf{x}_{i}}P(t)^{1/|\mathbf{x}_{i}|}\). However, there are two problems with Equation 5. First, the normalization method may degrade the important edges: an important node can be connected by both important and unimportant edges, and the unimportant edges decrease the overall importance of its message passing path. Second, node interactions are not considered. Nodes interact through GNN computation, which is what makes certain nodes important to the prediction.
Figure 2: The workflow of Illuminati. Given an input graph and a pre-trained GNN, Illuminati first learns the importance scores for edges and node attributes. Next, Illuminati estimates the importance scores for nodes from the previous calculation. The important subgraph is then explained by removing the unimportant factors.
To fix the first problem, we use an aggregation function, e.g., \(\max\), to calculate the contribution from \(\mathcal{E}_{i}^{+}\): \(\omega_{\mathcal{E}_{i}^{+}}=Agg(\{P((i,j))\mid(i,j)\in\mathcal{E}_{i}^{+}\})\). The aggregation function is changeable in order to adjust to different GNNs. But it cannot be directly applied to the incoming edges of \(v_{i}\). In GNN computation, a node's representation \(\mathbf{h}_{i}\) is aggregated from the messages passing through its incoming edges \(\mathcal{E}_{i}^{-}\), and the message information depends on the source node and its connected edge. Thus, we quantify the importance of the message through edge \((i,j)\) as:
\[\omega_{(i,j)}=P((i,j))\omega_{\mathbf{x}_{i}} \tag{6}\]
For a node's importance estimation, we consider the messages from and to the node (outgoing messages and incoming messages) separately, since their contributions can vary. With the solution to the first problem, we first aggregate the importance scores for the outgoing and incoming messages of node \(v_{i}\) separately:
\[\omega_{v_{i}}^{(out)}=Agg_{1}(\{\omega_{(i,j)}\mid(i,j)\in\mathcal{E}_{i}^{+}\}),\qquad\omega_{v_{i}}^{(in)}=Agg_{1}(\{\omega_{(j,i)}\mid(j,i)\in\mathcal{E}_{i}^{-}\}) \tag{7}\]
Then, we introduce a second aggregation function to compute the ultimate node importance score by gathering the outgoing and incoming messages. Here, we compute the ultimate importance score for \(v_{i}\) by:
\[\omega_{v_{i}}=Agg_{2}(\{\omega_{v_{i}}^{(out)},\omega_{v_{i}}^{(in)}\}) \tag{8}\]
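Putting Equations 6-8 together, the node scores can be computed in a vectorized way. A minimal sketch, assuming both \(Agg_{1}\) and \(Agg_{2}\) are \(\max\) (the tensor layout is an assumption for illustration):

```
import torch

def node_importance(edge_index, edge_scores, attr_scores):
    """Combine edge and attribute scores into node scores (Equations 6-8).

    edge_index: (2, E) long tensor of (source, destination) node indices.
    edge_scores: (E,) learned edge importance P((i, j)).
    attr_scores: (N,) per-node attribute importance omega_{x_i}.
    """
    s_idx, d_idx = edge_index
    msg = edge_scores * attr_scores[s_idx]                  # Equation 6
    n = attr_scores.size(0)
    out_score = torch.zeros(n).scatter_reduce(
        0, s_idx, msg, reduce="amax", include_self=False)   # Equation 7, outgoing
    in_score = torch.zeros(n).scatter_reduce(
        0, d_idx, msg, reduce="amax", include_self=False)   # Equation 7, incoming
    return torch.maximum(out_score, in_score)               # Equation 8
```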
**Synchronized mask learning.** For different purposes, some graph factors can share the same masks. For example, for undirected graphs, the two directions of the same edge can share the same edge mask in order to eliminate the pair difference problem. When node attribute explanation is not required, the node attributes of the same node \(\mathbf{x}_{i}\) can share the same mask; in this way, we are able to directly learn \(\omega_{\mathbf{x}_{i}}\) for each node. Thus, the graph structure is explained efficiently with a lower storage requirement.
## 5 Experiment
The experiments are conducted on a server with two Intel Xeon E5-2683 v3 (2.00GHz) CPUs, each of which has 14 cores and 28 threads. The code in this work is available for reproduction.1
Footnote 1: [https://github.com/iHeartGraph/Illuminati](https://github.com/iHeartGraph/Illuminati)
### _Dataset and Pre-trained GNN Models_
We evaluate eight datasets, as shown in Table III. We test the explanation methods on three public datasets used for the graph classification task, including two real-world datasets and a synthetic dataset. The two molecular datasets Mutagenicity [37] and BBBP [38] contain graphs whose nodes represent atoms and whose edges represent chemical bonds. BA-2motifs [25] is a motif-based synthetic dataset; each graph contains either a five-node house-like motif or a cycle motif. For code vulnerability detection, we use a well-labeled dataset from the NIST Software Assurance Reference Dataset (SARD), named Juliet [39], which not only labels the vulnerable functions but also provides the benign functions. For a clear explanation study, we require the datasets to be easy to understand and to achieve good prediction accuracy. The CWEs we select for the experiment are 415, 416, and 476, which represent "double free", "use after free", and "NULL pointer dereference", respectively. The source code is represented by AFGs. The datasets for smart contract vulnerability detection are from two platforms, Ethereum Smart Contracts (Reentrancy) and VNT chain Smart Contracts (Infinite loop). The contract graphs are constructed from the source code following the work of Zhuang _et al._[40]. The graphs for the two cybersecurity applications will be illustrated in Section 6.
We use three kinds of GNN models for the different applications. The models consist of two parts, i.e., GNN layers to generate node representations and functional layers to compute graph representations. The dataset splits for model training and the testing accuracies are shown in Table III. The trained models are then used as the pre-trained models for explanation evaluation.
We train a basic 3-layer GCN [16] for the public datasets. For the graph classification task, it is followed by a \(max\) and \(mean\) pooling layer and a fully connected layer. The model in Devign [9] is used for code vulnerability detection, which consists of a 3-layer gated graph recurrent network [41] with a _Conv_ module. DR-GCN [40] for smart contract vulnerability detection is derived from GCN with increased connectivity in each layer. A _max_ pooling layer and two fully connected layers are applied for graph representation after the 3-layer DR-GCN.
### Compared Works
We compare Illuminati with the following baseline GNN explanation methods: GNNExplainer [26], PGM-Explainer [23], and PGExplainer [25]. Here, GNNExplainer and PGM-Explainer do not require prior knowledge from GNNs. GNNExplainer targets edges for graph structure explanation: the importance of edges is differentiated by learning edge masks, and the important nodes are automatically extracted from the explained important edges. Attribute explanation is also provided by GNNExplainer.
| Methods | BBBP | Mutagenicity | BA-2motifs |
| --- | --- | --- | --- |
| PGM-Explainer | 74.6 | 57.2 | 41.0 |
| GNNExplainer | 75.1 | 69.9 | 41.0 |
| PGExplainer | 76.2 | 68.2 | 41.0 |
| Illuminati | **76.7** | **72.0** | 41.0 |

TABLE IV: EP (%) of explained subgraphs for public datasets, where BBBP and Mutagenicity are real-world molecular datasets and BA-2motifs is a synthetic dataset.
| Dataset | Avg. # of nodes | # of train/validation/test | Model | Accuracy |
| --- | --- | --- | --- | --- |
| BBBP | 24.065 | 1,629/205/205 | GCN | 0.878 |
| Mutagenicity | 30.317 | 4,453/453/453 | GCN | 0.805 |
| BA-2motifs | 25.000 | 800/100/100 | GCN | 1.000 |
| Reentrancy | 4.968 | 1,340/-/331 | DR-GCN | 0.926 |
| Infinite Loop | 3.686 | 1,056/-/261 | DR-GCN | 0.632 |
| CWE-415 | 9.962 | 666/-/334 | Devign | 0.949 |
| CWE-416 | 17.839 | 666/-/334 | Devign | 0.934 |
| CWE-476 | 9.132 | 666/-/334 | Devign | 0.841 |

TABLE III: The specifications of the datasets and the accuracy of the pre-trained models.
The same node attributes from different nodes are explained equally by learning the same attribute masks. PGM-Explainer [23] provides node explanation via a probabilistic graphical model built on a generated dataset: whether a node is perturbed and the resulting prediction change are recorded for dataset generation. Then the Grow-Shrink (GS) [42] algorithm is applied to shrink the dataset, and a Bayesian network is used to explain the GNN model. PGExplainer takes the node embeddings from the last layer of the GNN as input, then learns the edge masks with a multi-layer neural network. Similar to GNNExplainer, its explanation of the graph structure is determined only by the explained edges.
We use the shared source code of the compared works and reimplement the interfaces to support our datasets and pre-trained GNN models. We compare the different methods on graph structure explanation. Specifically, the subgraph is extracted only by nodes, and all the connected edges are retained. For GNNExplainer and PGExplainer, as we identify the top-\(R\) (rate) or top-\(K\) nodes, the edges that are originally connected in the input graph are restored. Thus, only node removal is conducted, and the number of remaining nodes is controlled to be equal for all the explanation methods. Also, we do not apply any additional constraints for the evaluation. We use \(max\) pooling as \(Agg_{2}\) for Illuminati.
### _Performance Comparison_
In this subsection, we present a quantitative analysis of the explanation methods with various evaluation metrics.
**Evaluation metrics.** In this work, we assume the important subgraphs should retain the original predictions, i.e., cause the least prediction change from the original graphs. We define _Essentialness Percentage (EP)_ as our evaluation metric:
\[\text{EP}=\frac{1}{N}\sum_{i=1}^{N}\mathbb{1}[y_{s}^{(i)}=y^{(i)}] \tag{9}\]
where \(\mathbb{1}[\cdot]\) evaluates to 1 if the statement in \([\cdot]\) is true and 0 otherwise, \(y_{s}\) denotes the prediction label of the subgraph, and \(N\) is the number of graphs in the dataset. EP, as the percentage of subgraphs that retain the original predictions, evaluates how essential the extracted factors are to the prediction. To validate the accuracy of the explained factors, we design two tests. Based on the objective of explanation, we first evaluate the EP of the subgraphs formed by the important factors. We also consider the reasonable intuition that if the important factors are removed, the remaining subgraphs are unlikely to retain the original predictions, which should cause a lower EP. Thus, we divide the graphs into the explained subgraphs and the remaining subgraphs after explanation, where the explained subgraphs are constituted by the important factors.
An accurate explanation should be able to identify the most important factors, so the explanation should be sparse. However, explanation methods provide continuous importance scores for the different factors rather than hard binary scores. In order to evaluate the sparsity of different explanation methods, we define \(Sparsity\) as follows:
\[Sparsity=\frac{1}{N}\sum_{i=1}^{N}\min|\mathcal{V}_{s}^{(i)}|\quad\text{s.t. }y_{s}^{(i)}=y^{(i)} \tag{10}\]
Sparsity represents the average minimum size of the subgraphs that retain the original GNN predictions over a dataset. A smaller sparsity means the explanation method identifies more important factors and ignores irrelevant factors, and thus provides more accurate explanations.
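Both metrics reduce to a few lines of Python. In this sketch, `predict` is an assumed callable returning the GNN label of a (sub)graph, and `subgraphs_by_size[i]` is assumed to map a candidate size \(k\) to the top-\(k\) explained subgraph of the \(i\)-th graph.

```
def essentialness_percentage(predict, graphs, subgraphs):
    """EP (Equation 9): fraction of subgraphs keeping the original prediction."""
    kept = [predict(sub) == predict(g) for g, sub in zip(graphs, subgraphs)]
    return sum(kept) / len(kept)

def sparsity(predict, graphs, subgraphs_by_size):
    """Sparsity (Equation 10): mean minimum subgraph size keeping the prediction.

    Assumes at least one candidate subgraph per graph retains the original label.
    """
    sizes = []
    for g, by_size in zip(graphs, subgraphs_by_size):
        label = predict(g)
        sizes.append(min(k for k, sub in by_size.items() if predict(sub) == label))
    return sum(sizes) / len(sizes)
```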
**EP of explained subgraphs.** We use the testing splits from Table III for explanation method evaluation. All the explanation methods explain a graph by generating importance scores for its different factors, and it is unclear whether a given factor should be kept. Thus, we evaluate the performance of the explanations by comparing the EP under the same subgraph size. First, we test the explanation methods on the public datasets with the trained basic 3-layer GCN, as shown in Table IV. We extract the top-10 nodes for Mutagenicity and BBBP, and the top-5 for the synthetic dataset BA-2motifs. The results suggest that PGExplainer, as an explanation method requiring prior knowledge, outperforms the compared methods without prior knowledge. Overall, the results show that Illuminati achieves the best EP on the real-world datasets and outperforms the other explanation methods.
The explanation results for the two cybersecurity applications are shown in Figure 3. Table V summarizes the values at the midpoints of Figure 3 (i.e., \(R=0.5\) and \(K=6\)). For smart contract vulnerability detection, we vary the rate of extracted nodes; for code vulnerability detection, we vary the number of extracted nodes. If the graph size to be explained is larger than the input graph size, the graph is not considered for evaluation.
In general, Illuminati shows the highest EP among the explanation methods in both applications, meaning it identifies the important subgraphs more accurately. On the real-world datasets, PGM-Explainer does not perform as well as it does on the public and synthetic datasets; the real-world datasets contain larger and more arbitrary sets of node attributes. PGExplainer outperforms the other explanation methods on CWE-415, while the performance of Illuminati is close to PGExplainer. To acquire better explanation accuracy, PGM-Explainer must be re-executed whenever the size of the subgraphs changes, while GNNExplainer and Illuminati only need to be executed once. As an explanation method that requires prior knowledge of GNNs, PGExplainer generally performs better than the peer explanation methods without prior knowledge. However, without exploring nodes in depth, PGExplainer generally does not gain a higher EP than Illuminati. The results also suggest that as the size of the explained subgraphs increases, the explanation becomes more accurate. We use real-world datasets, in which a single node rarely has an extremely high or low contribution; the predictions rely on the interactions between different nodes.
**EP of remaining subgraphs.** As shown in Figure 4 and Table VI, the remaining subgraphs are less related to the GNN predictions. As observed from the pair of Figure 3 and Figure 4, an increase in the EP of explained subgraphs does not directly correspond to a decrease in the EP of remaining subgraphs.
Our objective is to identify the important subgraphs that retain the original predictions, while the interaction of the remaining nodes can contribute to the prediction as well. GNNs are complex, non-linear models. The important subgraphs are not assembled from all the individually important nodes, but from the important node interactions. The remaining subgraphs may contain positive node interactions and important nodes, which are merely weaker than those in the explained subgraphs. Thus, the objectives of obtaining the maximum EP of explained subgraphs and the minimum EP of remaining subgraphs are better considered separately, especially for complex models like GNNs. It has been shown that GNNs can be attacked easily by correctly identifying important nodes. The domains of attack and explanation share common techniques, e.g., counterfactual explanation. With an explanation method, an attack can be conducted by removing important nodes or by identifying the important nodes for an incorrect prediction.
**Sparsity.** By default, GNNs make a certain prediction even from an empty graph. The default prediction for smart contract vulnerability detection is vulnerable, while that for code vulnerability detection is benign. To better differentiate the performance of each explanation method, \(Sparsity\) is only evaluated on graphs whose predictions are opposite to the default. We collect the \(Sparsity\) of the different explanation methods in Table VII. Overall, Illuminati achieves the smallest \(Sparsity\), which is consistent with the results on the EP of explained subgraphs. For the larger graphs from code vulnerability detection, fewer than half of the nodes lead to the final predictions. This indicates the vulnerability does not take up a big part of the code, under the assumption that GNNs make the prediction by correctly capturing the vulnerability factors. On CWE-476, the GNN identifies the significant difference between benign and vulnerable code, since it is able to determine the vulnerability within two nodes on average. The way the GNN makes predictions for this dataset is mainly to find the benign factors rather than the vulnerable factors. It also implies that the dataset may not be strong or complete enough to cover all possible coding situations, as the GNN only needs to capture the difference between the graphs with different labels.
**Time complexity.** Table VIII shows the execution time of every explanation method. We use the same training split from Table III for training PGExplainer.
| Methods | Reentrancy | Infinite loop | CWE-415 | CWE-416 | CWE-476 |
| --- | --- | --- | --- | --- | --- |
| PGM-Explainer | 76.1 | 70.1 | 86.5 | 85.6 | 85.3 |
| GNNExplainer | 63.1 | 56.7 | 83.2 | 79.6 | 72.8 |
| PGExplainer | 72.2 | 59.8 | **72.2** | **59.9** | 59.6 |
| Illuminati | **51.7** | **58.2** | **72.2** | 62.0 | **49.4** |

TABLE VI: EP (%) of remaining subgraphs for cybersecurity applications. \(R=0.5\) for smart contract vulnerability detection; \(K=6\) for code vulnerability detection.
Figure 4: The explanation results for cybersecurity applications. We obtain the EP of the remaining subgraphs after removing the previously explained subgraphs. The graph sizes here refer to the explained subgraphs.
| Methods | Reentrancy | Infinite loop | CWE-415 | CWE-416 | CWE-476 |
| --- | --- | --- | --- | --- | --- |
| PGM-Explainer | 61.3 | 58.6 | 79.6 | 74.3 | 72.2 |
| GNNExplainer | 81.3 | 72.0 | 81.7 | 74.9 | 85.0 |
| PGExplainer | 84.9 | 73.6 | **90.1** | 77.2 | 92.2 |
| Illuminati | **93.4** | **78.2** | 88.0 | **80.8** | **97.3** |

TABLE V: EP (%) of explained subgraphs. \(R=0.5\) for smart contract vulnerability detection; \(K=6\) for code vulnerability detection.
Figure 3: Explanation results for cybersecurity applications. We obtain the EP of explained subgraphs by varying the explained subgraph size.
GNNExplainer overall generates the fastest explanations, since it directly and only learns edge masks for each graph (for graph structure). For PGExplainer, the extra training cost takes the majority of the time consumption, while no extra mask learning is needed at explanation time. PGM-Explainer spends its running time on node attribute perturbation and calculation. Its time consumption is affordable for the simple datasets, because the graph size is limited and PGM-Explainer provides accurate explanations; for the complex cybersecurity datasets, however, more time is needed for sampling the perturbed dataset. The time complexity of Illuminati is slightly higher than that of GNNExplainer due to the additional computation for nodes and attributes. The time consumption of Illuminati is acceptable since it provides a comprehensive and accurate explanation. A much larger time cost would be necessary if different explanation methods were combined for a comprehensive explanation.
### _Ablation Study_
**Attribute explanation study.** We further evaluate the node attribute explanation of Illuminati, as shown in Table IX. Generally, the highest EP values are obtained by Illuminati. This shows that node attributes contribute to the prediction differently, so importance scores should be applied to them individually.
The results also indicate that only a small number of node attributes are highly important to the prediction. Compared with node explanation, an individual node attribute can contribute more to the prediction than an individual node in the two applications. Intuitively, an attack on node attributes can thus be conducted easily. Besides, such an attack is not as noticeable as an attack on nodes, especially for the CWE-476 dataset.
**Ablation study for node explanation.** The node importance scores are gathered from the importance scores of message passing, which requires the importance scores of both edges and node attributes. Here, we gather the importance scores for nodes by edge explanation only and by attribute explanation only, in order to verify that node explanation requires both edge and node attribute explanation. The importance scores from edges only are gathered in the same way as in the above experiments, without considering the importance scores from node attributes. The importance scores from node attributes only are gathered from synchronized attribute mask learning. We evaluate the EP of the explained subgraphs in Table X.
Compared with the results in Table V, the node explanation by edges only or attributes only is generally not as accurate as when they are combined. The attribute-only explanation overall obtains lower EP in smart contract vulnerability detection but higher EP in code vulnerability detection. Comparing the differences, the results from Reentrancy indicate that the graph structure makes the key contribution to the prediction, while those from CWE-416 and CWE-476 indicate the opposite. Node attributes can play an important role in estimating the importance of each node. For graph structure explanation, especially when it comes to unimportant node removal, it is necessary to have the nodes specially explained.
### _Evaluation on Node Classification Task_
Additionally, we study the explanation performance on the node classification task.
| Methods | Reentrancy | Infinite loop | CWE-415 | CWE-416 | CWE-476 |
| --- | --- | --- | --- | --- | --- |
| Edge only | 83.4 | 72.8 | 82.9 | 74.9 | 80.2 |
| Attribute only | 67.4 | 72.0 | 83.5 | 81.4 | 95.5 |

TABLE X: EP (%) of explained subgraphs for the node explanation ablation study.
| Methods | Reentrancy | Infinite loop | CWE-415 | CWE-416 | CWE-476 |
| --- | --- | --- | --- | --- | --- |
| PGM-Explainer | 93.3 | 62.7 | 292.4 | 367.5 | 269.3 |
| GNNExplainer | 37.8 | 35.6 | 92.9 | 94.2 | 91.8 |
| PGExplainer (training) | 0.8 (68.3) | 0.6 (52.8) | 2.4 (83.5) | 3.2 (118.8) | 3.0 (100.9) |
| Illuminati | 52.5 | 37.6 | 99.3 | 103.4 | 98.7 |

TABLE VIII: Time complexity (seconds). For PGExplainer, the one-time training cost is given in parentheses.
TABLE VII: Minimum graph size to retain the original GNN predictions (\(Sparsity\)).
| Methods | Reentrancy | Infinite loop | CWE-415 | CWE-416 | CWE-476 |
| --- | --- | --- | --- | --- | --- |
| GNNExplainer | 74.3 | 64.0 | **94.3** | **87.4** | 88.9 |
| Illuminati | **92.7** | **71.6** | **94.3** | 85.3 | **98.5** |

TABLE IX: EP (%) of explained subgraphs for the attribute explanation study. We pick the top-3 node attributes for smart contract vulnerability detection and the top-5 for code vulnerability detection.
**Background.** We use the basic Graph Convolutional Network (GCN) [16] as the node classifier. GCN is a GNN with the following propagation rule for one layer:
\[H^{(l+1)}=\phi(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}H^{(l)}W^{(l)}) \tag{11}\]
Here, \(\phi\) is the activation function, \(A\) is the adjacency matrix, \(\tilde{A}=A+I\), and \(\tilde{D}_{ii}=\sum_{j}\tilde{A}_{ij}\). For the node classification task, fully connected layers are adopted after the GCN to compute the classification.
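For reference, the dense form of Equation 11 takes only a few lines; this sketch assumes an unweighted adjacency matrix and omits bias terms.

```
import torch

def gcn_layer(h, adj, weight, act=torch.relu):
    """One GCN propagation step (Equation 11), dense-adjacency version."""
    a_tilde = adj + torch.eye(adj.size(0))                  # A~ = A + I
    d_inv_sqrt = a_tilde.sum(dim=1).pow(-0.5)               # D~^{-1/2}
    a_norm = d_inv_sqrt[:, None] * a_tilde * d_inv_sqrt[None, :]
    return act(a_norm @ h @ weight)
```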
**Evaluation.** We use a 2-layer GCN with 64 hidden channels per layer, followed by a fully connected layer, for node classification. We adopt ReLU as the activation function. The training and testing split is the public fixed split from [43]. Table XI shows the information of the datasets we use. We use the test split for the explanation. We compare Illuminati with GNNExplainer [26]. Here, we extract the top-5 and top-10 nodes for both datasets and evaluate the performance with the Essentialness Percentage (EP) metric. Since we use a 2-layer GCN, the extracted nodes are within the 2-hop neighbors.
As shown in Figure 5, Illuminati obtains an EP 7.1% higher than GNNExplainer on average. On both datasets, Illuminati outperforms GNNExplainer distinctly when the number of extracted nodes is small. Such a promising result again shows that it is necessary to jointly consider edges and attributes for node explanation. We believe Illuminati will also perform well on GNN adaptations and alleviate the limitations of general explanation methods in cybersecurity applications.
## 6 Case Study
In this section, we present two case studies of applying Illuminati to real cybersecurity applications, code vulnerability detection and smart contract vulnerability detection. In order to obtain straightforward results and comprehensive evaluations, we focus on code vulnerability detection.
### **Case #1: Code Vulnerability Detection**
**Background.** We summarize three steps for code vulnerability detection using GNN models. (1) Graph extraction. Code property graphs (CPGs) are generated as the graph representation of the source code. A node represents a program construct such as a variable, statement, or symbol; an edge contains the direction and relationship information for a pair of nodes, such as control flow and data flow. (2) Attribute encoding. To better represent the source code and fit the code property graphs to GNNs, node or edge attributes have to be encoded. Node attributes are the most widely used attributes in code vulnerability detection. (3) Model learning. This application is conducted as a graph classification task. With the code property graphs and node attributes as input and the labels of benignity and vulnerability as targets, the model is learned from the dataset.
In this experiment, we use AFGs as our CPGs. Therefore, a node denotes a statement, and an edge contains the direction and relationship information (control flow and data flow) for a pair of nodes. We use Joern [44] to extract the AFGs from C/C++ code. We make sure each graph contains 32 nodes. The keywords of each statement are extracted for node attribute encoding. A node attribute indicates whether the statement contains the corresponding keyword, e.g., char, ==, or *, so it is encoded as binary. There are 96 node attributes for each node. We use the Devign model as the code vulnerability detector.
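As an illustration of this encoding step, a binary keyword indicator can be computed as below; the keyword list is a hypothetical subset of the 96 keywords, not the actual vocabulary.

```
KEYWORDS = ["char", "int", "free", "malloc", "if", "==", "*", "NULL"]  # illustrative subset

def encode_statement(statement):
    """Binary node attributes: 1 if the statement contains the keyword, else 0."""
    return [int(keyword in statement) for keyword in KEYWORDS]

# encode_statement("data = (char *)malloc(100);") -> [1, 0, 0, 1, 0, 0, 1, 0]
```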
**Evaluating the output of Illuminati.** We measure the reduction in prediction accuracy for each case, i.e., the decrease in predicted probability when only the explained subgraph is kept.
The vulnerability in Figure 6 is caused by "double free". Different from Figure 1, the source code here calls a function. The key reasons for the vulnerability are the same, but the model considers the function nodes in Figure 6 as contributing. This is reasonable, as the function lies on the path from line 12 to line 2. The output of Illuminati reveals both the model's competence and its weakness: it successfully captures the vulnerability, but the performance drops as the source code becomes more complex. In our graph generation technique, functions are not specially identified; they could instead be opened up and embedded into the main function.
Figure 7 shows an example from the "use after free" dataset. The output suggests that the model's decision-making matches human knowledge. The importance score of the edge indicates the edge is not highly important to the prediction, which may be a potential risk.
The dereference of a NULL pointer leads to the vulnerability in Figure 8. The explanation results suggest the key reason for the prediction is node 4, where the pointer is assigned NULL. However, the model captures line 6 rather than line 7, which contradicts human understanding. This is understandable because the mirrored version of the benign code contains the symbol != in the if condition (line 6). The explanation suggests the dataset is well learned by the model, but it gives less confidence that the model would transfer to real applications.
Figure 9 shows the explanation results from the other methods for the same vulnerable code shown in Figure 7. One can observe that Illuminati significantly outperforms the other explanation methods by providing a comprehensive explanation for nodes, edges, and attributes. Missing one explanation factor can cause significant difficulty for analysis. The degraded explanation accuracy of the other methods is also visible in the reductions in prediction accuracy.
| Dataset | \(|\mathcal{V}|\) | \(|\mathcal{E}|\) | \(|\mathcal{Y}|\) | \(|\mathcal{X}|\) | Accuracy |
| --- | --- | --- | --- | --- | --- |
| Cora | 2,708 | 5,429 | 7 | 1,433 | 0.807 |
| Citeseer | 3,327 | 4,732 | 6 | 3,703 | 0.711 |

TABLE XI: The specifications of the datasets and the accuracy of the pre-trained models.
Figure 5: The explanation results of the node classification tasks.
PGExplainer, as a global explanation method, may not provide customized results for a single input graph. While the reductions in prediction accuracy are significant, Illuminati achieves the lowest reduction and provides a human-understandable explanation. As observed from Figure 3 and Table VII, the graph size plays an important role in the prediction. Wrong information from explanation methods may lead to more confusion and to wrong conclusions about the models. Trusting Illuminati, cybersecurity analysts can easily map its output to the source code and understand the model behavior.
Besides, Illuminati alleviates the limitations of graph-specific explanation methods: descriptive accuracy (DA), efficiency, robustness, and stability [45]. Illuminati greatly improves DA and efficiency, as the experiments show. Specifically, in code vulnerability detection, which lines of code contribute to the prediction is important to cybersecurity analysts. Each line is represented as a node in the AFG, which makes it vital to accurately determine the importance of nodes. Illuminati accurately identifies the important lines and keywords. By gathering both edge and attribute information for node explanation, Illuminati is robust against edge perturbation. Similarly, we believe stability is also preserved.
**Using the output of Illuminati.** Explanation methods with high EP should be able to provide accurate information on which part of the code is considered vulnerable by the model. They can identify the vulnerable lines when the model's decision-making matches human knowledge. However, the usage of Illuminati is not limited to this. First, Illuminati helps cybersecurity analysts pinpoint the model's misbehavior even when the model gives correct predictions. Second, Illuminati helps analysts interpret why mispredictions are made. Developers can identify the pitfalls observed in a recent study [46] and take certain actions to troubleshoot and optimize the model based on the output of Illuminati.
More results on paired code are shown in Figure 10. Illuminati detects the important vulnerable factors in Figure 10(a) and (c) and the benign factors in Figure 10(b) and (d), according to their predicted labels.
Figure 8: The case study for “NULL pointer dereference”. The reduction in prediction accuracy is 0.003, from the original 1.000.
Figure 6: The case study for “double free”. The reduction in prediction accuracy is 0.005, from the original 1.000.
Figure 7: The case study for “use after free”. The reduction in prediction accuracy is 0.966, from the original 0.980.
free", where the model captures the vulnerability and Illuminati successfully identifies the vulnerable lines. The explanation for Figure 10(b) shows benign statements from the source code. Combining Figure 10(a) and (b), the explanation suggests that the model makes the classification by detecting vulnerable factors. The vulnerability in Figure 10(c) is caused by "NULL pointer dereference". Comparing Figure 10(c) and (d), the model detects the vulnerability by the value assignment to the variable, where NULL leads to vulnerability. The model also detects the difference from the conditions. From the dataset, vulnerable functions do not contain a lot of "false" conditions. The model fails to identify a key statement, i.e., line 7 in Figure 10(c) because vulnerable and benign code both contain such statements. Therefore, the model for this dataset is vulnerable to attacks and is not trustable even it achieves high accuracy. To alleviate the issues, different conditions should be considered to fill the dataset, and more semantic information can be extracted.
Furthermore, we evaluate cases of mispredictions in Figure 11, where the gray box is the explanation of the GNN predictions (mispredictions); a further explanation for the correct label is shown in the white box. The labels of the left column are benign, and those on the right are vulnerable. Here we show results from CWE-416 and CWE-476, since the mispredictions from CWE-415 mostly happen on small graphs. As can be observed, Illuminati suggests the GNNs still capture the important lines for the correct label. The wrong prediction in the left column comes from printLine, which indicates the use of variables from the model's perspective. The model emphasizes the use of variables but fails to determine that the variable is not NULL. More varied situations should be added to training, e.g., situations where a variable is used multiple times without being freed in CWE-416. The results show the GNNs are able to detect the vulnerability for CWE-416 in the right column, but the benign lines take the lead through the GNN computation, as the importance scores in the gray box do not vary largely.
Figure 11: The explanation results of mispredictions. The gray box is the explanation for the mispredictions.
Figure 10: The explanation results of two pairs of mirrored source codes.
Figure 9: The explanation result from “use after free” example. The accuracy reductions are 0.973, 0.980 and 0.977, respectively.
The for loop from CWE-476 exists in code with different labels, so the GNNs randomly assign the importance of the statements in line 3 to different labels. The vulnerability is identified but not strongly enough, because the use of the variable is inside an if condition (line 5). printLine usually indicates the use of variables, but here the argument is a string, which is correctly observed as a benign statement.
From interpreting the output of Illuminati, the pitfalls found in this application include spurious correlation, inappropriate performance measures, and lab-only evaluation [46]. Spurious correlation is caused by artifacts of the dataset: different coding styles, lengths of code, and logical situations are not completely covered. This, together with lab-only evaluation, can be alleviated by collecting datasets with different cases from the real world. Only evaluating the prediction accuracy may lead to neglect of the dataset issues, which gives developers the wrong conclusion about the model. Inappropriate performance measures are addressed by strong explanation methods such as Illuminati, whose output developers can interpret to understand the decision-making and the potential risks of the model. The output of Illuminati suggests several internal drawbacks of the models as well, e.g., the model does not learn semantic meaning. The models we use do not make full use of the source code information; without enough semantic information about the statements and the types of edges, the model is prevented from making the correct prediction in Figure 11. The developers can build a solid strategy to improve the model with the output of Illuminati.
### **Case #2: Smart Contract Vulnerability Detection**
We consider cases from the Reentrancy dataset, as the contract graphs from vulnerable source code contain enough nodes for a case study. The contract graph is constructed according to the work of Zhuang _et al._[40]. The nodes in a contract graph are categorized into major nodes, secondary nodes, and fallback nodes. The major nodes represent important functions, the secondary nodes model critical variables, and the fallback nodes simulate the fallback function. The edges indicate the relationships between nodes, where the edge attributes are only used for graph construction, not in DR-GCN. The node attributes are derived from the types of functions, the operations on variables, etc. Figures 12 and 13 show case studies from the Reentrancy dataset. The node M1 is the function that calls the withdraw function, M2 is the built-in call.value function, and M3 is the withdraw function, all of which are major nodes.
The vulnerability in Figure 12 comes from the value being assigned (line 5) after checking whether the ether sending (line 2) goes through. From the explanation result, the GNN model successfully identifies the location of the vulnerability.
For the same vulnerability in Figure 13, however, the GNN captures the factors leading to the right prediction rather than the vulnerable statements. In the code, the transaction (line 5) comes after the if statement (line 2), so the model predicts the function as vulnerable. The explanation result shows the two key statements for the prediction. But they are not exactly the ground truth causing the vulnerability, so the decision-making of the model is still confusing to users. To address the issue, we show the mirrored benign code as follows.
```
function withdraw(uint amount) public {
    if (credit[msg.sender] >= amount) {
        credit[msg.sender] -= amount;
        require(msg.sender.call.value(amount)());
    }
}
```
In the mirrored benign code, the value assignment and ether sending are under the if condition: the value is assigned first, and then the call.value function is called. Accordingly, the path in the corresponding contract graph would be S1 \(\rightarrow\) S2 \(\rightarrow\) M2. Here, S1 does not directly connect with M2, which causes node representations different from those of the code in Figure 13, and these differences are learned by the GNN model.
Figure 12: An example of Reentrancy. The reduction in prediction accuracy is 0.332, from the original 0.991.
Figure 13: An example of Reentrancy. The reduction in prediction accuracy is 0.186, from the original 0.845.
Thus, a potential problem with the dataset is identified.
A common pitfall of the training datasets in the two applications is spurious correlation, specifically the lack of various real-world coding situations. The models may not make correct predictions on a different dataset, because the output of Illuminati suggests the models have learned some artifacts rather than the real difference between vulnerability and benignity. The edge type is also neglected in this application. How developers can utilize the output of Illuminati to improve the model is similar to code vulnerability detection.
## 7 Related Work
**Graph neural networks.** In recent years, there have been a great number of advances in GNNs. Scarselli _et al._[47] first introduced the GNN as a neural network model, extending traditional neural networks to graph data processing. Bruna _et al._[48] extended convolutional methods to graph structures by analyzing the construction of deep neural networks on graphs. Defferrard _et al._[49] proposed the extension of CNNs to graphs using Chebyshev polynomials. GCN [16] identified simplifications of the previous work and presented fast approximate convolutions on graphs. Many GNN models, including GCN [16], GraphSAGE [32], and GAT [33], generate node representations iteratively by aggregating and updating the attributes of neighboring nodes. The node representations are then used in different tasks like node classification [16, 50], link prediction [17, 51], and graph classification [18, 52].
**Deep learning explanation.** The generic purpose of an explanation method is to determine the decision-making of a complex deep learning model. The two major classes of explanation methods are black-box based [53, 54] and white-box based [55, 56]. Methods with various techniques have been proposed to uncover the behaviors of deep learning models. LIME [53] and the work of [57] treat the whole deep learning model as a black box; the model decision is explained by directly identifying the important factors from the input. Methods such as LRP [58] and DeepLIFT [59] decompose the output backward through the model layers and explain the contribution of neurons. Rather than providing a post-hoc explanation for deep learning models, CapsNet [60] is built as a DNN model with explainability embedded in its design. Some explanation methods work on specific models, e.g., CNNs [21] and RNNs [22].
**GNN explanation.** GNNExplainer [26], as the pioneering explanation method directly targeting GNNs, provides edge and node attribute explanations by learning the corresponding masks, which represent the importance scores. PGExplainer [25] provides an inductive edge explanation method that works on a set of graphs by learning edge masks with a multi-layer neural network. GraphMask [27], however, learns the edge masks for each layer of the GNN and predicts whether an edge can be dropped while retaining the prediction. Differently, PGM-Explainer [23] identifies important nodes by random node attribute perturbation and a probabilistic graphical model. SubgraphX [24] explains graphs at the level of node-assembled subgraphs via Monte Carlo tree search with the Shapley value as the scoring function. Different explanations for GNNs have recently been explored: CF-GNNExplainer [61] targets counterfactual explanations by learning a binary perturbation matrix that sparsifies the input adjacency matrix. With the evolution of GNN explanation methods, a recent survey [35] categorized graph explanation methods into two major levels, instance-level and model-level. The aforementioned methods belong to the instance level, providing explanations for specific inputs. Model-level methods generate a typical graph pattern that explains how the prediction is made. XGNN [62] directly explains a GNN model by graph generation, using a reinforcement learning method. If trained on multiple graphs, PGExplainer is able to provide model-level explanations.
## 8 Discussion
Our method can be adjusted to different cybersecurity applications using GNNs, since it is comprehensive and the importance scores are learned from the feedback of the GNNs. The design is based on the common architecture of GNNs without requiring prior knowledge. The experiments further show that Illuminati improves the performance in both graph classification and node classification.
In this paper, we mainly focus on node attributes for attribute explanation, while the method can be adjusted to different attributes. As the importance scores for edges and attributes are learned, node importance scores can be obtained. Several applications, including code vulnerability detection, construct graphs with edge attributes, but the attributes are not learned by the GNNs. Edge attributes, serving as edge labels in many applications, can be learned and utilized by relational models. An edge is then denoted as \((i,j,r)\), where \(r\) indicates the relationship of the edge, and there will be sets of edge lists categorized by the relationships.
The explainability of GNNs is not as well-explored as that of other traditional deep learning models. Besides understanding the factors contributing to the prediction, there is significant space to fill, e.g., global explanation and causal explanation. It is observed from the EP of the remaining subgraphs that these subgraphs still contribute to the prediction. Different types of explanations are needed for cybersecurity applications. Illuminati can easily be adjusted for counterfactual explanations by adopting CF-GNNExplainer [61]. Due to the similarity between explanation and attack, prior work [63] conducts backdoor attacks against GNNs with explanation methods; Illuminati can likewise be utilized for attack and defense.
## 9 Conclusion
In this paper, we propose Illuminati, an explanation method that provides a comprehensive explanation for GNNs. By learning the importance scores for both graph structure and node attributes, Illuminati is able to accurately explain the prediction contribution from nodes, edges, and attributes. We apply Illuminati to two cybersecurity applications. Our experiments show Illuminati achieves high explanation fidelity. We also demonstrate the practical usage of Illuminati in cybersecurity applications. |
2306.12327 | Learning the galaxy-environment connection with graph neural networks | Galaxies co-evolve with their host dark matter halos. Models of the
galaxy-halo connection, calibrated using cosmological hydrodynamic simulations,
can be used to populate dark matter halo catalogs with galaxies. We present a
new method for inferring baryonic properties from dark matter subhalo
properties using message-passing graph neural networks (GNNs). After training
on subhalo catalog data from the Illustris TNG300-1 hydrodynamic simulation,
our GNN can infer stellar mass from the host and neighboring subhalo positions,
kinematics, masses, and maximum circular velocities. We find that GNNs can also
robustly estimate stellar mass from subhalo properties in 2d projection. While
other methods typically model the galaxy-halo connection in isolation, our GNN
incorporates information from galaxy environments, leading to more accurate
stellar mass inference. | John F. Wu, Christian Kragh Jespersen | 2023-06-21T15:11:17Z | http://arxiv.org/abs/2306.12327v1 | # Learning the galaxy-environment connection with graph neural networks
###### Abstract
Galaxies co-evolve with their host dark matter halos. Models of the galaxy-halo connection, calibrated using cosmological hydrodynamic simulations, can be used to populate dark matter halo catalogs with galaxies. We present a new method for inferring baryonic properties from dark matter subhalo properties using message-passing graph neural networks (GNNs). After training on subhalo catalog data from the Illustris TNG300-1 hydrodynamic simulation, our GNN can infer stellar mass from the host and neighboring subhalo positions, kinematics, masses, and maximum circular velocities. We find that GNNs can also robustly estimate stellar mass from subhalo properties in \(2d\) projection. While other methods typically model the galaxy-halo connection in isolation, our GNN incorporates information from galaxy environments, leading to more accurate stellar mass inference.
Machine Learning, Galaxy-halo connection
## 1 Introduction
In the current \(\Lambda\)CDM paradigm of hierarchical galaxy formation, the galaxy-halo connection is crucial for understanding how galaxies form and evolve, and for constraining the small-scale clustering of matter (Somerville and Dave, 2015; Wechsler and Tinker, 2018; Vogelsberger et al., 2020). Techniques for modeling the co-evolution of galaxies and dark matter range from simple, non-parametric approaches to full-physics magnetohydrodynamic simulations which require \(>10^{8}\) CPU hours of computation (e.g., Vale and Ostriker, 2004; Pillepich et al., 2018). Detailed simulations contribute important insights into galaxy formation, but due to their complexity and heavy computational costs, they are hard to analyze and cannot be performed for cosmologically significant volumes. Machine learning (ML) is a natural option for making progress on both of these problems.
We present an equivariant Graph Neural Network (GNN), which takes as its input a graph composed of halos linked on a linking scale of 5 Mpc, and predicts baryonic properties. The GNN incorporates the effects of a galaxy's environment, thereby improving the prediction of its baryonic properties compared to traditional methods. We are also able to train a network on the Illustris TNG300-1 box in 10 minutes on a single NVIDIA A10G GPU; inference takes one second. In this work, we focus on estimating stellar mass from a catalog of subhalo positions, velocities, \(M_{\rm halo}\), and \(V_{\rm max}\).
## 2 Related work
The connection between galaxies and their dark matter halos has been characterized via abundance matching or halo occupation distribution (HOD) models of central halos (Berlind and Weinberg, 2002; Wechsler et al., 2002), conditional luminosity or mass functions (Yang et al., 2003; Moster et al., 2010), subhalo abundance matching (Kravtsov et al., 2004; Vale and Ostriker, 2004; Conroy et al., 2006), and empirical models of the galaxy-halo connection (e.g., Reddick et al., 2013; Behroozi et al., 2019). Several works have also attempted to perform abundance matching or paint baryons (i.e., stars) onto dark matter maps by using classical machine learning algorithms (e.g., Kamdar et al., 2016; Agarwal et al., 2018; Calderon and Berlind, 2019) and/or neural networks (e.g., Zhang et al., 2019; Moster et al., 2021; Mohammad et al., 2022).
In general, these previous methods treat halo/galaxy systems as unrelated entities with no formation history. To rectify this, Villanueva-Domingo et al. (2022) construct mathematical graphs to represent group halos, and train a GNN to learn the central halo mass, which was later applied to estimate the halo masses of local Group galaxies (Villanueva-Domingo et al., 2021). GNNs have also been successfully used to model the dependence of galaxy properties on merger history (e.g., Jespersen et al., 2022; Tang and Ting, 2022), and generate synthetic galaxy catalogs (Jagvaral et al., 2022).
In cosmology, several works have already demonstrated the representational power of GNNs, and have used it for simulation-based inference (likelihood-free inference).
Villanueva-Domingo & Villaescusa-Navarro (2022) employ GNNs to infer the cosmological parameters \(\Omega_{m}\) and \(\sigma_{8}\), using \(3d\) galaxy positions and stellar properties from the CAMELS simulation suite (Villaescusa-Navarro et al., 2021). Makinen et al. (2022) show that GNNs can optimally extract and compress catalog data for cosmological parameter inference. Shao et al. (2023) and de Santi et al. (2023) train GNNs to infer cosmological parameters from dark matter-only simulations, and then validate their robustness on other \(N\)-body and hydrodynamic simulations.
## 3 Cosmic graphs
### Simulation data
We use SUBFIND \(z=0\) subhalo catalogs (Springel et al., 2001) derived from the Illustris TNG300-1 hydrodynamic simulation (Nelson et al., 2019; Pillepich et al., 2019). We split the full cosmological box into \(6^{3}=216\) subvolumes in order to fit into 16 GB of memory, such that each subvolume is about \((50~{}\mathrm{Mpc})^{3}\). For consistency with the TNG simulations, we adopt the Planck Collaboration et al. (2016) cosmology and set \(H_{0}=67.74~{}\mathrm{km~{}s^{-1}~{}Mpc^{-1}}\).
We select unflagged subhalos that have more than 50 star particles, \(\log(M_{\star}/M_{\odot})>9\), and \(\log(M_{\mathrm{halo}}/M_{\odot})>10\). Due to cosmic variance, some subvolumes only have a few hundred subhalos, while others have thousands. In Figure 1, we show an example of a typical subvolume.
### Equivariant graph neural networks
We construct a mathematical graph for each TNG300 subvolume, such as the one depicted in Figure 1. We designate \(\mathcal{V}_{i}=(\mathbf{x}_{i},\mathbf{v}_{i},M_{\mathrm{halo},i},V_{\mathrm{max},i})\) as the eight node features. Subhalos within a linking length of \(L=5\) Mpc are connected with edges. Subvolumes are padded by \(2.5\) Mpc on each side, such that subvolumes do not share connections that would be relevant for the linking length. We allow nodes to be connected to themselves (i.e., self-loops). On each edge \(\mathcal{E}_{ij}\), we compute three features: the Euclidean distance \(d_{ij}\equiv||\mathbf{x}_{i}-\mathbf{x}_{j}||\), the inner product between unit vectors \(\mathbf{e}_{i}\cdot\mathbf{e}_{j}\), and the inner product between unit vectors \(\mathbf{e}_{i}\cdot\mathbf{e}_{i-j}\), where unit vectors \(\mathbf{e}_{i}\equiv(\mathbf{x}_{i}-\bar{\mathbf{x}})/||\mathbf{x}_{i}-\bar{\mathbf{x}}||\) are defined using positions \(\mathbf{x}_{i}\) relative to the centroid of the point cloud distribution \(\bar{\mathbf{x}}\), and \(\mathbf{e}_{i-j}\) is the unit vector in the direction of \(\mathbf{x}_{i}-\mathbf{x}_{j}\).
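To make the construction concrete, the following is a minimal sketch (not the authors' released code) of linking subhalos and computing the three edge features with numpy and scipy; all function and variable names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_cosmic_graph(pos, link_length=5.0):
    """Link subhalos within `link_length` (Mpc) and compute the three edge features."""
    tree = cKDTree(pos)
    pairs = tree.query_pairs(r=link_length, output_type="ndarray")  # (n_pairs, 2)
    n = len(pos)
    # Store both directions of each undirected edge, plus self-loops.
    src = np.concatenate([pairs[:, 0], pairs[:, 1], np.arange(n)])
    dst = np.concatenate([pairs[:, 1], pairs[:, 0], np.arange(n)])

    # Unit vectors relative to the centroid of the point cloud.
    e = pos - pos.mean(axis=0)
    e /= np.maximum(np.linalg.norm(e, axis=1, keepdims=True), 1e-12)

    diff = pos[src] - pos[dst]
    d = np.linalg.norm(diff, axis=1)  # Euclidean distance |x_i - x_j|
    e_ij = np.divide(diff, d[:, None], out=np.zeros_like(diff), where=d[:, None] > 0)

    edge_attr = np.stack([
        d,                                      # d_ij
        np.einsum("ij,ij->i", e[src], e[dst]),  # e_i . e_j
        np.einsum("ij,ij->i", e[src], e_ij),    # e_i . e_{i-j}
    ], axis=1)
    return np.stack([src, dst]), edge_attr
```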
We use a message-passing GNN based on interaction networks (Battaglia et al., 2016, 2018), similar to the model used by Villanueva-Domingo et al. (2022). By design, the GNN is equivariant to permutations and invariant under the \(E(3)\) group action, i.e., invariant to rotations, reflections, and translations. For more details about equivariant GNNs, see the appendices of Garcia Satorras et al. (2021) and Sections 3.1 and 3.2 of Villanueva-Domingo & Villaescusa-Navarro (2022). We aggregate layer inputs at each node by max pooling over information from neighboring nodes.1 Our GNN has one set of fully connected layers with 256 latent channels and 128 hidden channels. We predict two quantities for each node, which correspond to the logarithmic stellar mass \(y_{i}\equiv\log(M_{\star,i}/M_{\odot})\) and the logarithmic variance, \(\log\Sigma_{i}\) (i.e., the logarithm of the squared uncertainty on stellar mass).
Footnote 1: We do not find significant improvements by using a concatenation of sum, max, mean, and variance aggregations, or by using learnable aggregation functions.
### Optimization
Our loss function is composed of two terms: the mean squared error on the logarithmic stellar mass \(||\hat{\mathbf{y}}-\mathbf{y}||^{2}\), and the squared difference between the predicted and measured variance \(||\hat{\mathbf{\Sigma}}-(\hat{\mathbf{y}}-\mathbf{y})^{2}||^{2}\). The latter term ensures that the variance is appropriately estimated (see Moment Networks, described in Section 2 of Jeffrey & Wandelt, 2020). We stabilize training by taking the logarithm of each loss term before summing them. We monitor the loss as well as the root mean squared error (RMSE) on \(\log(M_{\star}/M_{\odot})\).
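A hedged sketch of this two-term loss in PyTorch, assuming the network outputs the predicted \(\log(M_{\star}/M_{\odot})\) and \(\log\Sigma\) per node, could read:

```python
import torch

def moment_loss(y_pred, log_var_pred, y_true):
    """Two-term loss: MSE on log stellar mass plus a variance-matching term."""
    sq_err = (y_pred - y_true) ** 2
    mse = sq_err.mean()
    var_term = ((log_var_pred.exp() - sq_err) ** 2).mean()
    # Taking the log of each term before summing stabilizes training.
    return torch.log(mse) + torch.log(var_term)
```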
We perform \(k=6\)-fold cross-validation. For each fold, we train on 180 subvolumes and validate on 36 subvolumes, such that the validation set forms a \(\sim 50\times 300\times 300\) Mpc\({}^{3}\) subbox. We augment the training data set by adding random noise, sampled from a normal distribution with \(10^{-5}\) times the standard deviation, to each node variable. Based on a preliminary hyperparameter search, we implement a simple optimization schedule over a total of 1000 epochs using the AdamW optimizer (Kingma & Ba, 2014; Loshchilov & Hutter, 2017) and a batch size of 36. We begin with a learning rate of \(10^{-2}\) and weight decay of \(10^{-4}\), decrease both by a factor of \(5\) at 500 epochs, and decrease both by a factor of \(5\) again at 750 epochs. We inspect the training and validation losses to ensure that the optimization converges and does not overfit the training data.

Figure 1: Cosmic graph of a TNG300 subvolume spanning approximately \(45\) Mpc. Subhalos live on nodes and are colored by their logarithmic subhalo mass. Edges are formed between pairs of subhalos separated by less than the linking length of 5 Mpc.
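Translated into code, the optimization schedule described above might look like the following sketch, where `model` and `train_one_epoch` are hypothetical placeholders for the GNN and its per-epoch training step:

```python
import torch

# `model` and `train_one_epoch` are hypothetical placeholders.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-2, weight_decay=1e-4)
for epoch in range(1000):
    if epoch in (500, 750):
        # Decrease both the learning rate and the weight decay by a factor of 5.
        for group in optimizer.param_groups:
            group["lr"] /= 5
            group["weight_decay"] /= 5
    train_one_epoch(model, optimizer)
```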
## 4 Results
Overall, we find that the GNN can infer the stellar mass from subhalo properties with remarkable accuracy. We recover the galaxy stellar mass to within RMSE \(=0.129\) dex of its simulated value by using a GNN. The predictions are largely unbiased as a function of mass.
### Comparisons against baseline models
In Figure 2, we compare the performance of different models trained and cross-validated on the same TNG300 data set. The panels show, from left to right: (a) a subhalo abundance matching (SHAM) model, (b) a random forest (RF) trained using \(M_{\rm halo}\) as input, (c) a RF trained using \(V_{\rm max}\), (d) a RF trained using both \(M_{\rm halo}\) and \(V_{\rm max}\), and (e) a GNN trained using \(3d\) positions, \(3d\) velocities, \(M_{\rm halo}\), and \(V_{\rm max}\). In Table 1, we list performance metrics for various RF and GNN models, including the RMSE, mean absolute error (MAE), normalized median absolute deviation (NMAD),2 Pearson correlation coefficient (\(\rho\)), coefficient of determination (\(R^{2}\)), bias, and outlier fraction (\(>3\times\) NMAD).
Footnote 2: We define NMAD(\(\mathbf{x}\)) \(\equiv\)\(k\cdot{\rm median}(|x-{\rm median}(\mathbf{x})|)\), where \(k\)\(\equiv\)\(1.4826\) ensures that the NMAD and standard deviation are equal for a normally distributed \(\mathbf{x}\).
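For reference, the following sketch computes several of these metrics from the residuals, using the NMAD definition in the footnote; the exact conventions of the paper's evaluation code are an assumption here:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    resid = y_pred - y_true
    # NMAD(x) = 1.4826 * median(|x - median(x)|), applied to the residuals.
    nmad = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    return {
        "RMSE": np.sqrt(np.mean(resid ** 2)),
        "MAE": np.mean(np.abs(resid)),
        "NMAD": nmad,
        "bias": np.mean(resid),
        "outlier_fraction": np.mean(np.abs(resid) > 3 * nmad),
    }
```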
The SHAM model constructs separate monotonic relationships between \(M_{\rm halo}\) or \(V_{\rm max}\) and \(M_{\star}\) for centrals and satellites. Another difference between the SHAM model and the other approaches considered here is the former's explicit treatment of subhalo centrality. In order to facilitate an apples-to-apples comparison, we also train an abundance matching (AM) model that does not distinguish between satellites and centrals; however, the AM model performs considerably worse than its SHAM counterpart. We note that the AM and SHAM models are trained and evaluated on the same data set, so their performance metrics may be overinflated.
We also train several RF models, which serve as reasonable proxies for AM or conditional luminosity function models (Calderon & Berlind, 2019). By comparing panels (b) and (c), we observe that \(V_{\rm max}\) is more physically connected to \(M_{\star}\) than \(M_{\rm halo}\), in agreement with previous findings (i.e., Conroy et al. 2006; Reddick et al. 2013; we find this to be true for the RF, AM, and SHAM models). A RF trained on both \(M_{\rm halo}\) and \(V_{\rm max}\) provides an even better reconstruction (RMSE = 0.148 dex).
Ultimately, we find that the GNN strongly outperforms all baseline models. While the GNN does not distinguish between centrals and satellites, it may be able to learn whether a given subhalo is a central based on surrounding subhalo properties (see Section 5.2).
### Centrals versus satellites
Satellite dark matter halos are preferentially stripped relative to stars in a host halo's tidal field (Smith et al., 2016). In Appendix A, we show the stellar mass-halo mass relation for satellite and central galaxies in TNG300 (Figure 3). Indeed, we observe that satellite galaxies exhibit significantly more dispersion than centrals in the \(M_{\star}\)-\(M_{\rm halo}\) relation. Our \(3d\) GNN is also worse at predicting \(\log(M_{\star}/M_{\odot})\) for satellites than for centrals (see bottom two rows of Table 1), but this is due to the inherently larger scatter in the satellite-halo relation. We find that there is an overall negative bias for satellites and a positive bias for centrals, because the GNN must learn separate offset relations for both centrals and satellites.
Figure 2: Predicted stellar mass versus true stellar mass for the TNG300 data. From left to right, we show results for a subhalo abundance matching (SHAM) model, three random forest (RF) models, and our \(3d\) GNN trained using \(\mathbf{x}\), \(\mathbf{v}\), \(M_{\rm halo}\), and \(V_{\rm max}\). We also report the scatter in the reconstructed \(\log(M_{\star}/M_{\odot})\).
### Cosmic substructure in projection
We also construct cosmic graphs in projection, i.e., using projected coordinates \(x_{1}\) and \(x_{2}\) and radial velocity \(v_{3}\) instead of the full phase space information (see Appendix B). This \(2d\) GNN model achieves RMSE \(=0.135\) dex scatter, still outperforming the best RF estimator (see Table 1). Because the \(2d\) GNN encodes projected large scale structure information, it outperforms the RF models, which can only learn isolated subhalo information.
## 5 Discussion
We have presented a novel method for populating dark matter subhalos with galaxy stellar masses. Mathematical graphs combine individual halo properties and environmental parameters in an equivariant representation, resulting in robust predictions for both central and satellite galaxies. As shown in Table 1 and Figure 2, the cosmic graphs outperform random forests trained on \(V_{\rm max}\) and \(M_{\rm halo}\). For galaxies with \(\log(M_{\star}/M_{\odot})\geq 9\) and \(\log(M_{\rm halo}/M_{\odot})\geq 10\), we recover the logarithmic stellar mass to within a root mean squared error (RMSE) of \(0.129\) dex.
### Inductive biases of GNNs
We note that previous works have employed convolutional neural networks (CNNs) for painting stars onto dark matter maps (Zhang et al., 2019; Mohammad et al., 2022). Unlike abundance matching models and RFs, CNNs are able to represent local spatial information. However, CNNs and GNNs have different inductive biases: CNNs are well-suited for representing fields discretized onto a Cartesian grid, while GNNs are well-suited for representing objects and relationships between them. Galaxies have small sizes (\(\sim\)kpc) relative to their typical separations (\(\sim\)Mpc), and they interact with each other (and their surrounding media) through multiple physical mechanisms (e.g., gravitational attraction, tides, ram pressure, etc). Therefore, cosmic structures naturally conform to a graphical representation, motivating our use of GNNs in this work.
### Galaxy environments
We note that a GNN with no edges except self-loops would essentially model the galaxy-halo connection in isolation; all environmental information is contained and passed along the edges. However, if we remove self-loops from the GNN, then the GNN is still able to infer \(\log(M_{\star}/M_{\odot})\) to within RMSE \(\sim\) 0.145 dex. A GNN without self-loops must estimate galaxy stellar mass _solely_ from neighboring halo information, which demonstrates that galaxy environments are informative for modeling the galaxy-halo connection.
We find that the GNN with max-pooling aggregation function achieves 0.001 dex lower RMSE than a GNN with sum-pooling. This result suggests that the GNN selects the largest value for some combination of \(M_{\rm halo}\), \(V_{\rm max}\), and distance to neighboring subhalos in order to best make predictions. We can speculatively interpret this as evidence that the largest and most nearby subhalo is most informative to a GNN. The largest subhalo might dominate environmental effects (e.g. tides and ram pressure) and control a given subhalo's stellar mass. Meanwhile, the summed information should capture _all_ of the forces, and we expect it to be more robust or transferable across domains. This interpretation
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
**Model** & **RMSE** & **MAE** & **NMAD** & **Pearson \(\rho\)** & \(R^{2}\) & **Bias** & **Outlier fraction** \\ & (dex) & (dex) & (dex) & & & (\(10^{-3}\) dex) & (\%) \\ \hline AM - \(M_{\rm halo}\) & 0.424 & 0.327 & 0.323 & 0.736 & 0.472 & 0.1 & 3.73 \\ AM - \(V_{\rm max}\) & 0.173 & 0.150 & 0.132 & 0.956 & 0.912 & **0.0** & 1.91 \\ SHAM - \(M_{\rm halo}\) & 0.332 & 0.231 & 0.235 & 0.838 & 0.677 & 0.1 & 6.20 \\ SHAM - \(V_{\rm max}\) & 0.151 & 0.133 & 0.115 & 0.966 & 0.933 & **0.0** & 1.75 \\ RF - \(M_{\rm halo}\) & 0.366 & 0.308 & 0.277 & 0.780 & 0.606 & \(\mathbf{-0.0\pm 0.1}\) & \(2.53\pm 0.01\) \\ RF - \(V_{\rm max}\) & 0.197 & 0.177 & 0.152 & 0.942 & 0.886 & \(-0.3\pm 0.0\) & \(1.44\pm 0.01\) \\ RF - \(M_{\rm halo}+V_{\rm max}\) & 0.148 & 0.135 & 0.114 & 0.967 & 0.936 & \(0.3\pm 0.0\) & \(1.31\pm 0.00\) \\ GNN (\(2d\) projection) & 0.135 & 0.131 & 0.106 & 0.973 & 0.946 & \(-3.9\pm 2.2\) & \(0.68\pm 0.01\) \\
**GNN (\(3d\))** & **0.129** & **0.125** & **0.102** & **0.975** & **0.951** & \(0.8\pm 0.6\) & \(\mathbf{0.68\pm 0.00}\) \\ \hline GNN (\(3d\)) - centrals & 0.123 & 0.119 & 0.097 & 0.979 & 0.959 & \(4.6\pm 0.7\) & \(0.67\pm 0.01\) \\ GNN (\(3d\)) - satellites & 0.138 & 0.136 & 0.109 & 0.968 & 0.936 & \(-5.0\pm 0.6\) & \(0.58\pm 0.01\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Cross-validation metrics for the AM, SHAM, RF, and GNN models discussed in the text. The GNN trained using positions, velocities, \(M_{\rm halo}\), and \(V_{\rm max}\) achieves the best metrics (shown in bold) in nearly every category. The last two rows report metrics for the \(3d\) GNN model, except that only central and satellite subhalos are selected from the cross-validation set. We note that the AM/SHAM models are trained and evaluated on the same data set. For RF and GNN models, we repeat the entire training and cross-validation experiment three times; the scatter is too small to be shown in the displayed significant figures for all columns except the bias and outlier fraction.
requires additional testing and an exhaustive hyperparameter search over GNN architectures and optimization procedures, which we aim to do in a follow-up work.3
Footnote 3: The linking length is a particularly important hyperparameter. In our preliminary tests, we have found 5 Mpc to give good results.
### Applications to observations
The strong performance of \(2d\) GNNs (§4.3) is promising for facilitating comparisons to observations beyond the Local Group, where we can only reliably measure projected positions and line-of-sight velocities rather than full phase space information. Our method can be used to quickly estimate galaxy properties of constrained \(N\)-body (McAlpine et al., 2022) and Gpc-scale \(N\)-body simulated volumes (Garrison et al., 2018; Maksimova et al., 2021) for comparison with wide-area galaxy surveys in the low-redshift Universe (Ruiz-Macias et al., 2021; Carlsten et al., 2022; Darragh-Ford et al., 2022; Driver et al., 2022; Wu et al., 2022).
### Limitations and caveats
While we have shown that the GNN outperforms other methods, this demonstration does not definitively prove that GNNs are exploiting environmental information. Indeed, we have used a linking length of 5 Mpc, but this hyperparameter may be suboptimal and should be tuned. It is also possible that intrinsic scatter imposes a RMSE floor (i.e., due to the "butterfly effect" in cosmological simulations Genel et al., 2019), although GNN results using merger trees have shown that galaxy stellar mass can be recovered to even lower scatter (Jespersen et al., 2022). Finally, it may be that merger history is more important than environmental information, and that the clustering information learned by a GNN only incrementally improves performance relative to other approaches.
Our results will depend on choice of halo finder, i.e. if we were to use an alternative to the SUBFIND algorithm (e.g. ROCKSTAR; Behroozi et al. 2013). We have not tested our results using different halo finding tools, and it is unclear whether a GNN trained using one halo finder catalog will properly generalize to another catalog produced by a different halo finder. We also note that our results, while promising, must be tested on dark matter only simulations with halo catalogs matched to the hydrodynamic simulation catalogs before we can rely on GNNs to paint galaxies onto dark matter subhalos.
Additionally, domain adaptation will likely be needed to ensure simulated results can transfer to other simulations (e.g., while varying cosmological parameters; Villaescusa-Navarro et al. 2021) or to observations (e.g., Ciprijanovic et al., 2023). As a preliminary test, we repeat our experiment by training on TNG300 and validating on TNG50 data, and vice versa; in both cases the results are poor (\(>0.2\) dex). However, by training on a subset of both simulations, we can recover \(\log(M_{\star}/M_{\odot})\) to \(\sim 0.13\) dex for TNG300 and \(\sim 0.14\) dex for TNG50 (Nelson et al., 2019). This test suggests that cross-domain applications, such as transferring GNN results from simulations to observations, will necessitate some form of domain adaptation.
## Software and Data
Our code is completely public on Github: [https://github.com/jwupphysics/halo-gnns/tree/halos-to-stars](https://github.com/jwupphysics/halo-gnns/tree/halos-to-stars). We have used the following software and tools: numpy (Harris et al., 2020), matplotlib(Hunter, 2007), pandas(Wes McKinney, 2010), pytorch(Paszke et al., 2019), and pytorch-geometric (Fey and Lenssen, 2019).
We only use public simulation data from Illustris, which can be downloaded from [https://www.tng-project.org/data/](https://www.tng-project.org/data/).
## Acknowledgments
JFW and CKJ thank Peter Behroozi, Haley Bowden, Francisco Villaescusa-Navarro, Tjitske Starkenburg, and Risa Wechsler for valuable discussions that sharpened this work. We also thank the two anonymous reviewers who provided excellent comments and suggestions that improved this manuscript. This research has made use of NASA's Astrophysics Data System Bibliographic Services. The authors are grateful to the Kavli Institute for Theoretical Physics "Building a Physical Understanding of Galaxy Evolution with Data-driven Astronomy" program, where this work began. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
|
2308.04943 | Differentially Private Graph Neural Network with Importance-Grained
Noise Adaption | Graph Neural Networks (GNNs) with differential privacy have been proposed to
preserve graph privacy when nodes represent personal and sensitive information.
However, the existing methods ignore that nodes with different importance may
yield diverse privacy demands, which may lead to over-protect some nodes and
decrease model utility. In this paper, we study the problem of
importance-grained privacy, where nodes contain personal data that need to be
kept private but are critical for training a GNN. We propose NAP-GNN, a
node-importance-grained privacy-preserving GNN algorithm with privacy
guarantees based on adaptive differential privacy to safeguard node
information. First, we propose a Topology-based Node Importance Estimation
(TNIE) method to infer unknown node importance with neighborhood and centrality
awareness. Second, an adaptive private aggregation method is proposed to
perturb neighborhood aggregation from node-importance-grain. Third, we propose
to privately train a graph learning algorithm on perturbed aggregations in
adaptive residual connection mode over multi-layers convolution for node-wise
tasks. Theoretically analysis shows that NAP-GNN satisfies privacy guarantees.
Empirical experiments over real-world graph datasets show that NAP-GNN achieves
a better trade-off between privacy and accuracy. | Yuxin Qi, Xi Lin, Jun Wu | 2023-08-09T13:18:41Z | http://arxiv.org/abs/2308.04943v1 | # Differentially Private Graph Neural Network with Importance-Grained Noise Adaption
###### Abstract
Graph Neural Networks (GNNs) with differential privacy have been proposed to preserve graph privacy when nodes represent personal and sensitive information. However, the existing methods ignore that nodes with different importance may yield diverse privacy demands, which may lead to over-protecting some nodes and decreasing model utility. In this paper, we study the problem of importance-grained privacy, where nodes contain personal data that need to be kept private but are critical for training a GNN. We propose NAP-GNN, a node-importance-grained privacy-preserving GNN algorithm with privacy guarantees based on adaptive differential privacy to safeguard node information. First, we propose a Topology-based Node Importance Estimation (TNIE) method to infer unknown node importance with neighborhood and centrality awareness. Second, an adaptive private aggregation method is proposed to perturb neighborhood aggregation at the node-importance grain. Third, we propose to privately train a graph learning algorithm on perturbed aggregations in adaptive residual connection mode over multi-layer convolutions for node-wise tasks. Theoretical analysis shows that NAP-GNN satisfies privacy guarantees. Empirical experiments over real-world graph datasets show that NAP-GNN achieves a better trade-off between privacy and accuracy.
## 1 Introduction
In recent years, graph neural networks have achieved outstanding performance in several domains, such as social analysis [14], financial anomaly detection [3], time series analysis [15], and molecule synthesis [1]. Through aggregating the feature information of neighboring nodes and fully mining and fusing the topological associations in graph data, graph neural networks yield state-of-art results in tasks such as link prediction [17], node classification [18], and sub-graph classification [14]. However, graph data in the real world usually contain private information. For example, in social network graphs, the edges between nodes indicate the existence of social attributes, such as being friends. Node features also carry sensitive information, such as the average online time of users per week and the average number of comments per week. To satisfy the privacy preservation demands of nodes, privacy-preserving GNN models have been proposed in [1]. The Locally Private Graph Neural Network (LPGNN) presented in [13] realizes the GNN model framework under local differential privacy guarantee. The features and labels of nodes can be protected. An edge-level differential privacy graph neural network (DP GNN) algorithm was proposed in [20] by perturbing the adjacency matrix of the graph. Some other researches attempt to address the privacy problem in GNN through federated [14] and split learning [16].
Although the existing DP GNN models enable privacy data protection, it does not consider the diverse privacy demand of nodes in the graph. Take node degree, for example. Nodes with different degree may yield different privacy protection needs. Most realistic graph data meet the power-law distribution [10]. The degree of most nodes is small, and few nodes have a relatively large degree. In the social media graph, suppose the nodes represent users, and the edges represent the following behaviors among users. The indegree of the nodes indicates how many followers they have, and the out-degree indicates how many other users they follow. Users with a larger number of followers have stronger social effects, and other users will notice their comments on social media and their friends list. Nodes with higher degrees require a more vital level of privacy protection.
However, existing works treat all nodes' privacy demands as the same and do not provide corresponding privacy guarantees for nodes with different privacy needs, which would lead to over-protecting some nodes, injecting over-needed noise, and decreasing model utility. Another drawback of existing methods protecting GNN through DP-SGD where noise is added on the parameter gradient is that the magnitude of injected noise and the privacy budget is accumulated during the training phase in proportion to epoch number.
To solve these problems, we propose a Node-Importance-Grained Adaptive Differential Private Graph Neural Network (NAP-GNN), which adds different extents of noise based on node importance and independent of the training epoch.
Our goal is to develop a fine-grained and adaptive differential privacy algorithm in GNN that flexibly distributes privacy budget according to known node importance and uses topology information in graphs. In this paper, we calculate the degree of a node as the sum of in-degree and out-degree. A supervised node importance estimation method, TNIE, is proposed to capture relations node neighbors and flexibly make centrality awareness adjustments. Then depending on the calculated node importance, we propose an importance-awareness fine-grained permutation method to realize node diverse privacy-preserving demand. An adaptive message passing method with residual connection is applied to improve model utility. We show theoretically that NAP-GNN satisfies the requirement of differential privacy. Experiments on real graph datasets show that the adaptive privacy budget allocation mechanism and adaptive residual aggregation of NAP-GNN can improve the utility of the model and outperforms existing approaches under a given privacy budget. The main contributions of this work are as follows.
* We study a novel problem of node-importance-related privacy concerns in GNNs. To the best of our knowledge, this is the first work to analyze this issue.
* We propose a new algorithm named Node-Importance-Grained Adaptive Differential Private Graph Neural Network (NAP-GNN), which tackles different privacy requirements among nodes by assigning global budget.
* We present Topology-based Node Importance Estimation (TNIE), a supervised estimation method, to address unknown nodes' importance estimation problem considering neighborhood and centrality awareness from the topology perspective.
* The experimental results on four benchmark graph datasets demonstrate the availability and effectiveness of our proposed NAP-GNN method.
## 2 Problem Formulation
In this section, we first revisit the definition of differential privacy proposed by [14] and graph neural work. Then we define the problem of learning a GNN with node data privacy concerns related to node importance.
### Graph Neural Network
Let \(\mathcal{G}=\{V,E,A,X,Y\}\) be an unweighted undirected graph, where \(V\) and \(E\) denote the nodes set and edges set. The adjacency matrix \(A\in\{0,1\}^{N\times N}\) represents the link among edges, \(|N|\) denotes the node number. For \(\forall v_{i},v_{j}\in V\), if there exists an edge between \(v_{i},v_{j}\), then \(A_{ij}=1\), for else \(A_{ij}=0\). Node feature of \(v\in V\) is a \(d\)-dimension vector, and the N \(\times\)\(d\) matrix \(X\) represents the stack of all nodes' feature, where \(X_{v}\in X\) denotes the feature of \(v\). \(Y\in\{0,1\}^{N\times M}\) represents the label of nodes, and M is the class number.
The typical message-passing-based GNN consists of two phases: message aggregation and updating. In the message aggregation phase of \(i\)-th layer, every node share and receive neighbors embedding of the former \(i\)-1-th layer and output a new embedding after applying a transformation, which can be defined as:
\[E_{v}^{i}=f_{agg}(\{h_{u}^{i-1},u\in\mathcal{N}(v)\}). \tag{1}\]
\(\mathcal{N}(v)\) denotes the adjacent node set of node \(v\), and \(h_{u}^{i-1}\) represents the embedding output of node \(u\) at \(i\)-1-th layer. \(f_{agg}\) is the aggregation linear function like SUM, MEAN, MAX etc. \(E_{v}^{i}\) is the aggregate output of \(i\)-th layer after the aggregation transformation of all adjacent nodes. Update transformation is employed on the \(E_{v}^{i}\), which is shown as:
\[h_{v}^{i}=f_{upd}(E_{v}^{i},h_{v}^{i-1};\theta_{i}). \tag{2}\]
\(f_{upd}\) denotes the learnable function that takes the aggregate vector as input and outputs the embedding of node \(v\) at \(i\)-th layer. \(f_{upd}\) is determined by parameter \(\theta_{i}\). The input \(h_{v}^{0}\) of GNN's first layer is \(X_{v}\), and the last layer generates embedding vectors \(h_{v}^{L}\) for \(\forall v\in V\), which can be used in downstream tasks, \(L\) represents the total layer. In this paper, we focus on the node classification task. A softmax layer is employed on the final embedding vectors \(h_{v}^{L}\) to get the class probability \(C_{v}\) of node \(v\).
### Problem Definition
The goal of this paper is to preserve the adaptive privacy of the graph nodes from importance-grain in the training step of GNN through differential privacy. Different from previous work in [1], we aim to propose an epoch-independent method considering node importance from topology without losing much utility of GNN. Refer to the definition in [11], we first define the notion of Node-level adjacent graph in this paper as follows:
**Definition 1** (Node-level adjacent graph).: _Graphs \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) are node-level adjacent if at most one node's feature vector is different. Without loss of generality, let \(\mathcal{G}\) can be obtained by altering a node in \(\mathcal{G}^{\prime}\)._
Then we define the \(\epsilon\)-Node-level differential privacy as:
**Definition 2** (\(\epsilon\)-Node-level differential privacy).: _Let \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) be two node-level adjacent graph datasets, given \(\epsilon>0\), the random algorithm \(\mathcal{A}\) is \(\epsilon\)-Node-level differential privacy if for any set of outputs \(S\in\text{Range}(\mathcal{A})\), satisfies:_
\[\text{Pr}[\mathcal{A}(\mathcal{G})\in S]\leq e^{\epsilon}\text{Pr}[\mathcal{A }(\mathcal{G}^{\prime})\in S]. \tag{3}\]
Based on Definition 2, the global graph sensitivity can be defined as:
**Definition 3** (Global graph sensitivity).: _The global graph sensitivity of function \(f\) on two node-level adjacent graphs \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) is:_
\[\Delta_{g\mathcal{G}}=\text{max}||f(\mathcal{G})-f(\mathcal{G}^{\prime})||_{1} \tag{4}\]
## 3 Proposed Method:NAP-GNN
In this section, we present our proposed **N**ode-Importance-Grained **A**daptive **I**pivate **G**raph **N**eural **N**etwork (NAP-GNN). First, the overall flow of NAP-GNN is provided. Then we show the algorithm design and key components of NAP-GNN in detail.
### Overall Flow of NAP-GNN
Figure 1 shows the overall flow of NAP-GNN, which consists of three components: _Topology-based Node Importance Estimation Module_, _Node-Importance-Grained Permutation Module_ and _Adaptive Residual Multi-Layer Aggregation Module_. The key mechanism of NAP-GNN is to preserve the graph data privacy considering node importance, which is achieved in a fine-grain permutation module. We use the Laplacian mechanism to add noise on the aggregation vector in a customized way, motivated by the fact that altering a feature vector can be viewed as adding or mincing extra noise.
The propagation layer is essential to graph neural network utility. Compared to fully propagating all neighbors embedding, we propose to use an adaptive conversion between noised aggregation embedding and residual connection, letting different nodes employ customized propagation layers.
### Topology-based Node Importance Estimation
To estimate topology-based node importance in a given graph, we propose a TNIE method considering the _Neighborhood Awareness_ and _Centrality Awareness_ motivated by [13]. Neighborhood Awareness refers to the idea that the importance of a node is influenced by its neighboring nodes, so the neighboring nodes' importance is a good representation of the current node's importance. Centrality Awareness means that nodes with higher centrality are more important than nodes with lower centrality.
The effect of neighboring nodes includes two aspects: node original feature and connectivity. To model this relationship, TNIE first uses a score computing network to map the original node features to a \(r\)-dimensional space:
\[f_{u}^{0}=ScoreComputing(\mathbf{x_{u}};u\in V), \tag{5}\]
where \(u\) is a node in \(\mathcal{G}\). Score computing network can be any neural network that takes node original feature as input and output encoded feature. In this paper, we use Multi-Layer Perception (MLP). Then through intermediate embedding propagation, TNIE realizes weighted aggregation from node \(u\) and its neighbors for the \(t\)-th layer (\(t=1,2,...T\)):
\[f_{u}^{t}=\sum_{v\in N(u)}\frac{1}{\sqrt{D_{u}}}\frac{1}{\sqrt{D_{v}}}f_{v}^{ t-1}, \tag{6}\]
where \(D_{u}\) denotes degree of \(u\). After \(T\) layers aggregation, \(\forall u\in V\) gets neighborhood awareness-based embedding \(f_{u}^{T}\).
The degree is a common proxy for node centrality. The degree of node has an impact on node importance estimation. Instead of initial degree \(log(D_{u}+\alpha)\) of node \(u\), where \(\alpha\) is a small parameter, TNIE adopts a shifting degree \(\lambda(log(D_{u}+\alpha))+\phi)\) to allow the possible discrepancy between degree and importance rank, where \(\lambda\) and \(\phi\) are learnable parameters. Then the shifting degree is used to adjust the neighborhood awareness-based embedding \(h_{u}^{T}\) of node \(u\) from centrality awareness consideration, which is as follows:
\[s(u)^{*}=\sigma((\lambda(log(D_{u}+\alpha))+\phi)\cdot f_{u}^{T}), \tag{7}\]
where \(\sigma\) is a non-linear activation function, \(s(u)^{*}\) denotes estimated importance score.
To estimate unknown nodes' importance score, TNIE uses mean squared error between the given importance score \(\mathcal{NI}(u)\) and estimated score \(s(u)^{*}\) for node \(u\in V_{t}\subset V\) to train the TNIE model. The loss function is:
\[\frac{1}{|V_{t}|}\sum_{u\in V_{t}}\left(s(u)^{*}-\mathcal{NI}(u)\right)^{2} \tag{8}\]
### Node-Importance-Grained Adaptive Laplace Mechanism
The goal of the Node-Importance-Grained Permutation Module is to privately release nodes' first aggregation embedding using adaptive noise proportion to sensitivity and node importance. Motivated by the fact that perturbing a node \(u\)'s edges in the graph can be seen as changing neighborhood aggregation of \(u\)'s adjacent nodes \(\forall v\in N(u)\), we realize Node-Importance-Grained Permutation Module by adding appropriate noise on the first layer's aggregation function. Specially, we use the sum aggregation function as the first layer, which is equivalent to the multiplication of the adjacent matrix and the input row-normalized feature. The permutation process can be presented as follows:
\[\overline{\mathcal{H}^{0}}(A,V,X)=\{\overline{H_{u}^{0}}\}_{u\in V}\ s.t.\ \overline{H_{u}^{0}}=\sum_{j=1}^{|N|}a_{uj}\mathbf{x_{j}}+Lap(\frac{\Delta \mathcal{H}^{0}}{\epsilon_{u}}), \tag{9}\]
where \(H_{u}^{0}=\sum_{j=1}^{|N|}a_{uj}\mathbf{x_{j}}\) denotes the sum aggregation process of node \(u\), \(a_{uj}\in A\), \(\mathbf{x_{j}}\) is the row-normalized feature of node \(j\), \(Lap(\frac{\Delta\mathcal{H}^{0}}{\epsilon_{u}})\) is the perturbed noise, \(\Delta\mathcal{H}^{0}\) denotes the sensitivity of aggregation function, \(\epsilon_{u}\) is the privacy budget.
Figure 1: Overview of NAP-GNN’s architecture
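To illustrate Eq. (9), the following sketch perturbs the first sum-aggregation layer with per-node Laplace noise, using the sensitivity \(2D_{max}\) established in Lemma 1 below; the per-node budgets `eps` (\(\epsilon_{u}=\epsilon_{A}\beta_{u}\)) are assumed to be given:

```python
import torch

def perturb_first_aggregation(adj, x, eps, d_max):
    """Eq. (9): sum aggregation plus node-wise Laplace noise."""
    sensitivity = 2 * d_max                           # global sensitivity, Lemma 1
    h0 = adj @ x                                      # assumes row-normalized features
    scale = (sensitivity / eps)[:, None].expand_as(h0)
    noise = torch.distributions.Laplace(torch.zeros_like(h0), scale).sample()
    return h0 + noise
```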
**Lemma 1**.: _Let \(\mathcal{G}=\{A,V,X\}\) and \(\mathcal{G}^{\prime}=\{A^{\prime},V^{\prime},X^{\prime}\}\) be two adjacent graph datasets. Then the global graph sensitivity of the first sum aggregation layer satisfies \(\Delta\mathcal{H}^{0}\leq 2D_{max}\), where \(D_{max}\) is the maximum node degree._
Proof.: Assume adjacent graph dataset \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) differs in node \(k\). Then we have:
\[\begin{split}\Delta\mathcal{H}^{0}&=\max||\mathcal{H}^{0}(A,V,X)-\mathcal{H}^{0}(A^{\prime},V^{\prime},X^{\prime})||_{1}\\ &=\max\sum_{u\in V}||\sum_{j=1}^{N}(a_{uj}\textbf{x}_{j}-a^{\prime}_{uj}\textbf{x}^{\prime}_{j})||_{1}\end{split} \tag{10}\]
\(A\) and \(A^{\prime}\) are the two adjacency matrices. Without loss of generality, assume that in \(\mathcal{G}^{\prime}\) node \(k\) is removed from \(\mathcal{G}\). Therefore, for nodes \(i\) and \(j\), we have \(a^{\prime}_{ij}=0\) if \(i=k\) or \(j=k\), and otherwise \(a^{\prime}_{ij}=a_{ij}\). Then we can obtain the following inequality:
\[\Delta\mathcal{H}^{0}=||\sum_{j=1}^{N}a_{kj}\textbf{x}_{j}+\sum_{i=1}^{N}a_{ik}\textbf{x}_{k}||_{1}\leq\sum_{j=1}^{N}a_{kj}+\sum_{i=1}^{N}a_{ik}\leq D_{out_{k}}+D_{in_{k}}\leq 2D_{max}, \tag{11}\]
where the first inequality holds because the features are row-normalized, so \(||\textbf{x}_{j}||_{1}\leq 1\),
which concludes the proof.
Denote the _Node Importance_ of node \(u\) as \(\mathcal{NI}(u)\) and its importance rank as \(\mathcal{R}\mathcal{NI}(u)\). \(\mathcal{C}(u)\) represents the privacy protection level of node \(u\). For nodes \(m\) and \(n\), if \(\mathcal{NI}(m)>\mathcal{NI}(n)\), then \(\mathcal{C}(m)>\mathcal{C}(n)\). Let \(\epsilon_{A}\) denote the total privacy cost of the first aggregation layer; the privacy budget \(\epsilon_{u}\) assigned to node \(u\) is proportional to \(\mathcal{C}(u)\), i.e., \(\epsilon_{u}=\epsilon_{A}\cdot\beta_{u}\), where \(\beta_{u}\) is a weight coefficient that quantifies the impact of node importance on the privacy protection level.
Many real-world graph datasets follow a power-law distribution. When the node degree distribution is extremely imbalanced, the maximum node degree is much higher than that of most nodes, and a Laplace-based noise generation mechanism may yield excessive noise. To tackle this challenge, we propose to leverage the potentially wasted privacy budget arising from the degree difference between adjacent nodes.
For an arbitrary node \(u\in V\), it receives a potential reusable privacy budget from each neighboring node \(k\in N(u)\). The total reusable privacy ratio of \(k\) is \(D_{max}-D_{k}\); for each neighbor of \(k\), the assigned ratio \(r(u,k)\) is
\[r(u,k)=\frac{\mathcal{R}\mathcal{NI}(u)}{\sum_{j\in N(k)}\mathcal{R}\mathcal{NI }(j)}, \tag{12}\]
which is proportional to the node importance rank. Then the weight coefficient of \(u\) is the minimum over all reusable budget ratios:
\[\beta_{u}=\min_{k\in N(u)}\{\frac{\mathcal{R}\mathcal{NI}(u)}{ \sum_{j\in N(k)}\mathcal{R}\mathcal{NI}(j)}(D_{max}-D_{k})+1,\frac{D_{max}}{D_ {k}}\}. \tag{13}\]
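The following sketch computes the weight coefficients of Eqs. (12)-(13) with numpy; it assumes an unweighted dense adjacency matrix and precomputed importance ranks \(\mathcal{R}\mathcal{NI}\):

```python
import numpy as np

def budget_weights(adj, rank_ni):
    deg = adj.sum(1)
    d_max = deg.max()
    beta = np.full(adj.shape[0], np.inf)
    for u in range(adj.shape[0]):
        for k in np.nonzero(adj[u])[0]:
            # r(u, k) from Eq. (12): share proportional to importance rank.
            share = rank_ni[u] / rank_ni[np.nonzero(adj[k])[0]].sum()
            # Eq. (13): minimum over both candidate terms and all neighbors k.
            beta[u] = min(beta[u], share * (d_max - deg[k]) + 1, d_max / deg[k])
    return beta
```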
### Adaptive Residual Multi-Layer Aggregation
The Adaptive Residual Multi-Layer Aggregation Module takes the node-importance-grained perturbed aggregation vectors as input and outputs the embedding vectors, which are generated by aggregating the multi-hop neighbors' embeddings with adaptive residuals. Compared with equally receiving all neighbors' embeddings, the adaptive residual aggregation module allows each node to learn from different neighbors with different weights.
To keep edges and node labels private, we perturb them before the message passing and updating steps. Edge Randomization [20] is selected as the edge preprocessing method because it can preserve the adjacency matrix's sparse structure. To reduce the impact of adding or removing edges on the graph, we propose a degree-preserving Edge Randomization method, in which an unbiased sample method is used at the end with the sampling probability of:
\[p_{u}^{sample}=\frac{2D_{u}}{D_{u}+N-Ns+D_{u}s} \tag{14}\]
for node \(u\), where \(D_{u}\) is the degree of \(u\) before perturbation and \(s\) is a parameter of Edge Randomization satisfying \(s\geq\frac{2}{e^{\epsilon_{B}}+1}\), where \(\epsilon_{B}\) is the privacy budget. The expectation of the sampled node degree \(\mathbb{E}(\overline{D^{\prime}_{u}})\) equals \(\mathbb{E}(D_{u})\) for \(\forall u\in V\). We exploit Randomized Response [17] to encode node \(i\)'s label, since it outperforms other oracles in low dimensions. The transformation distribution is:
\[p(y^{\prime}_{i}|y_{i})=\begin{cases}\frac{e^{\epsilon_{C}}}{e^{ \epsilon_{C}}+M-1},if\ y^{\prime}_{i}=y_{i}\\ \frac{1}{e^{\epsilon_{C}}+M-1},\ otherwise,\end{cases} \tag{15}\]
where \(\epsilon_{C}\) is the privacy budget and \(M\) is the number of classes.
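Both perturbations above can be sketched as follows; the row-wise independent edge sampling is our own simplification of the degree-preserving step in Eq. (14), and the label mechanism follows Eq. (15):

```python
import numpy as np

def degree_preserving_sample(noisy_adj, orig_deg, s, rng=np.random.default_rng()):
    """Keep each noised edge of node u with probability p_u from Eq. (14)."""
    n = noisy_adj.shape[0]
    p = 2 * orig_deg / (orig_deg + n - n * s + orig_deg * s)
    keep = rng.random(noisy_adj.shape) < p[:, None]  # simplification: row-wise sampling
    return noisy_adj * keep

def randomized_response(y, M, eps_c, rng=np.random.default_rng()):
    """Eq. (15): keep the true label w.p. e^eps/(e^eps+M-1), else flip to another class."""
    p_keep = np.exp(eps_c) / (np.exp(eps_c) + M - 1)
    y_out = y.copy()
    for i in np.nonzero(rng.random(len(y)) >= p_keep)[0]:
        y_out[i] = rng.choice([c for c in range(M) if c != y[i]])
    return y_out
```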
For the aggregation process, motivated by [17], node-wise adaptive embedding aggregation and residual connections are applied to balance the smoothness between the perturbed vectors of neighbors and of the node itself. The Adaptive Message Passing (AMP) process includes three steps: feature aggregation, residual weight computation, and linear combination, which can be presented as:
\[\textbf{AMP}(u,k,1):M_{u}^{k-1}=\tilde{A}H_{u}^{k-1}, \tag{16}\]
\[\textbf{AMP}(u,k,2):\gamma_{u}=\max\Big(1-\tau\cdot\frac{1}{||M_{u}^{k-1}-\overline{H_{u}^{0}}||_{2}},0\Big), \tag{17}\]
\[\textbf{AMP}(u,k,3):H_{u}^{k}=(1-\gamma_{u})\overline{H_{u}^{0}}+\gamma_{u}M_{u }^{k-1}. \tag{18}\]
Equation (16) shows the feature aggregation step for node \(u\) at layer \(k\in\{1,2,...,K\}\), where \(\tilde{A}=\tilde{D}^{-\frac{1}{2}}(\overline{A^{\prime}}+I)\tilde{D}^{-\frac{1}{2}}\) is the symmetrically normalized adjacency matrix, \(\overline{A^{\prime}}+I\) is the sampled noised adjacency matrix with self-loops, \(\tilde{D}\) is its degree matrix, and \(H_{u}^{0}=\overline{H_{u}^{0}}\). The residual weight \(\gamma_{u}\) of node \(u\) is computed through equation (17), depending on the deviation between the adaptively perturbed embedding \(\overline{H_{u}^{0}}\) and the feature aggregation of neighbors \(M_{u}^{k-1}\); \(\tau\) is a parameter that controls the smoothing. Finally, the aggregated embedding of \(u\) at layer \(k\) is the linear combination of \(\overline{H_{u}^{0}}\) and \(M_{u}^{k-1}\) weighted by \(\gamma_{u}\) in (18).
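One AMP step of Eqs. (16)-(18) can be sketched as below; iterating it for \(k=1,\dots,K\) with the embedding initialized to \(\overline{H^{0}}\) reproduces the multi-layer propagation:

```python
import torch

def amp_step(h_prev, h0_noised, norm_adj, tau):
    m = norm_adj @ h_prev                                               # Eq. (16)
    dist = torch.norm(m - h0_noised, dim=1, keepdim=True).clamp(min=1e-12)
    gamma = torch.clamp(1 - tau / dist, min=0.0)                        # Eq. (17)
    return (1 - gamma) * h0_noised + gamma * m                          # Eq. (18)
```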
## 4 Theoretical Analysis
In this section, we give the theoretical analysis of NAP-GNN. The complete process of NAP-GNN is shown in Algorithm 1.
**Theorem 1**.: _Algorithm 1 preserves \(\epsilon_{A}\)-DP in the computation of first differential private aggregation layer \(\mathcal{H}^{0}\)._
Proof.: All nodes' aggregation embeddings in \(\mathcal{G}\) are perturbed with Laplace noise; hence the output density satisfies:
\[Pr(\overline{\mathcal{H}^{0}}(A,V,X))\propto\prod_{i=1}^{N}exp\Big(-\frac{\epsilon_{i}}{\Delta\mathcal{H}^{0}}||\sum_{j=1}^{N}a_{ij}\textbf{x}_{j}-\overline{H_{i}^{0}}||_{1}\Big). \tag{19}\]
\(\Delta\mathcal{H}^{0}\) is set to \(2D_{max}\) in Algorithm 1. Assume adjacent graph dataset \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) differs in node \(k\). We have:
\[\begin{split}&\frac{Pr(\overline{\mathcal{H}^{0}}(A,V,X))}{Pr(\overline{\mathcal{H}^{0}}(A^{\prime},V^{\prime},X^{\prime}))}\\ &\leq\prod_{i=1}^{N}exp\Big(\frac{\epsilon_{i}}{\Delta\mathcal{H}^{0}}||\sum_{j=1}^{N}a_{ij}\textbf{x}_{j}-\sum_{j=1}^{N}a^{\prime}_{ij}\textbf{x}^{\prime}_{j}||_{1}\Big)\\ &\leq exp\Big(\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\epsilon_{i}}{\Delta\mathcal{H}^{0}}||a_{ij}\textbf{x}_{j}-a^{\prime}_{ij}\textbf{x}^{\prime}_{j}||_{1}\Big)\\ &\leq exp\Big(\sum_{j\in N(k)}a_{kj}\frac{\epsilon_{k}}{\Delta\mathcal{H}^{0}}+\sum_{i\in N(k)}a_{ik}\frac{\epsilon_{i}}{\Delta\mathcal{H}^{0}}\Big),\end{split} \tag{20}\]
where the last inequality uses the row-normalization of features (\(||\textbf{x}_{j}||_{1}\leq 1\)).
Let \(f(k)=\sum_{j\in N(k)}a_{kj}\epsilon_{k}+\sum_{i\in N(k)}a_{ik}\epsilon_{i}\). Then we have:
\[\begin{split} f(k)&=D_{k}\epsilon_{k}+\sum_{i\in N(k)}\epsilon_{i}=\epsilon_{A}\Big(D_{k}\beta_{k}+\sum_{i\in N(k)}\beta_{i}\Big)\\ &\leq\epsilon_{A}\Big(D_{k}\beta_{k}+\sum_{i\in N(k)}\Big(\frac{\mathcal{R}\mathcal{NI}(i)}{\sum_{j\in N(k)}\mathcal{R}\mathcal{NI}(j)}(D_{max}-D_{k})+1\Big)\Big)\\ &\leq\epsilon_{A}\Big(D_{k}\frac{D_{max}}{D_{k}}+D_{max}-D_{k}+D_{k}\Big)\leq 2\epsilon_{A}D_{max}.\end{split} \tag{21}\]
Substitute equation (21) into (20), we can get:
\[\frac{Pr(\overline{\mathcal{H}^{0}}(A,V,X))}{Pr(\overline{\mathcal{H}^{0}}(A ^{\prime},V^{\prime},X^{\prime}))}\leq exp(\frac{2\epsilon_{A}D_{max}}{\Delta \mathcal{H}^{0}})=exp(\epsilon_{A}). \tag{22}\]
Theorem 1 is proved.
Since the added Laplace noise has zero mean and the first aggregation function is linear, the generated embedding is unbiased, as follows:
**Proposition 1**.: _The sum aggregator function defined in (9) for the first layer is unbiased._
Now we analyze the unbiased property of degree-preserving edge sampling:
**Proposition 2**.: _Let \(\overline{A}\) denote the noised adjacent matrix obtained by Edge Randomization, \(D_{u}\) is the degree of node \(u\) in the original graph, \(\overline{D_{u}^{\prime}}\) denotes the degree after sampling. Then \(\mathbb{E}(\overline{D_{u}^{\prime}})=\mathbb{E}(D_{u})\)._
Proof.: In the original adjacent matrix \(A\), there are \(D_{u}\) 1s and \((N-D_{u})\) 0s for node \(u\). The expectation of \(u\)'s degree after Edge Randomization is:
\[\begin{split}\mathbb{E}(\overline{D_{u}})&=D_{u}s+D _{u}(1-s)+0.5(N-D_{u})(1-s)\\ &=0.5D_{u}+0.5N-0.5Ns+0.5D_{u}s,\end{split} \tag{23}\]
where \(s\) is the Bernoulli sampling probability [20]. Then we have \(\mathbb{E}(\overline{D_{u}^{\prime}})=\mathbb{E}(\overline{D_{u}})\cdot p_{u}^{sample}=\mathbb{E}(D_{u})\).
**Theorem 2**.: _Algorithm 1 preserves \((\epsilon_{A}+\epsilon_{B}+\epsilon_{C})\)-DP._
Proof.: From theorem 1, we know that the first aggregation layer satisfies \(\epsilon_{A}\)-DP. Adjacent matrix perturbation guarantees \(\epsilon_{B}\)-DP for \(\epsilon_{B}\geq(\ln\frac{2}{s}-1),s\in(0,1]\)[20]. Random response mechanism on node label guarantees \(\epsilon_{C}\)-DP under equation (15).
Node-level DP preserves all information of a node, including its features, edges, and label. The differentially private aggregation layer \(\mathcal{H}^{0}\) processes node features and edges privately. The subsequent Adaptive Residual Multi-Layer Aggregation Module does not expose node features or edges, since it only post-processes the noised aggregation embeddings and the perturbed adjacency matrix without accessing private features or links. The training phase also guarantees DP because node labels are perturbed beforehand. By composition, the total privacy cost is \((\epsilon_{A}+\epsilon_{B}+\epsilon_{C})\)-DP.
## 5 Experimental Evaluation
In this section, we provide experiments on the proposed method and evaluate it under different parameter settings.
### Experiment Settings
#### 5.1.1 Datasets
Four publicly available datasets are used: Cora, Citeseer [11], Lastfm [12], and Facebook [13]. Cora and Citeseer are typical citation networks, representing two sparse graphs. Lastfm and Facebook are social networks, representing larger-scale and denser graphs compared with Cora and Citeseer. Detailed information on the datasets is shown in Table 2.
#### 5.1.2 Compared Methods
We compare with the following methods: DP-GNN [12], which applies DP-Adam to a 1-layer GNN. We also compare with MLP-DP, since an MLP does not use graph topology; it is trained with DP-Adam to provide node-level protection. Moreover, we compare two types of variants that differ in how the privacy budget is distributed: equal and adaptive. Two state-of-the-art GNN architectures, GCN and GraphSAGE, are used to substitute for the proposed adaptive residual multi-layer aggregation module. We compare the performance of node importance estimation between node degree and TNIE by substituting \(D_{i}\) for \(\mathcal{NI}(i)\). A variant with degree-preserving edge sampling removed from NAP-GNN serves as another competitor.
#### 5.1.3 Training and Evaluation Settings
We randomly split the nodes in each dataset into training, test, and validation sets in the ratio of 50%, 25%, and 25%. Since the original Lastfm is highly imbalanced, we choose the top 10 classes with the most samples. The compared methods, GCN and GraphSAGE, have two convolutional layers with hidden dimension 128, a ReLU activation function, and a dropout layer with a probability of 0.2. We use the same noise scale in all noise-adding perturbation modules. All models are trained with the Adam optimizer [13] for a maximum of 1500 epochs. The initial learning rate is 1e-3, with a decay mechanism using a patience of 20 and a decay rate of 0.5. We measure model performance by training 10 consecutive rounds on the test set and taking the average value with a 95% confidence interval under bootstrapping with 2000 samples. All experiments are implemented using PyTorch and PyTorch-Geometric (PyG).
### Results and Analysis
#### 5.1.1 Evaluation on Adaptive Budget over Different \(\epsilon\)
We first compare the difference in accuracy between the proposed NAP-GNN and competitors under different privacy budgets. The results are shown in Table 1 and Figure 2. It can be observed that the adaptive DP mechanism achieves higher accuracy than equally distributing the total privacy budget under the same method in most cases. The performance gap between the two differential privacy
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c} \hline \hline Dataset & \multicolumn{2}{c|}{Cora} & \multicolumn{2}{c|}{Citeseer} & \multicolumn{2}{c|}{Lastfm} & \multicolumn{2}{c}{Facebook} \\ \hline Privacy Level & Low & High & Low & High & Low & High & Low & High \\ \hline MLP-DP & 38.3\(\pm\)0.92 & 30.7\(\pm\)1.5 & 36.5\(\pm\)1.59 & 26.4\(\pm\)2.52 & 41.7\(\pm\)1.39 & 26.1\(\pm\)1.34 & 53.2\(\pm\)0.35 & 43.4\(\pm\)0.51 \\ DP-GNN & 64.2\(\pm\)1.17 & 55.7\(\pm\)2.80 & 54.5\(\pm\)0.85 & 42.1\(\pm\)4.25 & 72.1\(\pm\)0.43 & 63.3\(\pm\)1.73 & 73.4\(\pm\)0.69 & 59.8\(\pm\)0.93 \\ \hline GCN-DP\({}_{equa}\) & 75.4\(\pm\)0.45 & 62.6\(\pm\)0.65 & 63.1\(\pm\)0.21 & 42.1\(\pm\)0.62 & 77.2\(\pm\)0.49 & 57.7\(\pm\)0.47 & 82.5\(\pm\)0.31 & 61.1\(\pm\)0.17 \\ GCN-DP\({}_{adap}\) & 77.2\(\pm\)0.42 & 58.3\(\pm\)0.35 & 64.1\(\pm\)0.39 & 42.2\(\pm\)0.39 & 79.5\(\pm\)0.58 & 59.2\(\pm\)0.43 & 86.1\(\pm\)0.21 & 62.0\(\pm\)0.15 \\ SAGE-DP\({}_{equa}\) & 63.8\(\pm\)0.33 & 44.1\(\pm\)0.63 & 46.1\(\pm\)1.19 & 31.3\(\pm\)0.45 & 72.9\(\pm\)0.24 & 50.1\(\pm\)0.36 & 85.7\(\pm\)1.31 & 51.3\(\pm\)1.18 \\ SAGE-DP\({}_{adap}\) & 67.2\(\pm\)0.27 & 44.3\(\pm\)0.36 & 58.9\(\pm\)0.50 & 31.5\(\pm\)0.48 & 73.3\(\pm\)0.50 & 51.6\(\pm\)0.69 & 86.0\(\pm\)1.44 & 50.1\(\pm\)0.91 \\ \(\text{Ours}_{equa}\) & 83.1\(\pm\)0.11 & 63.4\(\pm\)0.26 & 64.5\(\pm\)0.21 & 42.4\(\pm\)0.17 & 84.5\(\pm\)0.10 & 65.3\(\pm\)0.15 & 89.8\(\pm\)0.07 & 65.5\(\pm\)1.10 \\
**Ours** & **83.5\(\pm\)1.84** & **64.9\(\pm\)1.75** & **65.0\(\pm\)0.16** & **44.6\(\pm\)0.26** & **86.3\(\pm\)0.84** & **65.9\(\pm\)0.15** & **91.3\(\pm\)0.64** & **67.8\(\pm\)0.17** \\ \hline Ours\({}_{deg}\) & 79.6\(\pm\)0.18 & 64.1\(\pm\)0.44 & 64.9\(\pm\)1.12 & 44.2\(\pm\)0.28 & 85.7\(\pm\)0.19 & 63.4\(\pm\)0.14 & 91.1\(\pm\)0.03 & 64.8\(\pm\)0.37 \\ Ours\({}_{-edgeSam}\) & 81.5\(\pm\)0.15 & 64.5\(\pm\)1.30 & 63.8\(\pm\)1.14 & 43.9\(\pm\)2.07 & 85.4\(\pm\)1.87 & 64.9\(\pm\)0.14 & 90.0\(\pm\)1.07 & 67.1\(\pm\)0.09 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of compared methods on Cora, Citeseer, Lastfm and Facebook. Test accuracy (%) over two privacy levels is reported (for MLP-DP, we set \(\epsilon=10\) for the high privacy level and \(\epsilon=20\) for the low level; for the other models, \(\epsilon=7\) for high and \(\epsilon=12\) for low). 'equa', 'adap', 'deg' and '-edgeSam' are short for 'equal', 'adaptive', 'degree' and 'without edge sampling', respectively.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Datasets & Nodes & Edges & Features & Avg\_Deg \\ \hline Cora & 2708 & 5278 & 1433 & 3.89 \\ Citeseer & 3327 & 4552 & 3703 & 3.73 \\ Lastfm & 7083 & 25814 & 7842 & 8.28 \\ Facebook & 22470 & 170912 & 4717 & 15.21 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Detailed Statistic of Used Datasets
Figure 2: Evaluation on Different Privacy Budget.
mechanisms is more pronounced on graphs with a higher average degree. Under the same global privacy budget, we also compare different noise scale ratios between the differentially private aggregation layer \(\mathcal{H}^{0}\), the degree-preserving adjacency matrix \(\overline{A}\), and the noised labels \(\overline{Y}\). With the noise on \(\overline{A}\) held constant, we add more noise to \(\mathcal{H}^{0}\) (i.e., decrease \(\epsilon_{A}\)). Compared with the equal distribution method, the improvement in accuracy shows that adaptive DP on \(\mathcal{H}^{0}\) can mitigate the noise waste caused by over-protecting nodes. The scheme that assigns less \(\epsilon\) to \(\mathcal{H}^{0}\) converges faster. As \(\epsilon\) increases (noise decreases), the model's utility tends to be the same for the two compared schemes.
#### 5.2.2 Evaluation on Node Importance Estimation
We investigate the effectiveness of the importance estimation module. We compare the case where TNIE is used as usual with the case where node degree is regarded as the importance score. The accuracy gap between method _Ours\({}_{deg}\)_ in line 10 and _Ours_ in line 9 of Table 1 demonstrates that the model with TNIE performs better. This is because node degree only reflects the number of neighbors and does not consider interactions between adjacent nodes or the node features.
#### 5.2.3 Evaluation on Degree-Preserving Edge Sampling
We compare the usual NAP-GNN with the case where edge sampling is removed and the noised adjacency matrix is directly input to the next module. The result is shown in line 11 of Table 1. We can see that in all cases, the accuracy of NAP-GNN is higher with edge sampling than without it.
#### 5.2.4 Evaluation on Maximum Degree \(D_{max}\) of Graph
We analyze the impact of the maximum node degree of the graph on model performance. The performance variation of the adaptive differential privacy mechanism is shown in Figure 3(a)-(d). The accuracy of the model increases to a peak and then decreases as \(D_{max}\) increases. The maximum degree of the sub-graphs at the peak differs across datasets with different average degrees. When \(D_{max}\) increases, the neighbor information that nodes can aggregate grows, but at the same time more noise is received, thus affecting the accuracy of the model. Meanwhile, the Laplace mechanism is affected by the data sensitivity: when \(D_{max}\) increases, the sensitivity increases, and thus the noise added to each node on average increases, reducing the utility of the model.
#### 5.2.5 Evaluation on Multi-Adaptive-Layer \(K\)
We investigate the effect of different hop counts on model performance. The results are shown in Figure 3(e)-(g). As can be seen, both NAP-GNN and its competitors can aggregate more information from neighbors when \(K\) increases. However, there is a trade-off between \(K\) and accuracy: as \(K\) increases, the accuracy of both NAP-GNN and its competitors first increases, then reaches a peak and decreases. This is because when more layers of neighbor information are aggregated, the noise collected from the neighbors also increases, affecting the behavior of the model. Besides, when \(\epsilon\) is small, the scale of the added noise is larger, and the model needs a higher \(K\) to reach the peak.
## 6 Conclusion
In this paper, we propose a novel adaptive differentially private graph neural network at the node-importance grain. We use adaptive aggregation perturbation, where a node-importance-grained Laplace mechanism is applied to the first aggregation function, and adaptive residual multi-layer aggregation, where embedding vectors are generated through multi-hop propagation with adaptive residual connections. Moreover, we present a new node importance estimation method named TNIE that works from graph topology, considering neighborhood and centrality awareness. Experimental results on real-world graph datasets show that NAP-GNN achieves a better privacy-accuracy trade-off and outperforms existing methods. In the future, we would like to investigate the rationale behind the trade-off between graph properties and noise scale from theoretical and algorithmic views, which would improve model utility and promote differentially private graph neural networks in practice.
Figure 3: Evaluation on Maximum Degree \(D_{max}\) of Graph (a-d) and Hop \(K\) of Multi-Adaptive-Layer (e-g). |
2302.05435 | A deep convolutional neural network for salt-and-pepper noise removal
using selective convolutional blocks | In recent years, there has been an unprecedented upsurge in applying deep
learning approaches, specifically convolutional neural networks (CNNs), to
solve image denoising problems, owing to their superior performance. However,
CNNs mostly rely on Gaussian noise, and there is a conspicuous lack of
exploiting CNNs for salt-and-pepper (SAP) noise reduction. In this paper, we
proposed a deep CNN model, namely SeConvNet, to suppress SAP noise in
gray-scale and color images. To meet this objective, we introduce a new
selective convolutional (SeConv) block. SeConvNet is compared to
state-of-the-art SAP denoising methods using extensive experiments on various
common datasets. The results illustrate that the proposed SeConvNet model
effectively restores images corrupted by SAP noise and surpasses all its
counterparts at both quantitative criteria and visual effects, especially at
high and very high noise densities. | Ahmad Ali Rafiee, Mahmoud Farhang | 2023-02-10T18:51:19Z | http://arxiv.org/abs/2302.05435v1 | # A Deep Convolutional Neural Network for Salt-and-pepper Noise Removal Using Selective Convolutional Blocks
###### Abstract
In recent years, there has been an unprecedented upsurge in applying deep learning approaches, specifically convolutional neural networks (CNNs), to solve image denoising problems, owing to their superior performance. However, CNNs mostly rely on Gaussian noise, and there is a conspicuous lack of exploiting CNNs for salt-and-pepper (SAP) noise reduction. In this paper, we proposed a deep CNN model, namely SeConvNet, to suppress SAP noise in gray-scale and color images. To meet this objective, we introduce a new selective convolutional (SeConv) block. SeConvNet is compared to state-of-the-art SAP denoising methods using extensive experiments on various common datasets. The results illustrate that the proposed SeConvNet model effectively restores images corrupted by SAP noise and surpasses all its counterparts at both quantitative criteria and visual effects, especially at high and very high noise densities.
Image denoising · Salt-and-pepper noise · Selective convolutional block · Convolutional neural network · Deep learning · Edge preservation · Very high noise density
## 1 Introduction
Images are polluted by various types of noise during acquisition, compression, and transmission. The resulting loss of image information and poor image quality give rise to considerable difficulties in many other image processing and computer vision tasks [1, 2, 3]. Therefore, image denoising is considered a classical yet still vital low-level vision problem, and is a basic necessity in many computer vision applications. The image denoising process aims to remove noise from noisy images while retaining the edges, textures, and details of the images, leading to denoised images of superior quality [4]. Impulse noise is one of the prevalent types of noise in images; it changes the values of some randomly selected pixels. Impulse noise can be classified into two categories, i.e., salt-and-pepper (SAP) noise and random-valued noise. In the presence of SAP noise, noisy pixels take the maximum value (i.e., 255 in 8-bit images) with a probability of \(D_{s}\) or the minimum value (0) with a probability of \(D_{p}\), which are respectively
referred to as the salt noise (white dots) and the pepper noise (black dots) [5]. The probability density function (PDF) of SAP noise in the 8-bit gray-scale images is given by [5]:
\[p(z)=\begin{cases}D_{s}&\quad\text{for }z=255\\ D_{p}&\quad\text{for }z=0\\ 1-D&\quad\text{for }z=\nu\end{cases} \tag{1}\]
where \(z\) represents the intensity of the image and \(\nu\) is any integer value such that \(0<\nu<255\). \(D_{s}\) and \(D_{p}\) denote the probabilities of pixel corruption by salt noise and pepper noise, respectively, and \(D=D_{s}+D_{p}\) is the noise density. Usually, \(D_{s}\) and \(D_{p}\) are considered equal.
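For concreteness, the corruption model of Eq. (1) can be simulated in a few lines of NumPy. The helper below (`add_sap_noise` is our own illustrative name, not from the paper) draws salt and pepper pixels with equal probability \(D/2\) each:

```python
import numpy as np

def add_sap_noise(image, density, rng=None):
    """Corrupt an 8-bit image with SAP noise of total density D = `density`,
    split equally between salt (255) and pepper (0), as in Eq. (1)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.copy()
    r = rng.random(image.shape)
    noisy[r < density / 2] = 0          # pepper pixels, probability D_p = D/2
    noisy[r > 1 - density / 2] = 255    # salt pixels,   probability D_s = D/2
    return noisy
```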
Many approaches have hitherto been suggested for SAP denoising [6, 7, 8, 9, 10, 11, 12, 13, 14]. Since the pixels corrupted by SAP noise take the maximum or minimum intensity values of the image, median-based filter approaches have received considerable attention as the starting point of efforts to remove SAP noise. The classic median filter (MF) [1], adaptive median filter (AMF) [15], decision-based algorithm (DBA) [16], and modified decision based unsymmetric trimmed median filter (MDBUTMF) [7] can be mentioned as the most well-known traditional median-based filter methods. These early methods generally struggle to eliminate SAP noise, particularly at high noise densities in which almost all the pixels in the local window are noisy [2, 6, 8]. They also suffer from blurring the details and edges of the images and producing streaking effects [17, 18].
Several algorithms using novel interpolation techniques have recently been developed with the aim of alleviating these restrictions by preserving image information and improving visual quality, particularly in the presence of high-density noise. Some approaches use mean-based filters on non-noisy (clean) pixels in a local window to restore noisy pixels. The adaptive weighted mean filter (AWMF) [8] and its modified versions, e.g., the adaptive switching weight mean filter (ASWMF) [19] and the improved adaptive weighted mean filter (IAWMF) [20], restore noisy pixels by using various weighting methods to calculate the weighted average of non-noisy pixels in an adaptive window. The Riesz mean is another average-based technique that has met with success in SAP noise removal; it is employed in the pixel similarity-based adaptive Riesz mean filter (ARmF) [21] and the different adaptive modified Riesz mean filter (DAMRmF) [22]. There are also approaches that adopt mean-based or median-based filters, or a combination of both, in multiple stages to denoise images. The different applied median filter (DAMF) uses two consecutive median filters. In contrast to DAMF, the adaptive Cesaro mean filter (ACmF) [23] employs the Cesaro mean instead of the median in a recursive algorithm, which improves denoising performance but increases the execution time. In [24], a four-stage median-average (FoMA) algorithm is introduced, employing four-step median and mean filters.
Kriging interpolation is another interpolation technique utilized for SAP noise reduction. The adaptive decision based Kriging interpolation filter (ADKIF) [9] can achieve good results by employing Kriging interpolation. Besides the aforementioned methods, a multistage selective convolution filter (MSCF) was recently proposed in [25], which could significantly reduce the computation time along with a considerable noise suppression performance. Finally, an impulse denoising method based on noise accumulation and harmonic analysis techniques (NAHAT) [26] could achieve the best SAP denoising performance among non-deep learning-based algorithms by employing noise accumulation and solving a system of equations corresponding to a harmonic function.
Recent years have seen a greater emphasis on the application of machine learning methodologies, particularly deep learning, to a wide variety of problems, benefiting from advances in hardware architecture. Advances in deep learning approaches, especially those taking advantage of convolutional neural networks (CNNs) on image data, have led to significant breakthroughs in a variety of computer vision tasks, including image reconstruction [27], image segmentation [28], and object classification [29]. The application of deep learning to Gaussian image noise reduction has also attracted a great deal of attention, as it offers vastly improved performance over classical algorithms [30, 31, 32, 33, 34].
Although these learning-based networks are significantly effective in removing Gaussian noise, they cannot denoise well in the presence of SAP noise, especially at high noise densities. This poor SAP denoising performance of networks designed to remove Gaussian noise can be attributed to a variety of reasons. Firstly, these networks modify all pixels in noisy images; however, unlike Gaussian and Poissonian noise, SAP noise does not affect all pixels. Therefore, some pixels are noise-free and should not be altered during denoising. Needless to say, the number of clean pixels is considerable at low noise densities. Secondly, they restore noisy pixels using all pixels in the receptive field, even the noisy ones. In the case of SAP noise, there is no correlation between the value of a noisy pixel and its original value. This can harm the denoising performance of the networks, particularly at high noise densities in which a considerable fraction of pixels is noisy. Thus, since SAP noise is pure noise, it can adversely affect the SAP noise removal capability of CNNs [35]. Finally, since SAP is not an additive noise, a residual learning strategy, which is adopted in some of these networks such as DnCNN [30], is ineffective and introduces undesirable visual artifacts [36].
As opposed to Gaussian noise, relatively few research studies on impulse noise reduction, specifically SAP noise, use artificial neural networks (ANNs) [37, 35, 38, 39]. A non-local switching filter followed by a pretrained multi-layer perceptron (NLSF-MLP) model was suggested for SAP noise reduction in [37]. Fu et al. proposed employing a non-local switching filter as a preprocessing stage to a CNN (NLSF-CNN) to improve SAP noise reduction on corrupted images [35]. Since SAP noise can adversely affect CNN models, the preprocessing stage prepares noisy images for the CNN model by changing the type of noise. Hence, NLSF-MLP and NLSF-CNN are in fact integrations of conventional and learning-based strategies. Moreover, NLSF-MLP and NLSF-CNN change all pixels of the images, which degrades their performance.
Although the approaches outlined above can be effective in SAP noise reduction, they still suffer from inadequate performance and poor detail preservation, especially at very high noise densities. Aiming to overcome the shortcomings of the aforementioned learning-based networks in SAP noise removal, we propose a deep convolutional neural network, namely SeConvNet, by introducing a new selective convolutional (SeConv) block. The proposed network is extremely effective in SAP denoising, particularly at very high noise densities up to 95%, and experimental results show that it outperforms its state-of-the-art counterparts by a considerable margin in both quantitative metrics of denoising performance and visual effects.
The remainder of this paper is organized as follows. The architecture of SeConvNet and the new selective convolutional (SeConv) block are introduced in section 2. Section 3 first describes the different phases of training SeConvNet, including the data employed in the training and test stages, data preprocessing, and the network parameters and configuration. Subsequently, experimental results are reported to assess the SAP noise removal performance of SeConvNet. Finally, section 4 concludes the paper.
## 2 The Proposed Denoising CNN Model
According to the aforementioned adverse impact of SAP noise on CNNs, we introduce a selective convolutional (SeConv) block and employ it in the first part of the proposed network model. These SeConv blocks prepare the network's input for the subsequent conventional convolutional layers and tackle the issue of pure noisy pixels participating in image reconstruction. Before proceeding to the network architecture, we first define the concept of the noisy pixels map. The noisy pixels map of an arbitrary tensor \(\mathbf{\mathcal{A}}\) is a tensor with the same shape as \(\mathbf{\mathcal{A}}\) defined by
\[[\mathbf{M}_{\mathbf{\mathcal{A}}}]_{i,j,k}=\begin{cases}1&\text{if }[\mathbf{ \mathcal{A}}]_{i,j,k}=0\\ 0&\text{otherwise},\end{cases} \tag{2}\]
where \([\mathbf{X}]_{i,j,k}\) denotes the element of \(\mathbf{X}\) at coordinates \((i,j,k)\). The non-noisy pixels map of \(\mathbf{\mathcal{A}}\), i.e., \(\tilde{\mathbf{M}}_{\mathbf{\mathcal{A}}}\), is obtained by flipping 1s and 0s in \(\mathbf{M}_{\mathbf{\mathcal{A}}}\).
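As a minimal sketch, the two maps of Eq. (2) can be computed directly in NumPy, assuming (as in the paper's preprocessing stage described later) that all noisy pixels have already been set to zero:

```python
import numpy as np

def pixel_maps(x):
    """Noisy-pixels map M of Eq. (2) and its flipped counterpart M~, assuming
    the preprocessing stage has already set every noisy pixel to zero."""
    m = (x == 0).astype(np.uint8)   # 1 where the pixel is (assumed) noisy
    return m, 1 - m                 # M and the non-noisy pixels map M~
```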
### The Network Architecture
The architecture of the proposed SeConvNet model with \(D\) layers is shown in figure 1. The input noisy image first goes through a preprocessing stage which converts pixels with a value of 255 into 0, so that all noisy pixels will be zero thereafter. As we shall see, this will simplify the processing in the following blocks. Seven SeConv blocks constitute the first part of SeConvNet. SeConv blocks perform an initial SAP denoising and replace pure noisy pixels by an accurate estimate of them, whereby the subsequent convolutional layers could improve the denoising performance. As described in the following subsection, a SeConv block of size \(s\) selectively restores some of the noisy pixels of its input image using a trainable \(s\times s\) kernel, and non-noisy pixels remain unchanged as they pass through the SeConv blocks. It should also be noted that the SeConv blocks are arranged in the ascending order of \(s=3,5,7,9,11,13,15\).
For the \(8^{th}\) to \((D-1)^{th}\) layers, convolution (Conv) with 64 filters of size \(3\times 3\), generating 64 feature maps, is used along with ReLU activation. Furthermore, to benefit from the properties of batch normalization (BN) [30], BN is placed between convolution and ReLU. In order to reconstruct images, convolution with \(C\) filters of size \(3\times 3\) is employed in the \(D^{th}\) layer, where \(C\) is the number of channels (1 for gray-scale denoising and 3 for color denoising).
Ultimately, as it is anticipated that only noisy pixels will change, the obtained output is multiplied by the noisy pixels map of the network's input (**M**), and the result is then added to the input of the network to prevent any change in clean pixels.
### Selective Convolutional Block
As an initial denoising step, SeConv blocks are introduced to efficiently replace pure SAP noisy pixels by a first-order estimate. To provide an efficient and accurate estimation of a SAP noisy pixel, we use a normalized weighted average of non-noisy pixels in its neighborhood [8, 40]. This weighted average filtering can be considered as a convolution between the input image \(\mathbb{X}\) and a kernel \(\omega\), in which only uncorrupted (non-noisy) pixels are used in the convolution [25], i.e.,
\[[\mathbf{S}]_{i,j,k}=\begin{cases}\dfrac{\sum\limits_{l,m,n:[\mathbb{X}]_{l,m,n}\neq 0}[\mathbb{X}]_{l,m,n}[\omega]_{i-l,j-m,k-n}}{\sum\limits_{l,m,n:[\mathbb{X}]_{l,m,n}\neq 0}[\omega]_{i-l,j-m,k-n}}&\text{if }\sum\limits_{l,m,n:[\mathbb{X}]_{l,m,n}\neq 0}[\omega]_{i-l,j-m,k-n}\neq 0\\ 0&\text{otherwise},\end{cases} \tag{3}\]
where the condition \(l,m,n:[\mathbb{X}]_{l,m,n}\neq 0\) indicates that only non-noisy pixels are incorporated in the restoration process (as all noisy pixels are turned into 0 in the preprocessing stage). Using the notion of the non-noisy pixels map \(\mathbf{\tilde{M}}_{\mathbb{X}}\),
Figure 1: The architecture of SeConvNet. Note that “SeConv Block- \(s\)” indicates a SeConv block of size \(s\).
we can reformulate Eq. 3 as
\[[\mathbf{S}]_{i,j,k}=\begin{cases}\dfrac{\sum\limits_{l,m,n}[\mathbb{X}]_{l,m,n}[\omega]_{i-l,j-m,k-n}}{\sum\limits_{l,m,n}[\mathbf{\tilde{M}}_{\mathbb{X}}]_{l,m,n}[\omega]_{i-l,j-m,k-n}}&\text{if }\sum\limits_{l,m,n}[\mathbf{\tilde{M}}_{\mathbb{X}}]_{l,m,n}[\omega]_{i-l,j-m,k-n}\neq 0\\ 0&\text{otherwise},\end{cases} \tag{4}\]
which can be equivalently restated as
\[[\mathbf{S}]_{i,j,k}=\begin{cases}\dfrac{[\mathbb{X}*\omega]_{i,j,k}}{[\mathbf{\tilde{M}}_{\mathbb{X}}*\omega]_{i,j,k}}&\text{if }[\mathbf{\tilde{M}}_{\mathbb{X}}*\omega]_{i,j,k}\neq 0\\ 0&\text{otherwise},\end{cases} \tag{5}\]
where \(*\) denotes the conventional convolution operation.
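Under the same zero-noisy-pixel assumption, Eq. (5) amounts to the ratio of two ordinary convolutions. A minimal single-channel NumPy/SciPy sketch (with zero padding, an implementation detail the paper does not specify) is:

```python
import numpy as np
from scipy.ndimage import convolve

def selective_conv(x, w):
    """First-order estimate S of Eq. (5) for a single-channel image x whose
    noisy pixels are zero; w is the weighting kernel (zero padding assumed)."""
    x = x.astype(np.float64)
    m_bar = (x != 0).astype(np.float64)           # non-noisy pixels map M~
    num = convolve(x, w, mode="constant")         # [X * w]
    den = convolve(m_bar, w, mode="constant")     # [M~ * w]
    return np.where(den != 0, num / np.where(den != 0, den, 1.0), 0.0)
```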
For weighted average filtering to effectively restore a noisy pixel, however, a sufficient number of non-noisy pixels should be incorporated in the above convolution [9, 41]. Accordingly, to further improve the initial denoising, we selectively restore only those noisy pixels for which there are at least \(\eta\) non-noisy pixels in the local \(s\times s\) window around them; the remaining noisy pixels are restored in subsequent SeConv blocks with larger kernel sizes [25]. This ensures a minimum degree of reliability for the restored pixels at each block. It should also be noted that for effective denoising with weighted average filtering, the value of \(\eta\) should increase with the size of the kernel (since more non-noisy pixels are required in larger windows). For a SeConv block with a size of \(s\times s\), \(\eta\) has been set experimentally to \(s-2\). To determine the number of non-noisy pixels involved in the estimation of each noisy pixel, the non-noisy pixels map \(\mathbf{\tilde{M}}_{\mathbb{X}}\) is convolved with an \(s\times s\) kernel of ones (\(\omega_{\text{one}}\)), and the reliability tensor is obtained as
\[[\mathbf{R}_{\mathbb{X}}]_{i,j,k}=\begin{cases}1&\text{if }[\mathbf{\tilde{M}}_{ \mathbb{X}}*\omega_{\text{one}}]_{i,j,k}\geq\eta\\ 0&\text{otherwise}.\end{cases} \tag{6}\]
Therefore, the non-zero entries of the element-wise product of \(\mathbf{S}\), \(\mathbf{M}_{\mathbb{X}}\), and \(\mathbf{R}_{\mathbb{X}}\) yield the restored pixels which are qualified by the above reliability criterion. Noting that all noisy pixels of the input were set to zero, the restored image at the output of the SeConv block is obtained as
\[\hat{\mathbb{X}}=\mathbb{X}+\mathbf{S}\odot\mathbf{M}_{\mathbb{X}}\odot \mathbf{R}_{\mathbb{X}}. \tag{7}\]
where \(\odot\) denotes the Hadamard (element-wise) product.
The pseudo-code of the proposed SeConv block is shown in algorithm 1.
## 3 Experiments
### Datasets
In the training stage of SeConvNet, following [42], we consider 400 images with sizes of \(180\times 180\) as the training dataset for gray-scale image denoising. To assess the SAP noise removal performance, we employ two widely used datasets in the test phase. The first dataset contains a gray-scale version of 68 natural images of sizes 321-by-481 and 481-by-321 (figure 2), which are part of the Berkeley segmentation dataset (BSD68) [43]. The second is the 20 traditional test images dataset. This dataset comprises the \(512\times 512\) gray-scale versions of "Baboon", "Barbara", "Blonde Woman", "Boat", "Bridge", "Cameraman", "Dark Haired Woman", "Einstein", "Elaine", "Flintstones", "Flower", "Hill", "House", "Jet Plane", "Lake", "Lena", "Living Room", "Parrot", "Peppers", and "Pirate" images shown in figure 3.
Moreover, for color image denoising we train SeConvNet using the rest of the 500 color images in the Berkeley segmentation dataset except BSD68. We also utilized the color version of BSD68 (CBSD68) as the test data.
We choose the size of the patches for training as \(40\times 40\) by cropping the data, which is augmented by random rotation, random flip and random resizing with various scale factors of 1, 0.9, 0.8, and 0.7. We train the SeConvNet using the training dataset and its corrupted version by SAP noise with known specific noise density.
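A minimal sketch of the flip/rotation part of this augmentation scheme might look as follows (the random resizing with scale factors 1, 0.9, 0.8, and 0.7 is assumed to happen when the patches are cropped):

```python
import numpy as np

def augment(patch, rng):
    """Random horizontal flip and random 90-degree rotation of a training patch."""
    if rng.random() < 0.5:
        patch = np.fliplr(patch)
    return np.rot90(patch, k=int(rng.integers(0, 4)))
```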
```
Input  : \(\mathbb{X},\omega,\eta\)   \(\triangleright\) Input Image, Kernel, Minimum Reliability
Output : \(\hat{\mathbb{X}}\)
\(H,W\leftarrow\) Size of \(\mathbb{X}\);  \(C\leftarrow\) Number of Channels of \(\mathbb{X}\)
for \(i\gets 1\) to \(H\) do
    for \(j\gets 1\) to \(W\) do
        for \(k\gets 1\) to \(C\) do
            if \([\mathbb{X}]_{i,j,k}=0\) then
                \([\mathbf{M}_{\mathbb{X}}]_{i,j,k}\gets 1\);  \([\mathbf{\tilde{M}}_{\mathbb{X}}]_{i,j,k}\gets 0\)
            else
                \([\mathbf{M}_{\mathbb{X}}]_{i,j,k}\gets 0\);  \([\mathbf{\tilde{M}}_{\mathbb{X}}]_{i,j,k}\gets 1\)
            end if
            if \([\mathbf{\tilde{M}}_{\mathbb{X}}*\omega]_{i,j,k}\neq 0\) then
                \([\mathbf{S}]_{i,j,k}\leftarrow[\mathbb{X}*\omega]_{i,j,k}\,/\,[\mathbf{\tilde{M}}_{\mathbb{X}}*\omega]_{i,j,k}\)
            else
                \([\mathbf{S}]_{i,j,k}\gets 0\)
            end if
            if \([\mathbf{\tilde{M}}_{\mathbb{X}}*\omega_{\text{one}}]_{i,j,k}\geq\eta\) then
                \([\mathbf{R}_{\mathbb{X}}]_{i,j,k}\gets 1\)
            else
                \([\mathbf{R}_{\mathbb{X}}]_{i,j,k}\gets 0\)
            end if
        end for
    end for
end for
\(\hat{\mathbb{X}}\leftarrow\mathbb{X}+\mathbf{S}\odot\mathbf{M}_{\mathbb{X}}\odot\mathbf{R}_{\mathbb{X}}\)
```
**Algorithm 1** Proposed SeConv Block.
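For readers who prefer a vectorized view, Algorithm 1 can be translated almost line-for-line into NumPy/SciPy. The sketch below is our own illustrative translation for a single-channel image, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import convolve

def seconv_block(x, w, eta):
    """Vectorized translation of Algorithm 1 for a single-channel image whose
    noisy pixels are zero; w is the s x s kernel and eta = s - 2."""
    x = x.astype(np.float64)
    m = (x == 0).astype(np.float64)                     # noisy pixels map M
    m_bar = 1.0 - m                                     # non-noisy map M~
    den = convolve(m_bar, w, mode="constant")           # [M~ * w]
    s = np.where(den != 0,
                 convolve(x, w, mode="constant") / np.where(den != 0, den, 1.0),
                 0.0)                                   # Eq. (5)
    r = convolve(m_bar, np.ones_like(w), mode="constant") >= eta   # Eq. (6)
    return x + s * m * r                                # Eq. (7)
```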
### Preprocessing Stage
As a preprocessing step, all noisy pixels are set to zero before the images are fed to SeConvNet. This can be easily accomplished by converting all pixels with a value of 255 to 0. The principal reason is to make all noisy pixels numerically identical.
### Network Training
To meet the need for sufficient spatial information in the noise reduction task, while keeping the complexity of the network in check, we set the network depth \(D\) to 27. We use all-ones and orthogonal matrices as the initial weights of the SeConv blocks and convolutions, respectively. Moreover, all biases are set to zero during the training stage. The following loss function is adopted to train the trainable parameters \(\Theta\) of the network for predicting the clean images using \(P\) patches
\[\mathcal{L}(\Theta)=\frac{1}{2P}\sum_{i=1}^{P}\left\|\mathcal{F}\left(\mathbf{X }_{i};\Theta\right)-\mathbf{Y}_{i}\right\|^{2}. \tag{8}\]
where \(\mathbf{\hat{X}}=\mathcal{F}\left(\mathbf{X};\Theta\right)\) is the estimated image from the input noisy observation \(\mathbf{X}\), and \(\mathbf{Y}\) is the desired clean image. The loss function is minimized using the ADAM solver [44] with \(\alpha=10^{-3}\), \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and \(\epsilon=10^{-7}\). The network is trained for 50 epochs with a batch size of 128. The learning rate declines from \(10^{-3}\) to \(10^{-4}\). The model was trained in the Python programming language using the TensorFlow library, running on a 64-bit operating system with an Intel Core i9-9900K processor, 64 GB of RAM, and an NVIDIA GeForce RTX 2070 graphics card.
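A hedged sketch of this training configuration in Keras is given below; `build_seconvnet`, `noisy_patches`, and `clean_patches` are placeholder names of our own, and the built-in mean squared error is used since it matches the loss of Eq. (8) up to a constant factor:

```python
import tensorflow as tf

# `build_seconvnet` is a placeholder for a function assembling the architecture
# of Section 2; `noisy_patches`/`clean_patches` stand for the 40x40 training data.
model = build_seconvnet(depth=27, channels=1)

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3,
                                     beta_1=0.9, beta_2=0.999, epsilon=1e-7)
# Built-in MSE equals the loss of Eq. (8) up to the constant factor 1/2.
model.compile(optimizer=optimizer, loss=tf.keras.losses.MeanSquaredError())
model.fit(noisy_patches, clean_patches, epochs=50, batch_size=128)
```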
The denoising performance is evaluated in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). For 8-bit images, \(\text{PSNR}(\hat{\mathbf{X}},\mathbf{Y})=10\log_{10}\left(255^{2}/\text{MSE}(\hat{\mathbf{X}},\mathbf{Y})\right)\), where \(\text{MSE}(\hat{\mathbf{X}},\mathbf{Y})\) is the mean square error (MSE) between \(\hat{\mathbf{X}}\) and \(\mathbf{Y}\), and SSIM is defined as
\[\text{SSIM}\left(\hat{\mathbf{X}},\mathbf{Y}\right)=\frac{\left(2\mu_{\hat{ \mathbf{X}}}\mu_{\mathbf{Y}}+c_{1}\right)\left(2\sigma_{\hat{\mathbf{X}}\mathbf{ Y}}+c_{2}\right)}{\left(\mu_{\hat{\mathbf{X}}}^{2}+\mu_{\mathbf{Y}}^{2}+c_{1} \right)\left(\sigma_{\hat{\mathbf{X}}}^{2}+\sigma_{\mathbf{Y}}^{2}+c_{2} \right)}. \tag{11}\]
Here \(\mu_{\hat{\mathbf{X}}}\), \(\mu_{\mathbf{Y}}\), \(\sigma_{\hat{\mathbf{X}}}\), \(\sigma_{\mathbf{Y}}\), and \(\sigma_{\hat{\mathbf{X}}\mathbf{Y}}\) indicate, respectively, the average intensity of \(\hat{\mathbf{X}}\), the average intensity of \(\mathbf{Y}\), the standard deviation of \(\hat{\mathbf{X}}\), the standard deviation of \(\mathbf{Y}\), and the cross-covariance of \(\hat{\mathbf{X}}\) and \(\mathbf{Y}\). The two small constants \(c_{1}=(k_{1}L)^{2}\) and \(c_{2}=(k_{2}L)^{2}\) are used for division stabilization. The value of \(L\) is 255 for the 8-bit gray-scale versions of images, and the default values of \(k_{1}\) and \(k_{2}\) are 0.01 and 0.03.
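Both criteria are straightforward to compute; the snippet below sketches PSNR by hand and delegates SSIM to scikit-image, whose defaults (\(k_{1}=0.01\), \(k_{2}=0.03\)) match the values above:

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(x_hat, y, peak=255.0):
    """PSNR in dB for 8-bit images: 10 * log10(peak^2 / MSE(x_hat, y))."""
    mse = np.mean((np.asarray(x_hat, np.float64) - np.asarray(y, np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# SSIM of Eq. (11): scikit-image's defaults k1=0.01, k2=0.03 match the paper,
# and data_range plays the role of L (255 for 8-bit gray-scale images), e.g.:
# ssim_value = structural_similarity(x_hat, y, data_range=255)
```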
### Experiments on SAP noise Removal
In this subsection, we discuss the results of SAP denoising measured by PSNR and SSIM. A comparison is provided between the proposed SeConvNet and eight state-of-the-art methods: ADKIF [9], NLSF-MLP [37], NLSF-CNN [35], ARmF [21], ACmF [23], IAWMF [20], NAHAT [26], and DAMRmF [22]. The source code of SeConvNet is available at https://github.com/AliRafiee7/SeConvNet.
Tables 1 to 4 report the SAP denoising performance of the proposed SeConvNet as well as the state-of-the-art competing methods based on the PSNR (dB) and SSIM criteria at varying noise densities ranging from 10% to 95%. In addition, the average performance results of 10% to 95% noise densities are presented in the last column of the tables. The best result of each noise density is highlighted in bold font. Table 1 contains the PSNR (dB) results of gray-scale SAP noise
Table 1: Results of PSNR (dB) for different methods at different noise densities on the 20 traditional test images and BSD68 gray-scale datasets.

| Dataset | Method | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 95% | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 20 traditional test images | ADKIF | 40.64 | 37.63 | 35.61 | 33.86 | 32.12 | 30.30 | 28.47 | 26.66 | 24.68 | 23.09 | 31.31 |
| | NLSF-MLP | 38.95 | 34.74 | 31.95 | 29.79 | 27.96 | 26.24 | 26.62 | 24.25 | 20.85 | 19.51 | 28.09 |
| | NLSF-CNN | 40.39 | 36.03 | 33.13 | 31.67 | 29.72 | 27.89 | 27.29 | 24.87 | 21.38 | 20.00 | 29.24 |
| | ARmF | 41.29 | 38.10 | 35.98 | 34.20 | 32.54 | 30.89 | 29.20 | 27.34 | 24.83 | 22.90 | 31.73 |
| | ACmF | 40.93 | 37.80 | 35.72 | 34.00 | 32.40 | 30.82 | 29.19 | 27.39 | 24.92 | 22.89 | 31.61 |
| | IAWMF | 41.39 | 38.26 | 36.17 | 34.43 | 32.81 | 31.20 | 29.61 | 27.96 | 25.57 | 23.54 | 32.09 |
| | NAHAT | 42.22 | 38.94 | 36.72 | 34.90 | 33.27 | 31.69 | 30.04 | 28.17 | 25.70 | 23.85 | 32.55 |
| | DAMRmF | 41.36 | 38.23 | 36.14 | 34.40 | 32.82 | 31.21 | 29.63 | 27.97 | 25.57 | 23.42 | 32.07 |
| | SeConvNet | **44.30** | **41.28** | **39.53** | **36.45** | **35.85** | **34.92** | **32.85** | **31.14** | **28.29** | **25.88** | **35.05** |
| BSD68 | ADKIF | 36.72 | 33.63 | 31.68 | 30.12 | 28.65 | 27.17 | 25.69 | 24.24 | 22.68 | 21.48 | 28.21 |
| | NLSF-MLP | 35.20 | 31.05 | 28.43 | 26.50 | 24.94 | 23.54 | 24.02 | 22.05 | 19.16 | 18.14 | 25.30 |
| | NLSF-CNN | 36.50 | 32.20 | 29.48 | 28.17 | 26.51 | 25.02 | 24.63 | 22.61 | 19.65 | 18.61 | 26.34 |
| | ARmF | 36.76 | 33.56 | 31.54 | 29.93 | 28.47 | 27.06 | 25.61 | 24.05 | 22.04 | 20.52 | 27.95 |
| | ACmF | 36.22 | 33.05 | 31.09 | 29.55 | 28.18 | 26.84 | 25.47 | 23.96 | 22.03 | 20.51 | 27.69 |
| | IAWMF | 37.26 | 34.19 | 32.21 | 30.61 | 29.17 | 27.82 | 26.43 | 24.93 | 22.98 | 21.50 | 28.71 |
| | NAHAT | 38.35 | 34.96 | 32.77 | 31.07 | 29.60 | 28.24 | 26.90 | 25.46 | 23.67 | 22.30 | 29.33 |
| | DAMRmF | 36.05 | 32.87 | 30.87 | 29.33 | 27.98 | 26.73 | 25.50 | 24.17 | 22.40 | 20.84 | 27.67 |
| | SeConvNet | **40.47** | **37.36** | **35.13** | **33.12** | **30.36** | **30.41** | **28.67** | **27.23** | **24.83** | **23.21** | **31.08** |
reduction on the 20 traditional test images and BSD68 datasets. The PSNR results of color SAP denoising on CBSD68 are listed in table 2. Also included in tables 3 and 4 are the SSIM results of gray-scale and color SAP denoising.
As one can see in the results of the 20 traditional test images, BSD68, and CBSD68 datasets, SeConvNet can attain the best PSNR/SSIM results compared to other counterparts at low, moderate, and high noise densities. SeConvNet surpasses NAHAT, which has the second-best PSNR/SSIM results, by a significant margin. On average, SeConvNet outperforms NAHAT by 2.5dB/0.064, 1.7dB/0.067, and 6.0dB/0.045 in the PSNR/SSIM criterion on the 20 traditional
Table 2: Results of PSNR (dB) for different methods at different noise densities on the CBSD68 color dataset.

| Dataset | Method | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 95% | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CBSD68 | ADKIF | 34.53 | 32.24 | 30.63 | 29.26 | 27.95 | 26.60 | 25.25 | 23.89 | 22.40 | 21.21 | 27.40 |
| | NLSF-MLP | 33.10 | 29.77 | 27.48 | 25.75 | 24.33 | 23.04 | 23.61 | 21.74 | 18.93 | 17.92 | 24.57 |
| | NLSF-CNN | 34.32 | 30.87 | 28.50 | 27.37 | 25.86 | 24.49 | 24.21 | 22.29 | 19.41 | 18.38 | 25.57 |
| | ARmF | 35.11 | 32.62 | 30.87 | 29.42 | 28.07 | 26.74 | 25.37 | 23.85 | 21.90 | 20.38 | 27.43 |
| | ACmF | 34.63 | 32.15 | 30.46 | 29.07 | 27.79 | 26.53 | 25.23 | 23.78 | 21.88 | 20.38 | 27.19 |
| | IAWMF | 35.19 | 32.95 | 31.30 | 29.89 | 28.61 | 27.35 | 26.06 | 24.62 | 22.73 | 21.26 | 28.00 |
| | NAHAT | 36.63 | 33.97 | 32.08 | 30.54 | 29.18 | 27.90 | 26.62 | 25.23 | 23.47 | 22.10 | 28.77 |
| | DAMRmF | 33.92 | 31.49 | 29.85 | 28.56 | 27.40 | 26.29 | 25.18 | 23.93 | 22.21 | 20.62 | 26.94 |
| | SeConvNet | **45.10** | **42.64** | **39.70** | **38.19** | **34.87** | **33.82** | **32.20** | **29.82** | **26.64** | **24.38** | **34.73** |
Table 3: Results of SSIM for different methods at different noise densities on the 20 traditional test images and BSD68 gray-scale datasets.

| Dataset | Method | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 95% | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 20 traditional test images | ADKIF | 0.987 | 0.973 | 0.958 | 0.940 | 0.916 | 0.883 | 0.836 | 0.772 | 0.685 | 0.616 | 0.857 |
| | NLSF-MLP | 0.946 | 0.899 | 0.860 | 0.827 | 0.798 | 0.765 | 0.782 | 0.702 | 0.579 | 0.521 | 0.768 |
| | NLSF-CNN | 0.981 | 0.932 | 0.892 | 0.879 | 0.848 | 0.813 | 0.802 | 0.720 | 0.593 | 0.534 | 0.799 |
| | ARmF | 0.988 | 0.976 | 0.962 | 0.945 | 0.924 | 0.897 | 0.861 | 0.810 | 0.725 | 0.647 | 0.873 |
| | ACmF | 0.987 | 0.974 | 0.959 | 0.942 | 0.921 | 0.893 | 0.857 | 0.807 | 0.723 | 0.645 | 0.871 |
| | IAWMF | 0.988 | 0.976 | 0.962 | 0.945 | 0.924 | 0.897 | 0.862 | 0.812 | 0.743 | 0.672 | 0.878 |
| | NAHAT | 0.989 | 0.977 | 0.964 | 0.948 | 0.928 | 0.904 | 0.871 | 0.824 | 0.746 | 0.677 | 0.883 |
| | DAMRmF | 0.988 | 0.976 | 0.961 | 0.945 | 0.924 | 0.897 | 0.862 | 0.813 | 0.744 | 0.671 | 0.878 |
| | SeConvNet | **0.995** | **0.991** | **0.986** | **0.975** | **0.972** | **0.963** | **0.948** | **0.927** | **0.882** | **0.833** | **0.947** |
| BSD68 | ADKIF | 0.983 | 0.964 | 0.943 | 0.918 | 0.886 | 0.841 | 0.780 | 0.700 | 0.597 | 0.524 | 0.814 |
| | NLSF-MLP | 0.942 | 0.890 | 0.846 | 0.808 | 0.771 | 0.729 | 0.729 | 0.637 | 0.504 | 0.443 | 0.730 |
| | NLSF-CNN | 0.977 | 0.923 | 0.877 | 0.859 | 0.820 | 0.774 | 0.748 | 0.653 | 0.517 | 0.454 | 0.760 |
| | ARmF | 0.983 | 0.966 | 0.948 | 0.925 | 0.897 | 0.861 | 0.813 | 0.747 | 0.643 | 0.555 | 0.834 |
| | ACmF | 0.981 | 0.963 | 0.942 | 0.918 | 0.889 | 0.853 | 0.806 | 0.741 | 0.639 | 0.553 | 0.829 |
| | IAWMF | 0.986 | 0.970 | 0.952 | 0.930 | 0.903 | 0.869 | 0.824 | 0.763 | 0.664 | 0.582 | 0.844 |
| | NAHAT | 0.988 | 0.973 | 0.955 | 0.932 | 0.905 | 0.870 | 0.824 | 0.761 | 0.665 | 0.588 | 0.846 |
| | DAMRmF | 0.975 | 0.958 | 0.939 | 0.917 | 0.890 | 0.857 | 0.813 | 0.754 | 0.659 | 0.574 | 0.834 |
| | SeConvNet | **0.993** | **0.987** | **0.979** | **0.966** | **0.946** | **0.936** | **0.907** | **0.875** | **0.804** | **0.742** | **0.913** |
test images, BSD68, and CBSD68 datasets, respectively. Specifically, at the very high noise density of 95%, while NAHAT denoises the images in the 20 traditional test images, BSD68, and CBSD68 datasets with a PSNR/SSIM performance of about 23.8 dB/0.677, 22.3 dB/0.588, and 22.1 dB/0.735, SeConvNet drastically boosts the PSNR/SSIM on these datasets to 25.9 dB/0.833, 23.2 dB/0.742, and 24.4 dB/0.794. Aside from SeConvNet and NAHAT, IAWMF offers the best performance among the remaining methods on these datasets.
To show the SAP denoising efficiency of SeConvNet at a very high noise density, we choose three gray-scale and three color images and illustrate the visual results of the various methods on their corrupted versions with 95% SAP noise in Figures 4 to 9. Based on the results shown, it is evident that ARmF, ACmF, IAWMF, and DAMRmF fail to recover the edges and details and produce artifacts along the edges. The NAHAT method blurs images and generates overly smooth
Table 4: Results of SSIM for different methods at different noise densities on the CBSD68 color dataset.

| Dataset | Method | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 95% | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CBSD68 | ADKIF | 0.984 | 0.973 | 0.960 | 0.944 | 0.924 | 0.897 | 0.859 | 0.809 | 0.741 | 0.685 | 0.878 |
| | NLSF-MLP | 0.944 | 0.898 | 0.861 | 0.831 | 0.804 | 0.777 | 0.803 | 0.736 | 0.626 | 0.579 | 0.786 |
| | NLSF-CNN | 0.978 | 0.931 | 0.893 | 0.883 | 0.855 | 0.825 | 0.824 | 0.755 | 0.642 | 0.594 | 0.818 |
| | ARmF | 0.985 | 0.975 | 0.964 | 0.950 | 0.933 | 0.910 | 0.879 | 0.836 | 0.762 | 0.692 | 0.889 |
| | ACmF | 0.984 | 0.973 | 0.960 | 0.946 | 0.928 | 0.905 | 0.875 | 0.832 | 0.760 | 0.692 | 0.885 |
| | IAWMF | 0.987 | 0.978 | 0.967 | 0.953 | 0.936 | 0.915 | 0.887 | 0.846 | 0.777 | 0.713 | 0.896 |
| | NAHAT | 0.990 | 0.981 | 0.970 | 0.956 | 0.940 | 0.919 | 0.891 | 0.852 | 0.789 | 0.735 | 0.902 |
| | DAMRmF | 0.970 | 0.959 | 0.948 | 0.935 | 0.920 | 0.900 | 0.875 | 0.838 | 0.773 | 0.705 | 0.882 |
| | SeConvNet | **0.997** | **0.995** | **0.992** | **0.989** | **0.976** | **0.971** | **0.959** | **0.932** | **0.869** | **0.794** | **0.947** |
Figure 4: Restoration results of different methods for the Peppers image with \(95\%\) SAP noise. (a) Original image (b) Noisy image (c) ARmF (d) AcmF (e) IAWMF (f) NAHAT (g) DAMRmF (h) SeConvNet.
features and edges. However, SeConvNet not only preserves fine details and sharp edges but also provides impressive visual results in smooth regions. For instance, it is clearly seen that while the other methods fail to recover the eye of the parrot and its surrounding area in figure 5, SeConvNet effectively preserves fine details and sharp edges, as well as textures. Also, in figure 6 the number 96 on the white racing car is rendered entirely indistinct by the other SAP denoising methods, whereas it remains almost clearly visible with SeConvNet. Moreover, in the case of color SAP denoising, ARmF, ACmF, IAWMF, NAHAT, and DAMRmF tend to introduce false color artifacts in addition to losing edges and
Figure 5: Restoration results of different methods for the Parrot image with \(95\%\) SAP noise. (a) Original image (b) Noisy image (c) ARmF (d) AcmF (e) IAWMF (f) NAHAT (g) DAMRmF (h) SeConvNet.
Figure 6: Restoration results of different methods for one image from the CBSD68 dataset with \(95\%\) SAP noise. (a) Original image (b) Noisy image (c) ARmF (d) AcmF (e) IAWMF (f) NAHAT (g) DAMRmF (h) SeConvNet.
details. These false-color artifacts are clearly apparent in figure 7, whilst SeConvNet can recover high-quality visual results without any false-color artifacts.
For a better visual comparison of the obtained results, figures 10, 11, and 12 demonstrate line graphs of the average PSNR (dB) and SSIM versus noise density for the 20 traditional test images, BSD68, and CBSD68 datasets, respectively. As can be seen, the proposed SeConvNet model consistently surpasses the other state-of-the-art methods by a considerable margin.
Figure 8: Restoration results of different methods for one image from the BSD68 dataset with \(95\%\) SAP noise. (a) Original image (b) Noisy image (c) ARmF (d) ACmF (e) IAWMF (f) NAHAT (g) DAMRnF (h) SeConvNet.
Figure 7: Restoration results of different methods for one image from the CBSD68 dataset with \(95\%\) SAP noise. (a) Original image (b) Noisy image (c) ARmF (d) ACmF (e) IAWMF (f) NAHAT (g) DAMRnF (h) SeConvNet.
## 4 Conclusion
A new CNN model, namely SeConvNet, is proposed in this paper to denoise both gray-scale and color images corrupted by SAP noise, especially at high and very high noise densities. To meet this purpose, we introduced a new selective convolutional (SeConv) block for the beginning part of the network, which computes an initial estimate of the noisy pixels by considering only non-noisy pixels. In the following layers of the network, conventional convolutional layers, as well as batch normalization (BN) and rectified linear units (ReLU), are employed. The denoising performance of SeConvNet is compared with eight state-of-the-art methods in terms of the PSNR and SSIM criteria on different gray-scale and color datasets, including 20 traditional test images and the gray-scale and color versions of 68 images from the Berkeley segmentation dataset (BSD68 and CBSD68). The results indicate that SeConvNet not only surpasses all counterparts by a significant margin (particularly at high noise densities) in quantitative denoising performance but also produces favorable restored images, preserving fine details and sharp edges along with providing impressive visual results in smooth regions.
Figure 9: Restoration results of different methods for one image from the CBSD68 dataset with \(95\%\) SAP noise. (a) Original image (b) Noisy image (c) ARmF (d) AcmF (e) IAWMF (f) NAHAT (g) DAMRmF (h) SeConvNet. |
2307.06084 | Neuromorphic analog circuits for robust on-chip always-on learning in
spiking neural networks | Mixed-signal neuromorphic systems represent a promising solution for solving
extreme-edge computing tasks without relying on external computing resources.
Their spiking neural network circuits are optimized for processing sensory data
on-line in continuous-time. However, their low precision and high variability
can severely limit their performance. To address this issue and improve their
robustness to inhomogeneities and noise in both their internal state variables
and external input signals, we designed on-chip learning circuits with
short-term analog dynamics and long-term tristate discretization mechanisms. An
additional hysteretic stop-learning mechanism is included to improve stability
and automatically disable weight updates when necessary, to enable continuous
always-on learning. We designed a spiking neural network with these learning
circuits in a prototype chip using a 180 nm CMOS technology. Simulation and
silicon measurement results from the prototype chip are presented. These
circuits enable the construction of large-scale spiking neural networks with
online learning capabilities for real-world edge computing tasks. | Arianna Rubino, Matteo Cartiglia, Melika Payvand, Giacomo Indiveri | 2023-07-12T11:14:25Z | http://arxiv.org/abs/2307.06084v1 | # Neuromorphic analog circuits for robust on-chip always-on learning in spiking neural networks
###### Abstract
Mixed-signal neuromorphic systems represent a promising solution for solving extreme-edge computing tasks without relying on external computing resources. Their spiking neural network circuits are optimized for processing sensory data on-line in continuous-time. However, their low precision and high variability can severely limit their performance. To address this issue and improve their robustness to inhomogeneities and noise in both their internal state variables and external input signals, we designed on-chip learning circuits with short-term analog dynamics and long-term tristate discretization mechanisms. An additional hysteretic stop-learning mechanism is included to improve stability and automatically disable weight updates when necessary, to enable continuous always-on learning. We designed a spiking neural network with these learning circuits in a prototype chip using a \(180\,\mathrm{nm}\) CMOS technology. Simulation and silicon measurement results from the prototype chip are presented. These circuits enable the construction of large-scale spiking neural networks with online learning capabilities for real-world edge computing tasks.
always-on learning, edge computing, on-chip online learning, SNN, hysteresis, tristability.
## I Introduction
The requirements of artificial intelligence (AI) systems operating at the edge are similar to those that living organisms face to function in daily life. They need to measure sensory signals in real-time, perform closed-loop interactions with their surroundings, be energy-efficient, and continuously adapt to changes in the environment and in their own internal state. These requisites are well supported by neuromorphic systems and emerging memory technologies that implement brain-inspired mixed-signal spiking neural network (SNN) architectures [1, 2, 3, 4, 5]. These types of SNNs operate in a data-driven manner, with an event-based representation that is typically sparse in both space and time. Since they compute only when data is present, they are very power efficient. Similar to the biological neural systems they model, these SNNs are particularly well-suited to processing real-world signals. They can be designed to operate at the same data-rate of the input streams in real-time by matching the time constants of neural computation with those of the incoming signal dynamics. However, similar to their biological counterparts, these systems are affected by a high degree of variability and sensitivity to noise. One of the most effective strategies that biology uses to cope with noise and variability is to utilize adaptation and plasticity. This strategy has also been adopted by the neuromorphic community: several on-chip implementations of spike-based learning circuits have been proposed in the past [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. However, few have addressed the problem of being able to operate robustly and autonomously in continuous time, with the ability to switch automatically and reliably between learning and inference modes. Following the original neuromorphic engineering approach [18], we propose a set of analog circuits that faithfully emulate synaptic plasticity mechanisms observed in pyramidal cells of cortical circuits and implement complex spike-based learning and state-dependent mechanisms that support this functionality. In addition, we extend the concept of long-term bi-stability of synaptic weights, proposed to increase robustness to noise and variability in the input signals [14, 19], to a tristate stability and weight discretization circuit that increases the resolution of the (stable and crystallized) synaptic weights. The synaptic plasticity circuits update an internal state variable of the synapse on every pre-synaptic input spike. The change in this state variable is computed in continuous time by the soma block of the neuron. In parallel, depending on its value, the internal variable is driven to one of three possible stable states and converted to a discrete three-state synaptic weight current value. The post-synaptic learning circuits comprise an additional mechanism that gates
Fig. 1: Micrograph of the prototype chip comprising of the learning circuits in a network of 4 neurons with 64 synapses each (see highlighted area). The chip, comprising of additional test structures, measures \(3\times 5\,\mathrm{mm}^{2}\).
the weight changes, to stop the learning process when the neuron's mean firing rate is outside a defined learning window. The circuits were fabricated on a prototype SNN chip designed in a 180 nm 6M1P CMOS technology and tested within a network of 4 neurons with 64 synapses each (see Fig. 1). In the following sections we describe the main building blocks used at both the synapse and neuron level, demonstrate their expected behavior with circuit simulations, and provide experimental results measured from the chip.
## II Network architecture
The block diagram of each neuron in the network is shown in Fig. 2. Input digital events (\(\mathsf{x_{0-N}}\)) arrive at the individual synapses via asynchronous logic [2] and trigger local weight update circuits to induce a change in the voltage stored on a local capacitor by an amount determined by the post-synaptic learning circuits. In parallel, a tristate stability circuit drives this internal voltage to one of three possible stable states. This local internal voltage is then discretized and converted to a low, intermediate or high current value. All currents produced by all synapses are summed spatially and conveyed to a differential pair integrator (DPI), which integrates the weighted sum over time [20]. A parallel and analogous pathway receives input events representing a desired target signal (\(\mathsf{x_{t}}\)), and produces a corresponding current from its dedicated DPI circuit. The target and input currents are both summed to drive the neuron's postsynaptic Integrate & Fire (I&F) circuit [21], and subtracted to drive the soma's Delta rule circuit [22]. The Delta rule circuit produces either positive or negative weight update signals proportional to the difference between the target input and the weighted synaptic input. These signals are broadcast to all the neuron's input synapses in continuous time if learning is enabled. Learning is enabled (or disabled) by means of two hysteretic Winner-Take-All (hWTA) circuits [23] that compare the neuron's mean output firing rate to a low and a high threshold (see Section III-B for details).
## III Circuits
As the details of the Delta rule and I&F circuits have already been presented [11, 21, 22], we describe the synapse learning circuits and the soma hWTA circuit.
### _Plastic synapse circuit_
Figure 3 presents all of the learning circuits used at the synaptic level. With every pre-synaptic spike, the weight update circuit (Fig. 3(a)) increases or decreases the internal analog weight variable \(\mathsf{V_{w}}\) by an amount determined by the voltages \(\mathsf{V_{UP}}\) and \(\mathsf{V_{DN}}\), produced by the post-synaptic Delta rule circuits. The tri-stability supply voltage circuit (Fig. 3(b)) produces the biases that power either of the positive feedback amplifiers of Fig. 3(c), depending on the state of \(\mathsf{V_{w}}\) with respect to \(\mathsf{V_{dd}}/2\). The tri-stability circuit (Fig. 3(c)) consists of two slew-rate limited positive feedback amplifiers which slowly drive \(\mathsf{V_{w}}\) towards ground, \(\mathsf{V_{dd}}/2\), or \(\mathsf{V_{dd}}\) depending on the value of \(\mathsf{V_{w}}\) relative
Fig. 3: Synapse learning circuits. (a) The weight update circuit increases or decreases the internal analog weight variable \(\mathsf{V_{w}}\) with every input spike \(\mathsf{V_{PRE}}\); (b) The tri-stability supply voltage circuit determines which of the two amplifiers in (c) to power depending on the value of \(\mathsf{V_{w}}\) with respect to \(\mathsf{V_{dd}}/2\) by activating either \(\mathsf{V_{DH}}\) or \(\mathsf{V_{DL}}\). (c) The tristability circuit drives the \(\mathsf{V_{w}}\) voltage towards ground, \(\mathsf{V_{dd}}/2\) or \(\mathsf{V_{dd}}\) depending on its value relative to \(\mathsf{V_{THH}}\) and \(\mathsf{V_{THL}}\). (d) The current discretization circuit converts \(\mathsf{V_{w}}\) into a low (the leakage current \(\mathsf{I_{0}}\)), intermediate, or high current.
Fig. 2: Block diagram of a single neuron row. The plastic synapses (in red) consist of input logic, a weight update, tristate stability, and a current ADC block. Input spikes arriving at a synapse update the internal weight variable \(\mathsf{V_{w}}\), and change it by an amount that is determined by the post-synaptic learning circuits at the soma (in green). The weight voltage is slowly driven to one of three possible stable states, and converted into a synaptic current by thresholding circuits. A parallel pathway provides a target current (in blue). Both input and target currents are integrated over time by differential pair integrator (DPI) circuits. The soma (in green), comprises of an Integrate & Fire (I&F) block which integrates the sum of target and input currents, a Delta rule block that calculates the difference of the two DPI currents to determine the amplitude of the weight change, and a hysteretic Winner-Take-All (hWTA) block used to determine if and when to “stop-learning”.
to \(\mathsf{V_{THH}}\) and \(\mathsf{V_{THL}}\). The weight discretization circuit (Fig. 3(d)) sets the value of the effective synaptic current \(\mathsf{l_{w}}\) to \(\mathsf{l_{0}}\), \(\mathsf{l_{wb}}\), or \(2\mathsf{l_{wb}}\) depending on the state of \(\mathsf{V_{w}}\) with respect to \(\mathsf{V_{THL}}\) and \(\mathsf{V_{THH}}\).
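The input/output behavior of this discretization stage can be summarized with a small behavioral model (a sketch of the circuit's function, not of its transistor-level implementation):

```python
def synapse_current(v_w, v_thl, v_thh, i_0, i_wb):
    """Behavioral model of the weight discretization stage (Fig. 3(d)):
    the internal weight voltage V_w is mapped to one of three current levels."""
    if v_w < v_thl:
        return i_0       # low state: only the leakage current I_0
    if v_w < v_thh:
        return i_wb      # intermediate state
    return 2 * i_wb      # high state
```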
### _Hysteretic WTA for "stop-learning"_
Figure 4 shows an instance of a hWTA circuit: it consists of two identical cells, (\(M_{2}\)-\(M_{6}\)) and (\(M_{7}\)-\(M_{11}\)), that compete with each other. As soon as one cell wins (e.g., the left one), the bias current \(\mathsf{l_{bh}}\) is copied and added to the input current of the winning branch (e.g., \(\mathsf{l_{L}}\)). This creates a hysteresis window, such that for the winning (left) cell to lose the competition, its input current has to decrease below the input current of the opposite branch by an additional factor equal to the bias current (\(\mathsf{l_{L}}<\mathsf{l_{R}}-\mathsf{l_{bh}}\)). The output voltage of this circuit \(\mathsf{V_{OUT}}\) switches to "high" when the left cell wins, and to "low" when the right cell becomes the winner.
To implement the "stop-learning" mechanism [14], we produce a current \(\mathsf{l_{Ca}}\) (a surrogate of the neuron's calcium concentration) by integrating the post-synaptic neuron spikes with a DPI circuit [20]. We then compare this current to two thresholds with the two hWTA circuits. The digital output nodes of the two hWTA circuits were connected to logic gates to produce an active high \(\mathsf{Learn}\) signal when the \(\mathsf{l_{Ca}}\) current is within the set bounds (i.e., within the learning region) and a low when it is outside this region. This \(\mathsf{Learn}\) signal is then used as a "third factor" to enable or disable the Delta rule weight circuit, and switch on or off the weight updates. The hysteresis windows of the hWTA circuits are used to distinguish between cases in which the target input is present (to enable learning) or absent (to disable learning and automatically switch to an "inference" mode). The effect of this window is described in Section IV-A.
## IV Results
We validate the learning circuits with both circuit simulations and experimental results measured from the fabricated chip.
### _Circuit simulation results_
Here, we show simulations of a single neuron and 40 plastic synapses during a learning task and show how the hWTA enables automatic switching from learning to inference.
After initializing all synaptic weights to zero, we started a training phase by stimulating each plastic synapse with a \(25\,\mathrm{Hz}\) input spike train, and by sending a spike train with a \(1\,\mathrm{kHz}\) frequency to the target synapse. As expected, during this training phase, the weights of the synapses potentiated and the total weighted synaptic input current increased (see the red trace of Fig. 5). During the inference phase, we removed the target input spike train while keeping on stimulating the input synapses. As expected, without this extra input, the average
Fig. 4: hysteresis WTA circuit used to determine the “stop-learning” signals. The digital output voltage \(\mathsf{V_{OUT}}\) switches from low to high only if the left input current \(\mathsf{l_{L}}\) increases above \(\mathsf{l_{R}}\) by an amount at least equal to \(\mathsf{l_{bh}}\).
Fig. 5: Row simulation results. In the top plots of both sub-figures, the DPI synapse current (in red) follows the DPI target current (in blue) when the target input is high. The neuron’s membrane activity (in grey) is scaled for visibility. In the lower plots of both sub-figures the calcium current (in black) enables the \(\mathsf{Learn}\) signal when it lies between the learning region’s low (green line) and high (red line) thresholds. (a) Small hysteresis bias (\(\mathsf{l_{bh}}=100\,\mathrm{pA}\)): after the target current is removed the calcium current drops back into the learning region, the weights are decreased, and the neuron forgets its tuning (b) Large hysteresis bias (\(\mathsf{l_{bh}}=800\,\mathrm{pA}\)): the large hysteresis region around the highest threshold (shaded pink) keeps the learning disabled, despite the fact that the calcium current fell below the high threshold. The weights of the plastic synapses do not change and the neuron maintains its proper tuning to the trained pattern.
mean firing rate of the neuron decreased, and the calcium concentration current fell below the upper bound of the learning region. Figure 5 shows this task performed with two values of \(\mathsf{l_{bh}}\), which governs the width of the hysteresis window. Without a proper hysteresis window (Fig. 5(a)), when the neuron falls back into a learning region it "forgets" its training (i.e., the learning circuits decrease the weights). On the other hand, by properly tuning the hysteresis window (Fig. 5(b)), the network remains in "stop-learning" mode, and the neuron retains a high output firing rate in response to the trained pattern, even in the absence of a target signal. In the larger hysteresis window case, the total estimated power consumption is \(1.07\,\mu\mathrm{W}\), and a maximum (mean) energy of \(740\,\mathrm{pJ}\) (\(680\,\mathrm{pJ}\)) is required to update the weights.
### _Chip measurement results_
#### IV-B1 Tristability
The results from the measurements of the plastic synapse circuits are shown in Fig. 6. Initially, the neuron is presented with a high target activity, triggering large positive weight updates and causing a rapid increase in the synapse weight internal variable. Upon removal of the target, the weight is decreased. By increasing the power to the tristate stability amplifiers (Fig. 3(c)), i.e., by increasing \(\mathsf{b_{bs}}\) of Fig. 3(b), the circuit opposes the weight changes more strongly and drives \(\mathsf{V_{w}}\) to one of the three stable states more quickly. Stability at \(\mathsf{V_{dd}}\) and ground is shown in Fig. 6(a) and at \(\mathsf{V_{dd}}/2\) in Fig. 6(b). Once the stimulation ends, the tristability circuit crystallizes the weight into one of the three stable states depending on the value of \(\mathsf{b_{bs}}\).
#### IV-B2 Hysteresis for "stop-learning"
Figure 7 shows the results of the characterization of the hysteretic calcium-based stop-learning mechanism. Similarly to the previous experiment, the neuron is initially stimulated with a high target activity, bringing it into the learning region. The plastic synapse weight rapidly increases, pushing the neuron into the "stop-learning" region. Once the target activity is removed, the neuron returns to the learning region, and, for small hysteresis window settings (top blue plot in Fig. 7), the plastic synapse decreases its weight as it is stimulated. For higher values of \(\mathsf{l_{bh}}\) the hysteresis window widens (orange plot in Fig. 7), and when the target is removed, the neuron's return to the learning mode is delayed. As this delay increases, even though the plastic synapse keeps being stimulated, the neuron remains in the "stop-learning" region and the weight remains unchanged (purple plot in Fig. 7).
## V Conclusions
We presented a set of analog circuits that enable learning in mixed-signal neuromorphic SNNs, with tristate stability and weight discretization circuits. By comparing the neuron's calcium concentration to lower and upper bounds, and by using hysteresis, we demonstrated effective always-on learning features that automatically switch from learning mode to inference mode, without having to manually enable or disable learning. Comparisons to previous efforts are provided in Table I.
## Acknowledgment
The authors thank Shyam Narayanan, Charlotte Frenkel, and Junren Chen for fruitful discussions and contributions.
TABLE I: Comparison to the state-of-the-art

| | [24] | [25] | [17] | [11] | This work |
|---|---|---|---|---|---|
| Technology | 28 nm | 14 nm | 180 nm | 180 nm | 180 nm |
| Design | digital | digital | mixed-signal | mixed-signal | mixed-signal |
| Learning | semi-supervised | programmable | supervised, error-based | error-based | error-based |
| Stop learning | yes | no | yes | yes | yes |
| Weight resolution | – | – | – | – | tristate |
| Energy/SOP | 22.7 pJ | – | – | – | – |
| Power supply | 0.55 V | 0.75 V | – | – | – |
Fig. 6: Tristability chip measurements. (a) Stability at \(\mathsf{V_{dd}}\) and ground: When the target is presented, the plastic synapse weight is increased by the post-synaptic learning circuits (\(\overline{\mathsf{V_{up}}}\) and \(\mathsf{V_{dn}}\) scaled for visibility). As the tristability bias increases, the circuit opposes the weight update more strongly. When the target is removed, the tristability maintains the weight value around \(\mathsf{V_{dd}}\) for larger values of \(\mathsf{b_{bs}}\) (in orange and purple). (b) Stability at \(\mathsf{V_{dd}}\)/2: as \(\mathsf{b_{bs}}\) increases the tristability opposes the learning with more strength.
Fig. 7: Hysteresis chip measurements. The hysteresis window is shown for three different values of \(\mathsf{l_{bh}}\). Increases in \(\mathsf{l_{bh}}\) produce larger hysteresis windows, which are useful for tuning the "stop-learning" properties of the network. |
2308.15899 | Beyond Traditional Neural Networks: Toward adding Reasoning and Learning
Capabilities through Computational Logic Techniques | Deep Learning (DL) models have become popular for solving complex problems,
but they have limitations such as the need for high-quality training data, lack
of transparency, and robustness issues. Neuro-Symbolic AI has emerged as a
promising approach combining the strengths of neural networks and symbolic
reasoning. Symbolic knowledge injection (SKI) techniques are a popular method
to incorporate symbolic knowledge into sub-symbolic systems. This work proposes
solutions to improve the knowledge injection process and integrate elements of
ML and logic into multi-agent systems (MAS). | Andrea Rafanelli | 2023-08-30T09:09:42Z | http://arxiv.org/abs/2308.15899v1 | Beyond Traditional Neural Networks: Toward adding Reasoning and Learning Capabilities through Computational Logic Techniques
###### Abstract
Deep Learning (DL) models have become popular for solving complex problems, but they have limitations such as the need for high-quality training data, lack of transparency, and robustness issues. Neuro-Symbolic AI has emerged as a promising approach combining the strengths of neural networks and symbolic reasoning. Symbolic Knowledge Injection (SKI) techniques are a popular method to incorporate symbolic knowledge into sub-symbolic systems. This work proposes solutions to improve the knowledge injection process and integrate elements of ML and logic into multi-agent systems (MAS).
## 1 Introduction and Problem Description
Deep Learning (DL) models have gained popularity for addressing complex problems in various domains. However, they face challenges. Insufficient or biased training data can result in poor model performance and limited generalization. DL models rely heavily on statistical modeling, which can restrict their ability to reason about concepts not explicitly represented in the data. This limitation hampers their capacity to understand the world and draw accurate conclusions from available information. To address these issues, Neuro-Symbolic AI has emerged, which seeks to combine the strengths of neural networks and symbolic reasoning to overcome the limitations of purely statistical models. There are various techniques for integrating neural and symbolic systems (cf., e.g., [7, 13, 22]). This work focuses mainly on methods that aim to combine these two systems by injecting symbolic knowledge into neural networks, known as Symbolic Knowledge Injection (SKI) techniques. Additionally, we propose the creation of collaborative networks that combine the strengths of logic-based agents and neural agents within a Neuro-Symbolic architecture, creating a more comprehensive and efficient system. This approach offers a unique synergy, paving the way for more advanced and intelligent systems.
## 2 Background and Existing Literature
In this section, we will provide a quick overview of the concepts and methods considered in our work.
**Symbolic Knowledge Injection.** SKI is a strategy used to improve the performance of sub-symbolic predictors, such as neural networks, by integrating useful symbolic knowledge into them. It has several benefits, such as alleviating the problem of insufficient training data, reducing the required time
and computational resources for learning, increasing the predictive accuracy of sub-symbolic predictors, and providing human-interpretable frameworks. There are several ways the knowledge can be injected into predictors, mainly: _(i) loss constraining_[4, 14, 29], i.e., the inclusion of penalty terms in the loss function. Penalty terms are indicative of the limitations imposed by prior knowledge; _(ii) structure constraining_[5, 20, 21], i.e., the alteration of the structure of the sub-symbolic model to reflect symbolic knowledge expressed as constraints, e.g. by changing the number and size of hidden layers or by setting the connections between neurons; _(iii) knowledge embedding_[6, 8, 28], i.e., incorporating additional domain-specific knowledge by generating numeric data from the symbolic ones.
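As an illustration of the first strategy, the following is a minimal PyTorch sketch of loss constraining, in which a penalty term derived from a symbolic rule is added to the task loss. The rule, the model, and the weighting factor are all hypothetical choices for illustration, not a prescription from the SKI literature.

```python
import torch
import torch.nn as nn

# Toy classifier: 4 input features -> 2 classes.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
task_loss = nn.CrossEntropyLoss()

def rule_penalty(logits, x):
    # Hypothetical symbolic rule: "if feature 0 is positive, the instance
    # should not be class 0". Penalise probability mass that violates it.
    probs = torch.softmax(logits, dim=1)
    violates = (x[:, 0] > 0).float()
    return (violates * probs[:, 0]).mean()

x = torch.randn(32, 4)          # a random mini-batch
y = torch.randint(0, 2, (32,))  # random labels

logits = model(x)
loss = task_loss(logits, y) + 0.5 * rule_penalty(logits, x)  # lambda = 0.5
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Structure constraining and knowledge embedding would instead act on the network architecture or on the input representation, respectively, rather than on the loss.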
**Abductive Logic Programming.** ALP is a programming paradigm [17] that combines logic programming with abductive reasoning [19] to generate hypotheses (or abducibles) that explain certain observations. Abduction is formally described [21] as a framework where, given a knowledge base \(KB\) and observations \(O\), one seeks to find a hypothesis \(H\) s.t. \(KB\cup H\models O\).
ALP is a version of traditional logic programming in which the reasoner generates abductive hypotheses by assuming the truth of abducibles (a set of open predicates) given some integrity constraints. Integrity constraints are logical formulas of the form \(:\)-\(Body.\), where \(Body\) is a conjunction of logical literals. Intuitively, an integrity constraint specifies a set of conditions that must not hold for any valid solution to the problem. According to [18], an abductive framework is a triple \(<P,A,I_{c}>\), where \(P\) is a collection of clauses, \(A\) is a set of predicate symbols, called _abducibles_, and \(I_{c}\) is a set of closed formulae. Clauses in \(P\) are of the form \(H\gets L_{1},L_{2},..,L_{k},\;\;k\geq 0\), where \(H\) is an atom with a predicate symbol not present in \(A\) and each \(L_{i}\) is a literal. Therefore, \(P\) is a general logic program with the limitation that predicates present in \(A\) do not have definitions in \(P\). Here, the idea is to link to an abductive framework a collection of models, commonly called generalized stable models, or GSMs [15], that are assumed to reflect the framework itself and may be used to characterise abduction within this framework. Hence, an observation has an abductive explanation if it is consistent with at least one of these GSM models. Let \(<P,A,I_{c}>\) represent an abductive framework and \(O\) an observation. Then \(O\) is explained abductively given a set of hypotheses \(\Delta\) iff there exists a GSM, \(M(\Delta)\), s.t. \(M(\Delta)\models O\). In general, the main aim of abduction, given an observation \(O\), is to find a set of _abductive explanations_ \(\Delta\subseteq A\) s.t. _(i)_ \(M(\Delta)\models O\), and _(ii)_ \(M(\Delta)\models I_{c}\).
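For intuition, here is a minimal propositional sketch of this search, assuming a toy rule base and integrity constraint (all atoms and rules are hypothetical). It enumerates candidate hypothesis sets \(\Delta\subseteq A\) and keeps those whose forward-chaining closure entails the observation without violating any constraint.

```python
from itertools import chain, combinations

# Toy propositional abductive framework <P, A, Ic> (all atoms hypothetical).
P = {"wet_grass": [{"rain"}, {"sprinkler"}]}   # head -> list of rule bodies
A = {"rain", "sprinkler"}                      # abducible atoms
Ic = [{"rain", "sprinkler"}]                   # :- rain, sprinkler.

def closure(facts):
    """Forward-chain the rules in P to a fixpoint from a set of facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, bodies in P.items():
            if head not in derived and any(body <= derived for body in bodies):
                derived.add(head)
                changed = True
    return derived

def explanations(observation):
    """Yield every Delta subset of A whose model entails O and satisfies Ic."""
    candidates = chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))
    for delta in map(set, candidates):
        model = closure(delta)
        if observation in model and not any(c <= model for c in Ic):
            yield delta

print(list(explanations("wet_grass")))  # e.g. [{'rain'}, {'sprinkler'}]
```

Real ALP systems use far more refined proof procedures, but the structure of the problem — search over abducibles, entailment check, constraint check — is the same.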
**DALI.** DALI [8, 9] is a programming language developed to construct logical agents. It is an extension of the Prolog language, and any Prolog program can be considered a DALI agent. The agent orientation is provided through events as first-class objects in the language, as well as two types of rules: _reactive rules_ and _proactive rules_. Reactive rules permit agents to interact with and respond to their surroundings. These rules enable the agents to react to external events. _External events_ are triggered by the environment of the agent and are denoted by the postfix \(\mathbf{E}\). An agent may choose to react to an external event by employing a reactive rule containing the event in its head. Proactive rules, on the other hand, are used by DALI agents to take initiative and start activities when they deem it suitable. Using _internal events_, which are denoted by the postfix \(\mathbf{I}\), agents can act independently of their environment and other agents. DALI agents use _actions_ to influence their surroundings, potentially in response to external or internal events. Actions may have pre-conditions, which are specified in the action rule; otherwise, they are just an action atom denoted with the postfix \(\mathbf{A}\). Agents can communicate with and affect their environment through actions.
## 3 Goal of the Research
This work aims to address challenges in the field of DL models by exploring the integration of neural models and computational logic. It will focus on two main tools: SKI and MAS. The main goals are: _(i)_ **RG 1**: use abduction as a way of integrating targeted knowledge into the injection process; _(ii)_ **RG 2**: develop metrics to assess the goodness-of-fit of injection mechanisms from several perspectives, both qualitative and efficiency-related; _(iii)_ **RG 3**: formalise an integrated MAS in which one or more agents have purely perceptual tasks and others purely logical tasks.
**RG 1** aims to use abduction to improve the SKI process. In supervised learning, the observation comprises input-output pairs encoded as a model \(f(\mathcal{X})\rightarrow\mathcal{Y}\). When using SKI, we want to modify \(f(\cdot)\) to align it with symbolic knowledge. However, selecting the right rules for each output becomes challenging as the rule count increases, causing possible scalability issues. One solution is to leverage abduction, which helps identify plausible rules for a given output. Here, abduction can be seen as a rule \(Y\) :- \(A\), where \(Y\) is the observation (ground truth) and \(A\) are the abducibles generated by the reasoner. The injection procedure can then be modified to incorporate abducibles, making the predictions of \(f(\cdot)\) consistent with the provided abducibles.
**RG 2** aims to investigate different metrics to gain a deeper understanding of the potential benefits and limitations of SKI from multiple perspectives. In the field of SKI, the effectiveness of the injection procedure is often measured by comparing the performance of the SKI predictor with its counterpart (non-injected predictor), using metrics such as accuracy, F1 score or MSE. However, these metrics may not fully capture all aspects of knowledge injection, such as whether such injection resulted in a more sustainable predictor in terms of allocated resources or a more robust predictor, and so on. Therefore, **RG 2** seeks to study the injection process through a broader lens, including factors such as the sustainability and robustness of the injection procedure.
Finally, the aim of **RG 3** is to create a MAS, in which ML and logic-based elements can collaborate and interact with each other. In practice, given a neuro-symbolic architecture, the main objective is to allocate the different modules of the architecture to different collaborating agents. The integration of ML and logic-based elements into MAS could potentially lead to more efficient and effective decision-making processes.
**Results.** We provided a rough idea of **RG 1** in a conference position paper [25], in which we proposed the incorporation of abductive reasoning through two possible frameworks. The idea is applied to image segmentation to address issues of data scarcity and low robustness. In this proposal, we suggest the use of either ABL [12] or a combination of ABL and knowledge injection. The article aims to indicate possible uses of abduction within the realm of Neuro-Symbolic integration.
We presented a preliminary idea of **RG 2** in [3], where we provide the first, to the best of our knowledge, set of Quality-of-Service (QoS) metrics for SKI, with a focus on quantifying the robustness and efficiency advantages owing to injection. In this paper, we offer an initial formulation for the following metrics: _(i) memory footprint efficiency_, i.e., the gain in model complexity; _(ii) energy efficiency_, i.e., the gain in total energy required to train and deploy a sub-symbolic model; _(iii) latency efficiency_, i.e., improvements in terms of the time required for inference; _(iv) data efficiency_, i.e., the improvement in terms of the amount of data required to optimise a sub-symbolic model; _(v) robustness_, i.e., the capability of the injection mechanism to adapt to variations of input data and knowledge; _(vi) comprehensibility_, i.e., the capability of the injection mechanism to produce more intelligible models. In addition, this work discusses the potential impact of these metrics within the field of MAS, considering that inefficiency in the sub-symbolic models incorporated in the agents affects their functioning in multiple ways (e.g., energy inefficiency, data inefficiency, computational inefficiency, and so on).
In a subsequent paper, [2], we extended and enhanced **RG 2** by providing a more rigorous mathematical definition and implementation of the following metrics: memory footprint, energy efficiency, latency efficiency, and data efficiency. The software tools necessary for the practical use of these metrics were also provided, and the metrics were evaluated on three different datasets using three different injection models2. The results provide valuable insights into the performance of different injection predictors on various datasets, highlighting the importance of adopting specific metrics when evaluating them. The paper emphasises that the use of these metrics in the context of MAS could be a crucial tool for comparing different predictors and selecting the most suitable one for a given task.
Footnote 2: The repository containing the experiments can be found at the following link: [https://github.com/pikalab-unibo/ski-qos-jaamas-experiments-2022](https://github.com/pikalab-unibo/ski-qos-jaamas-experiments-2022)
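The rigorous definitions and tooling live in the cited papers and repository; purely as a rough illustration of what the memory-footprint and latency metrics quantify, one could compare a hypothetical injected predictor against its uninjected counterpart along these lines (model sizes are invented for the example):

```python
import time
import torch
import torch.nn as nn

def memory_footprint(model):
    # Bytes needed to store the parameters: a crude proxy for metric (i).
    return sum(p.numel() * p.element_size() for p in model.parameters())

def mean_latency(model, x, runs=100):
    # Mean inference time per batch in seconds: a crude proxy for metric (iii).
    model.eval()
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs

# Hypothetical predictors for the same task: injection often allows a
# smaller network to reach comparable accuracy.
uninjected = nn.Sequential(nn.Linear(4, 256), nn.ReLU(), nn.Linear(256, 2))
injected = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

x = torch.randn(64, 4)
for name, m in [("uninjected", uninjected), ("injected", injected)]:
    print(f"{name}: {memory_footprint(m)} bytes, {mean_latency(m, x):.2e} s/batch")
```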
In [23], an initial approach to symbolic and sub-symbolic integration within a MAS was proposed. This paper was later extended in [24]. These two works are a first step towards **RG 3**. In [23], we present the potential capabilities of an integrated system consisting of logical agents and a neural network specialized in monitoring flood events for civil protection purposes. The paper describes a framework composed of a group of intelligent agents performing various tasks and communicating with each other to efficiently generate alerts during flood crisis events. In [24], the work is extended, and an initial implementation of the framework is provided. The paper presents a preliminary prototype of a MAS that autonomously collects weather warnings, categorises related images using a DL module, filters the results, and alerts human operators only if there is reasonable certainty that a risk situation is occurring. To this end, we implemented a system3 using a combination of logical agents and a DL component. The neural network is trained on eight classes of topographic entities to segment the images. Once the images have been segmented, a _Logical Image Descriptor_ (LID) is used to generate a logical description of the segmented mask predicates. This description is then submitted to a logical agent that performs the reasoning. In our system, the reasoning is performed using a perception-fusion approach, where the MAS agents use their perceptions to reason about the environment and make decisions collectively. The paper emphasises that adopting agents grounded in computational logic provides verifiability, explainability, and reliability.
Footnote 3: The repository containing the experiments can be found at the following link: [https://github.com/AAAI-DISIM-UnivAQ/MAS_Py_FLOOD](https://github.com/AAAI-DISIM-UnivAQ/MAS_Py_FLOOD)
## 4 Future Works
In the future, potential research directions include: _(i)_ for **RG 1**: explore abductive reasoning techniques to enhance knowledge injection efficiency and effectiveness, and evaluate the framework's performance through extensive experiments in various domains; _(ii)_ for **RG 2**: refine and expand the evaluation metrics, including comprehensibility metrics, for better assessment of injected models, and validate the metrics across different scenarios and datasets; _(iii)_ for **RG 3**: explore collaborative frameworks and hybrid models combining logical and neural reasoning, and possibly evaluate their performance in real-world applications.
Overall, these future works aim to improve knowledge injection, evaluation methodologies, and the integration of logical and neural agents, contributing to more efficient and interpretable machine learning models.
## Acknowledgements
This PhD work is conducted under the supervision of professors Stefania Costantini (University of L'Aquila, Italy), Fosca Giannotti (Scuola Normale Superiore, Pisa, Italy), and Andrea Omicini (Alma Mater Studiorum-University of Bologna, Italy).
|
2302.09555 | Estimation and Early Prediction of Grip Force Based on sEMG Signals and
Deep Recurrent Neural Networks | Hands are used for communicating with the surrounding environment and have a
complex structure that enables them to perform various tasks with their
multiple degrees of freedom. Hand amputation can prevent a person from
performing their daily activities. In that event, finding a suitable, fast, and
reliable alternative for the missing limb can affect the lives of people who
suffer from such conditions. As the most important use of the hands is to grasp
objects, the purpose of this study is to accurately predict gripping force from
surface electromyography (sEMG) signals during a pinch-type grip. In that
regard, gripping force and sEMG signals are derived from 10 healthy subjects.
Results show that for this task, recurrent networks outperform nonrecurrent
ones, such as a fully connected multilayer perceptron (MLP) network. Gated
recurrent unit (GRU) and long short-term memory (LSTM) networks can predict the
gripping force with R-squared values of 0.994 and 0.992, respectively, and a
prediction rate of over 1300 predictions per second. The predominant advantage
of using such frameworks is that the gripping force can be predicted straight
from preprocessed sEMG signals without any form of feature extraction, not to
mention the ability to predict future force values using larger prediction
horizons adequately. The methods presented in this study can be used in the
myoelectric control of prosthetic hands or robotic grippers. | Atusa Ghorbani, Aghil Yousefi-Koma, Amirhosein Vedadi | 2023-02-19T12:20:10Z | http://arxiv.org/abs/2302.09555v1 | **Estimation and Early Prediction of Grip Force Based on sEMG Signals and Deep Recurrent Neural Networks**
###### Abstract
Hands are used for communicating with the surrounding environment and have a complex structure that enables them to perform various tasks with their multiple degrees of freedom. Hand amputation can prevent a person from performing their daily activities. In that event, finding a suitable, fast, and reliable alternative for the missing limb can affect the lives of people who suffer from such conditions.
As the most important use of the hands is to grasp objects, the purpose of this study is to accurately predict gripping force from surface electromyography (sEMG) signals during a pinch-type grip. In that regard, gripping force and sEMG signals are derived from 10 healthy subjects. Results show that for this task, recurrent networks outperform non-recurrent ones, such as a fully connected multilayer perceptron (MLP) network. Gated recurrent unit (GRU) and long short-term memory (LSTM) networks can predict the gripping force with R-squared values of 0.994 and 0.992, respectively, and a prediction rate of over 1300 predictions per second. The predominant advantage of using such frameworks is that the gripping force can be predicted straight from preprocessed sEMG signals without any form of feature extraction, not to mention the ability to predict future force values using larger prediction horizons adequately. The methods presented in this study can be used in the myoelectric control of prosthetic hands or robotic grippers.
Keywords: gripping force prediction, EMG, recurrent neural networks, LSTM, GRU
## Declarations
Funding: The work is based upon research funded by Iran National Science Foundation (INSF) under project No. 4013238.
Conflicts of interest/Competing interests: The Authors declare that there is no conflict of interest.
Availability of data and material and code availability: all data and codes are available at [https://github.com/Atusa-gh/GrippingForcePrediction](https://github.com/Atusa-gh/GrippingForcePrediction).
## 1 Introduction
One of the dramatic events that can happen to human beings is amputation, which can result from trauma, ischemia, infection, or malignancy. This tragic event dramatically decreases the quality of life of amputees. Moreover, hand amputation is one of the most common amputations. Upper limb amputations can be categorized into partial hand amputation, wrist disarticulation, trans-radial (below-elbow), trans-humeral (above-elbow), and shoulder disarticulation in accordance with the amputation level.
Prosthetic limbs have been the best solution for amputation, giving amputees the chance to regain their limb capabilities fully or partially. An active hand prosthesis is a device that replaces the missing hand and provides grasping functions. It is crucial to translate amputees' intentions accurately and quickly to develop a broadly useable prosthesis.
Many of the active prostheses are controlled by sEMG signals, which originate from muscles' electrophysiological and mechanical activations. Due to their ease of collection and noninvasiveness, sEMG signals are used for various applications, such as myoelectric control of prosthetics. Subsequently, deriving precise and fast methods for estimating movement parameters such as hand gestures, fingers or wrist angles, and gripping force from sEMG signals has been broadly investigated by researchers.
Recently, along with the successful development of artificial intelligence, more attention has been paid to neural network methods to estimate gripping variables. More specifically, the number of published
papers associated with deep learning methods and sEMG signals has quadrupled between 2017 and 2018 [1]. Various types of neural networks have been proven to deliver an adequate level of error. As a result, many deep neural networks such as convolutional neural networks (CNN), recurrent neural networks (RNN), gated recurrent units (GRU), and long short-term memory (LSTM) have been deployed to estimate hand movements or gestures with considerable accuracy. For instance, upper-limb joint angles for multiple movements have been estimated from sEMG signals with the help of a deep neural network, and the deep network's performance has been proven to be much more precise than a simple multilayer perceptron network (MLP) [2]. Other than the framework of the neural networks, the EMG signal processing, and the feature extraction process play a significant role in the estimation [3, 4, 5]. Regarding the estimation of hand position variables such as fingers' angles, RNNs outperform non-recurrent neural networks with the same size in terms of mean absolute error. Moreover, adding more neurons and layers to the RNN offers only tiny improvements in estimation; therefore, preprocessing the input signals of the neural network has a more significant effect on its performance [6].
Nowadays, position control is implemented in the majority of robots performing position-strict tasks around the world. Nevertheless, it is not well suited to close human interaction due to safety issues. On the contrary, force control is much more effective for such applications, especially rehabilitation. A compliant behavior of a robotic hand can provide safe interactions between the amputee and the prosthetic device, which prevents secondary injuries to the user and damage to objects being handled by the device. However, very few prosthetic limbs or grippers function via force control [7].
Having great applications in rehabilitation and biomedical engineering, estimating muscular forces of both upper and lower limbs has also been studied. To exemplify, both sEMG signals and inverse dynamic analysis have been used to estimate upper-limb muscular forces [8]. Also, multiple muscle models have been deployed and compared for predicting muscular forces of gastrocnemius medialis, gastrocnemius lateralis, soleus, and tibialis anterior muscles from sEMG signals [9].
In addition, estimating the gripping force or fingers' force values have been studied to some extent. For example, by extracting the main time-domain features of the sEMG signals, the force of the power-type gripping has been estimated with reasonable precision via a general regression neural network (GRNN) [10]. Also, a three domains fuzzy wavelet neural network (TDFNN) algorithm alongside a conventional radial basis function neural network (RBFNN) method has been utilized to estimate power-type gripping force with an acceptable precision [11]. Regarding other machine learning algorithms, finger forces during a power-type grip have been successfully estimated via a gradient boosting machine (LightGBM) model [12].
Moreover, using deep neural networks (DNNs) as well as dimension reduction methods such as principal component analysis (PCA) in estimating the gripping force of the pinch-type grip has shown promising results for myoelectric control of a prosthetic hand [13]. Besides backpropagation neural networks, other machine learning methods such as gene expression programming algorithm (GEP) have also shown remarkable results in estimating gripping force [14]. Using non-negative matrix factorization (NMF) based input signal extraction for neural networks such as CNN and LSTM combined with strong signal processing instead of the traditional feature extraction methods can also result in precise estimations [15].
In this paper, to achieve a quick and precise prosthetic hand, the gripping force of the three-finger pinch mode, which is one of the most commonly used gripping gestures, is estimated straight from the filtered and normalized sEMG signals using a variety of neural networks. Furthermore, the performance of simple RNN, LSTM, GRU, and MLP networks for this task are compared.
## 2 Material and methods
At the beginning of this section, experiment protocols are defined. Then, a schematic figure of the whole predicting process is presented, followed by a description of the signal processing of sEMG and force signals. Lastly, all implemented neural networks are presented.
### Experiment protocols
A test setup based on a three-finger pinch mode grip is designed to derive an adequate force-EMG data set. In this setup, EMG signals of the forearm muscles and the compressive force of the three-finger pinch grip are simultaneously measured.
An ATI 6-axis force/torque sensor from the mini45 series is used to measure the gripping force. This sensor is capable of recording all six components of force and torque. However, only one component of the force created by applying pressure on the z-axis of the sensor is sufficient for estimating the pinching gripping force.
A Myo armband bought from the Thalmic Labs is used to measure the sEMG signals. This armband has 8 EMG channels that can be wrapped around an arm and measure and record electrical impulses given off by the hand's muscles via the sEMG sensors. By digesting
these pulses, the armband itself can control an electronic device based on the hand's gestures. The raw EMG signals of all eight sensors can also be derived through the Bluetooth port via Python. The Myo armband is placed around the middle of the forearm. Among three different electrode positions over the superficial forearm muscles, namely the extensor digitorum, flexor digitorum superficialis, and palmaris longus muscles, setting the armband in the middle of the forearm produces the best results in finger movement classification [16]. This is the main reason the Myo armband was placed on the upper middle of the forearm during the tests in this research.
Since the arm position has been shown to affect discrimination between upper limb motion classes when using both surface and intramuscular EMG [17], and flexion of the wrist also influences the grip force [18], a fixed posture is maintained during all tests to minimize these disturbances. The subject's hand and forearm positions on the table are such that the elbow and wrist angles are 90 and 0 degrees, respectively. The process of one test and both the force sensor and Myo armband are shown in Fig. 1. In each test, the subject first applies pressure on the F/T sensor, decreases the pressure, and repeats this action for 30 seconds. Since the change in the EMG signals can be visually seen while pressure is applied to the sensor, the EMG signals are plotted live during each test to monitor the whole experiment. There is a 60-second pause between each test of one subject, and each subject repeats the test three times, making a total of 3 minutes and 30 seconds per subject. It should be noted that an exact measurement of the velocity of the grip was deliberately not considered because, ideally, the user of the prosthesis will want to perform the grip at different velocities. Consequently, it is more effective to include different gripping velocities in the training data as well. The dataset in this project is derived from 10 subjects with an average age of 23.8 years (9 males and 1 female). Participants were chosen from healthy students at the University of Tehran, and all were fully informed about the nature of the study and its details. The university's ethics committee approved this experiment.
It should be noted that the F/T sensor reports the measured force in counts, which can be converted to newtons with a simple multiplication according to the sensor's datasheet. The maximum force measured in the tests is in the range of 30-70 N, consistent with the maximum pinch-type grip force reported in the literature, which is 82 N for males in their twenties [19]. The signals derived from one test are shown in Fig. 2.
### Signal preprocessing
A schematic of the whole force prediction process consisting of data collection, signal processing, network training, and performance analysis is shown in Fig. 3. In summary, at the beginning of the process, the raw sEMG signals of the Myo armband's eight channels and the gripping force are collected simultaneously using Python 3.8 via Bluetooth and serial port, respectively. Subsequently, all data is preprocessed to achieve a higher prediction accuracy. A band-pass filter is designed for each data type to limit the bandwidth of the data signals. In addition, a notch filter is also used to attenuate signals over a narrow range of frequencies while leaving the signal at other frequencies unaltered, eliminating the effects of transmitting data through wires and common noise sources. The cut-off frequencies of the implemented filters are carefully chosen with regard to both filter results and the frequency of the muscle activation, which has been observed from 20 Hz to 60 Hz [20]. Muscle activations are adequately recorded and preserved during filtering with the Myo armband sampling frequency of 200 Hz. The F/T sensor has a much higher sampling rate. Therefore, to make the sampling frequencies of both sensors compatible with each other, both sEMG and force data are measured simultaneously using a single script at a frequency of 200 Hz.
Figure 1: The experimental protocol
Figure 2: Raw sEMG and force signals of one subject (30 seconds)
In the next step of the process, the filtered data is normalized. This step aims to change the values of dataset columns to a standard scale without distorting differences in the ranges of values. In other words, the data is normalized so that all variables are in the same range. It should be noted that since each person can implement a different maximum force level, the data obtained from all subjects is first merged and then all data is normalized. This way, the algorithm is able to distinguish the different force levels that different subjects implement and, as a result, is more adjustable for different people. The Min-Max scale normalization, which is very common in neural networks, is used in this research. This method puts the data between 0 and 1. Fig. 4 shows one sample of filtered gripping force signals as well as filtered and normalized sEMG signals.
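As a concrete illustration of this pipeline, the following is a minimal SciPy sketch of the band-pass, notch, and min-max normalization steps for a single sEMG channel. The exact cut-off frequencies and notch parameters are illustrative assumptions guided by the 20-60 Hz muscle-activation band mentioned above, not the paper's exact filter design.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 200  # Hz, the common sampling frequency of the sEMG and force data

def preprocess_emg(raw, band=(20.0, 60.0), notch_hz=50.0, q=30.0):
    # Band-pass around the reported 20-60 Hz muscle-activation band.
    b, a = butter(4, band, btype="bandpass", fs=FS)
    x = filtfilt(b, a, raw)
    # Notch out narrow-band interference (e.g., power-line noise).
    bn, an = iirnotch(notch_hz, q, fs=FS)
    x = filtfilt(bn, an, x)
    # Min-max scaling to [0, 1] (the paper applies this after merging
    # the data from all subjects, not per recording as done here).
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

channel = preprocess_emg(np.random.randn(6000))  # 30 s of synthetic sEMG
```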
### Neural networks in force prediction
The human hand is a dynamic system; therefore, the collected data in this research is a time series, meaning the data received at a specific time is dependent on the previously collected data. Recurrent neural networks can learn long-term features from sequential and time-series data sets. In other words, RNNs can learn past complex data for long periods because of the non-linear structures of their layers [21]. This is the principal reason why recurrent neural networks have more precise predictions compared with non-recurrent ones. To prove this point, a fully connected MLP network is compared with a simple RNN. More complex recurrent neural networks such as GRU and LSTM are also implemented to achieve higher precisions. In the following, each of the networks mentioned above is described in sufficient detail.
The MLP network is widely used for various classification and regression applications. The fully connected MLP with two hidden layers implemented in this study is shown in Fig. 5. The input data of the MLP must be a vector of only one column, while recurrent neural networks can receive matrices as inputs. The input data from each time step is stacked to form a vector to train the MLP network with the built dataset. The first and second hidden layers in the MLP have 200 and 80 neurons, respectively.
The first recurrent neural network implemented in this study is a simple RNN. Input data goes to the recurrent layer, and the recurrent layer consists of an input layer, a hidden layer with 50 neurons, and an output layer. The structure of this layer is shown in Fig. 6. The output layer of the recurrent layer is connected to a fully connected layer with 100 neurons and then to the main output layer.
Figure 4: Filtered gripping force and filtered and normalized sEMG signals
Figure 5: Fully connected MLP Neural network
Figure 3: Schematic diagram of the whole force prediction process
At each time step, the recurrent layer takes the input data (\(x_{i}\)) and produces an output called the hidden state (\(h_{i}\)), and this output is affected by both the input data and the hidden state of the previous time step. Also, \(W_{x}\) and \(W_{rec}\) are the weight matrices applied to the current input and the previous hidden state, respectively, which are updated in the training process. The hidden state in each layer is obtained according to Eq. (1). The function \(f\) in this equation is a Rectified Linear Unit (ReLU) function.
\[h_{i}=f(W_{rec}h_{i-1}+W_{x}x_{i}) \tag{1}\]
One major problem of the simple RNN network is the vanishing gradient, limiting its ability to learn long-term dependencies. LSTM networks solve this issue. This network was first introduced by Hochreiter and Schmidhuber [22]. In addition to the hidden state, the cell state is also used to transfer long-term dependencies in this network. Fig. 7 shows a view of the hidden unit in the LSTM layer. \(C\) and \(h\) are cell state and hidden state, respectively. Also, subscripts of n and n-1 represent current and previous step times.
In each step time, long-term dependencies are applied to the cell state through various gates. These gates are used to add or remove information to the information stream in each network unit, whose weights and biases are updated in the training process. Also, the cell state affects the hidden state. Hidden state and cell state are obtained from Eq. (2), in which \(f\) and \(g\) functions are ReLU and sigmoid functions, respectively. \(\times\) is the element-wise multiplication of vectors. Also, in each equation, \(W\) and \(b\) are the related weight matrix and bias vectors. \(f_{n}\), \(i_{n}\) and \(o_{n}\) represent forget gate, input gate, and output gate, respectively. Moreover, \(\tilde{C}_{n}\) is an intermediate gate for calculating the cell state. In the forget gate, it is decided which cell state information should be deleted. Furthermore, new information is added to the cell state in the input gate, and finally, the output gate is used to create the hidden and output states.
\[\begin{split}& f_{n}=g(W_{f}[h_{n-1},x_{n}]+b_{f})\\ & i_{n}=g(W_{i}[h_{n-1},x_{n}]+b_{i})\\ &\tilde{C}_{n}=f(W_{c}[h_{n-1},x_{n}]+b_{c})\\ & C_{n}=f_{n}\times C_{n-1}+i_{n}\times\tilde{C}_{n}\\ & o_{n}=g(W_{o}[h_{n-1},x_{n}]+b_{o})\\ & h_{n}=o_{n}\times f(C_{n})\end{split} \tag{2}\]
GRU is a simpler version of LSTM networks, from which the cell state has been removed, and it tries to send the information of long-term dependencies by the hidden state only [23]. Also, some gates have been merged. Fig. 8 shows the hidden unit in the GRU network of this research.
In this network, a hidden state is obtained with the help of Eqs (3). Similar to Eqs (2), \(f\) and \(g\) are ReLU and sigmoid functions, respectively, and \(W\) and \(b\) are related weight matrix and the bias vector. In Eqs (3), \(z_{n}\), \(r_{n}\), and \(\tilde{h}_{n}\) are all intermediate gates for finding a new hidden state \(h_{n}\).
Figure 8: Hidden unit of GRU layer
Figure 6: Structure of the recurrent layer
Figure 7: Hidden unit of LSTM layer
\[\begin{split} z_{n}&=g\left(W_{z}[h_{n-1},x_{n}]+b_{z}\right)\\ r_{n}&=g\left(W_{r}[h_{n-1},x_{n}]+b_{r}\right)\\ \tilde{h}_{n}&=f\left(W[r_{n}\times h_{n-1},x_{n}]+b\right)\\ h_{n}&=(1-z_{n})\times h_{n-1}+z_{n}\times\tilde{h}_{n}\end{split} \tag{3}\]
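To make the network topologies concrete, here is a minimal Keras sketch of the recurrent models. The 50-unit recurrent layer followed by a 100-neuron fully connected layer follows the simple-RNN description above; reusing those sizes for the GRU and LSTM variants, and the window length, are assumptions for illustration rather than the paper's stated hyperparameters.

```python
import tensorflow as tf

WINDOW = 20    # time steps per input window (assumed)
CHANNELS = 8   # Myo armband sEMG channels
HORIZON = 1    # number of future force values to predict

def build_model(cell="gru"):
    rnn_layer = {
        "rnn": tf.keras.layers.SimpleRNN,
        "gru": tf.keras.layers.GRU,
        "lstm": tf.keras.layers.LSTM,
    }[cell]
    inputs = tf.keras.Input(shape=(WINDOW, CHANNELS))
    x = rnn_layer(50)(inputs)                              # recurrent layer
    x = tf.keras.layers.Dense(100, activation="relu")(x)   # fully connected
    outputs = tf.keras.layers.Dense(HORIZON)(x)            # force prediction(s)
    return tf.keras.Model(inputs, outputs)

model = build_model("gru")
model.compile(optimizer="adam", loss="mse")
model.summary()
```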
## 3 Results and discussions
To compare the results of the networks, they should be trained under the same conditions, and the same parameters must be used to train them. These parameters are specified in Table I.
Adam is an optimization method that extends stochastic gradient descent and has shown good performance in training neural networks [24]. All networks are compared using various performance criteria, including mean squared error (MSE), R\({}^{2}\) score, and test time. The first two criteria are metrics used to evaluate the performance of regression models, especially statistical ones. Their main difference is that MSE depends on the scale of the data and captures the residual error, whereas R\({}^{2}\) represents the fraction of the variance of the response variable captured by the regression model. These two criteria are calculated based on the formulations in Eqs (4) and (5). \(\mu_{F}\) in the latter is the mean of the actual forces.
Test time is the amount of time each neural network needs to predict the force values of the test data.
\[MSE=\frac{1}{n}\sum_{i=1}^{n}\left(F_{i}^{actual}-F_{i}^{predicted}\right)^{2} \tag{4}\]
\[R^{2}=1-\frac{\sum_{i=1}^{n}\left(F_{i}^{actual}-F_{i}^{predicted}\right)^{2}}{\sum_{i=1}^{n}\left(F_{i}^{actual}-\mu_{F}\right)^{2}} \tag{5}\]
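These formulations translate directly into a few lines of NumPy; the following is a minimal sketch (the toy arrays are illustrative):

```python
import numpy as np

def mse(actual, predicted):
    return np.mean((actual - predicted) ** 2)

def r2(actual, predicted):
    residual = np.sum((actual - predicted) ** 2)
    total = np.sum((actual - np.mean(actual)) ** 2)
    return 1.0 - residual / total

a = np.array([0.2, 0.5, 0.9])
p = np.array([0.25, 0.45, 0.85])
print(mse(a, p), r2(a, p))
```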
Each network was trained and tested at different prediction horizons. The prediction horizon refers to the number of future force values that the network predicts. A higher prediction horizon provides information about the further future, which can be used to make more accurate decisions in advance, provided that the error of these predictions does not increase. After training, the performance of the networks is studied on the test data, which is 15% of the total data, for different prediction horizons. A comparison between all implemented networks can be seen in Table II, which depicts the precise MSE of the normalized force values, the R\({}^{2}\) score of the predictions, and the test time for the normalized test data at different prediction horizons.
As seen in Table II, in general, as the value of the prediction horizon increases, the MSE of each network increases, and conversely, R\({}^{2}\) decreases. Using larger prediction horizons has many practical benefits. For instance, it allows us to be aware of future events faster and have more time for control calculations. Moreover, we can use control methods such as the model predictive control scheme by predicting the output in the next few steps.
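To make the notion of a prediction horizon concrete, the following sketch shows one way to build supervised training pairs from the 200 Hz recordings, where each target holds the next `horizon` force samples. The window length of 20 samples (100 ms) is an assumption for illustration; the paper does not state it here.

```python
import numpy as np

def make_windows(emg, force, window=20, horizon=3):
    """Build (X, y) pairs where each y holds the next `horizon` force values."""
    X, y = [], []
    for t in range(window, len(force) - horizon + 1):
        X.append(emg[t - window:t])     # the last `window` sEMG samples
        y.append(force[t:t + horizon])  # the next `horizon` force samples
    return np.stack(X), np.stack(y)

X, y = make_windows(np.random.randn(6000, 8), np.random.rand(6000))
print(X.shape, y.shape)  # (5978, 20, 8) (5978, 3)
```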
| Prediction horizon | Network | MSE | R\({}^{2}\) | Test time (s) |
| --- | --- | --- | --- | --- |
| 1 | MLP | 0.0016848 | 0.9788782 | 1.512 |
| 1 | Simple RNN | 0.0011560 | 0.9855077 | 5.592 |
| 1 | GRU | 0.0004762 | 0.9940306 | 10.808 |
| 1 | LSTM | 0.0006643 | 0.9916720 | 10.509 |
| 3 | MLP | 0.0020688 | 0.9740624 | 2.730 |
| 3 | Simple RNN | 0.0016571 | 0.9792235 | 10.851 |
| 3 | GRU | 0.0010200 | 0.9872116 | 10.815 |
| 3 | LSTM | 0.0014009 | 0.9824362 | 21.182 |
| 5 | MLP | 0.0027825 | 0.9651110 | 2.759 |
| 5 | Simple RNN | 0.0033593 | 0.9578787 | 10.773 |
| 5 | GRU | 0.0028747 | 0.9639549 | 10.875 |
| 5 | LSTM | 0.0036569 | 0.9541473 | 21.142 |

Table II: Comparison of neural networks’ performance for different prediction horizons
Among the results with a prediction horizon of one, the most accurate method is the GRU recurrent neural network, which can predict gripping force with an R\({}^{2}\) of 0.9940306, closely followed by the LSTM network with an R\({}^{2}\) of 0.9916720. The simple RNN network is placed third with an R\({}^{2}\) of 0.9855077, while the MLP network is placed last with an R\({}^{2}\) of 0.9788782. Conversely, GRU and LSTM have rather long prediction times, predicting the test data in 10.808 and 10.509 seconds, respectively, whereas the MLP has the quickest prediction rate, predicting the test data in only 1.512 seconds. Lastly, the simple RNN takes 5.592 seconds to predict the force values of the test data. The difference between the networks' prediction accuracy and test time is due to their different topologies and hyperparameters. For instance, recurrent networks, especially LSTM and GRU, can outperform the MLP because their topologies are more capable of preserving information from previous data points in an effective manner. Moreover, the low test time of the MLP is due to its simplicity and low computational cost. Comparing LSTM and GRU at higher prediction horizons, GRU can predict the gripping force with higher accuracy. GRU also has a much lower test time, which makes it a strongly preferable choice for Internet of Medical Things (IoMT) technologies or embedded systems. For further elaboration, Fig. 9 shows the performance of all the implemented neural networks in this study by depicting each network's predicted gripping force values with prediction horizons of 1, 3, and 5 and the actual gripping force values for 15 seconds of the normalized test data. This figure shows that all recurrent networks can precisely estimate the grip force with a prediction horizon of one. Early prediction of force, implemented by using higher prediction horizons, is also achieved with adequate error levels. This figure also demonstrates that the effect of the prediction horizon on the MLP's performance is much smaller than on that of the recurrent networks.
Furthermore, Fig. 10 shows the boxplots of the squared error distribution for each neural network with different prediction horizons. The recurrent networks outperform the non-recurrent network, as they have much smaller maximum squared error values for each prediction horizon. The GRU has the smallest interquartile range, median, and maximum error values in all cases, especially with a prediction horizon of 5, which demonstrates the superiority of the GRU network. Moreover, increasing the prediction horizon, which enables us to predict the gripping force in advance, decreases the prediction accuracy. This can be seen in the boxplots of each network.
Figure 9: Networks’ performance with prediction horizons of 1, 3, and 5 for 15 seconds: MLP (top left), simple RNN (top right), LSTM (bottom left), and GRU (bottom right)
## 4 Conclusions
In this paper, we proposed various neural networks for predicting the gripping force of a pinch-type grip at a fixed arm position. More specifically, a fully connected MLP, a simple RNN, a GRU, and an LSTM network were trained for this task. Using the frameworks implemented in this study, the gripping force is predicted quickly and with high accuracy straight from preprocessed raw sEMG signals and without any form of feature extraction. The performance of these four networks was compared, and the superiority of the GRU neural network was demonstrated. The neural networks proposed in this paper can be utilized for controlling myoelectric-based grippers and prosthetic hands. For future work, force and EMG data for different arm positions can be obtained to re-train all or parts of the proposed ANNs via transfer learning methods, so that the networks' performance extends to a wider range of arm positions. This can lead to the development of a fast and robust gripper or hand prosthesis capable of controlling the gripping force in numerous gripping settings.
|
2302.06114 | A Comprehensive Survey on Graph Summarization with Graph Neural Networks | As large-scale graphs become more widespread, more and more computational
challenges with extracting, processing, and interpreting large graph data are
being exposed. It is therefore natural to search for ways to summarize these
expansive graphs while preserving their key characteristics. In the past, most
graph summarization techniques sought to capture the most important part of a
graph statistically. However, today, the high dimensionality and complexity of
modern graph data are making deep learning techniques more popular. Hence, this
paper presents a comprehensive survey of progress in deep learning
summarization techniques that rely on graph neural networks (GNNs). Our
investigation includes a review of the current state-of-the-art approaches,
including recurrent GNNs, convolutional GNNs, graph autoencoders, and graph
attention networks. A new burgeoning line of research is also discussed where
graph reinforcement learning is being used to evaluate and improve the quality
of graph summaries. Additionally, the survey provides details of benchmark
datasets, evaluation metrics, and open-source tools that are often employed in
experimentation settings, along with a detailed comparison, discussion, and
takeaways for the research community focused on graph summarization. Finally,
the survey concludes with a number of open research challenges to motivate
further study in this area. | Nasrin Shabani, Jia Wu, Amin Beheshti, Quan Z. Sheng, Jin Foo, Venus Haghighi, Ambreen Hanif, Maryam Shahabikargar | 2023-02-13T05:43:24Z | http://arxiv.org/abs/2302.06114v3 | # A Comprehensive Survey on Graph Summarization with Graph Neural Networks
###### Abstract
As large-scale graphs become more widespread, more and more computational challenges with extracting, processing, and interpreting large graph data are being exposed. It is therefore natural to search for ways to summarize these expansive graphs while preserving their key characteristics. In the past, most graph summarization techniques sought to capture the most important part of a graph statistically. However, today, the high dimensionality and complexity of modern graph data are making deep learning techniques more popular. Hence, this paper presents a comprehensive survey of progress in deep learning summarization techniques that rely on graph neural networks (GNNs). Our investigation includes a review of the current state-of-the-art approaches, including recurrent GNNs, convolutional GNNs, graph autoencoders, and graph attention networks. A new burgeoning line of research is also discussed where graph reinforcement learning is being used to evaluate and improve the quality of graph summaries. Additionally, the survey provides details of benchmark datasets, evaluation metrics, and open-source tools that are often employed in experimentation settings, along with a discussion on the practical uses of graph summarization in different fields. Finally, the survey concludes with a number of open research challenges to motivate further study in this area.
Graph summarization is a key task in managing large graphs, which are ubiquitous in modern applications such as social networks, knowledge graphs, recommendation systems, and bioinformatics. However, summarizing graphs is a challenging problem due to the intricate nature and structural variability of real-world graphs. The purpose of this article is to provide a detailed overview of the latest developments in graph summarization methods. The study covers a broad range of techniques, including both conventional and deep learning-based approaches, with a particular emphasis on GNNs. By combining and evaluating the advantages and limitations of various techniques, we highlight significant challenges and favourable avenues for future investigation in this field. Our survey will benefit researchers and practitioners working on graph summarization, enabling them to select appropriate methods and design new algorithms that can effectively summarize large and complex graphs for a variety of applications.
Deep Learning, Graph Neural Networks, Graph Summarization
## I Introduction
Large graphs are becoming increasingly ubiquitous. With the increasing amount of data being generated, large graphs are becoming more prevalent in modelling a variety of domains, such as social networks, proteins, the world wide web, user actions, and beyond. However, as these graphs grow in size, understanding and analyzing them is becoming more challenging. Additionally, performing fast computations with large graphs and visualizing the knowledge they can yield is also becoming more difficult. Many claim that faster and more effective algorithms are needed to overcome these obstacles [1, 2, 3]. However, a growing cohort of researchers believe that summarization might hold the answer to this unyielding problem. Summarization not only helps existing algorithms to parse the data faster, it can also compress the data, reduce storage requirements, and assist with graph visualization and sense-making [4, 5].
Graph summarization is the process of finding a condensed representation of a graph while preserving its key properties [2]. A toy example of a typical graph summarization process is shown in Figure 1. The process includes removing the original graph's objects and replacing them with fewer objects of the same type to produce a condensed representation of the original graph.
Most traditional approaches to graph summarization involve using a conventional machine learning method or a graph-structured query, such as degree, adjacency, or eigenvector centrality, to find a condensed graphical representation of the graph [2]. A popular summarization technique is to group structures in the input graph by aggregating the densest subgraphs [6]. For example, the GraSS model [7] focuses on accurate query handling and incorporates formal semantics for answering queries on graph structure summaries based on a random walk model, while Graph Cube [8] is a data warehousing model that integrates both network structure summarization and attribute aggregation. This model also supports OLAP queries on large multidimensional networks.
Notably, clustering methods follow a similar approach to summarization, partitioning a graph into groups of nodes that can be further summarized. Most traditional graph clustering methods use conventional machine learning and statistical inference to measure the closeness of nodes based on their connectivity and structural similarities [9]. For instance, Karrer et al. [10] used a stochastic block model to detect clusters or communities in large sparse graphs. However, another method of graph summarization focuses more on node selection and
identifying sparse graphs that can be used to derive smaller graphs [11]. As an example, Doerr et al. [12] introduced a sampling method based on traversing a graph that begins with a collection of starting points, e.g., nodes or edges, and then adds to the sample pool depending on recent information about the graph objects. However, despite the popularity of these approaches in the past, they are very computationally intensive. They also require a great deal of memory to store, making them unsuitable for today's modern, complex, and large-scale datasets.
Deep learning on graphs is a powerful computational technique for graphs of all sizes that continues to interest both academics and industry. Graph neural networks (GNNs) are the most successful paradigm among the deep learning techniques for graphs. Their multilayer deep neural networks are not only able to reduce the dimensionality of tasks, they also support high-speed calculations with large-scale graphs [13, 14]. Several strategies that use a GNN as a summarization engine have been proposed over the last few years, and this line of research is expected to generate even more fruitful results in the future. These GNN-based methods have demonstrated promising performance with a diverse range of tasks, such as graph classification, node representation learning, and link prediction. Further, as research in this area continues to evolve, we can anticipate even more innovative and effective approaches to graph summarization that rely on GNNs.
### _Challenges with GNN-based Graph Summarization_
Although GNNs have helped to advance new techniques for summarizing complex graphs, such as social networks, citation networks, and informatics networks, applying these techniques is not without its challenges. Some of the main issues include scalability, graph heterogeneity, evaluation, and overfitting.
**Scalability.** A key hurdle in GNN-based graph summarization is scalability. GNN-based graph summarization methods are computationally expensive, and processing large graphs demands a substantial investment of time and resources [15]. But scaling these techniques to handle huge graphs, which are common in many real-world applications, is challenging. Researchers are working on developing more efficient GNN-based graph summarization algorithms to address this problem [16, 17, 18]. The ability to summarize graphs effectively using deep learning-based models, such as GNNs, is critical to reducing the scale of a graph while preserving its key features [19]. Achieving this requires effective training methodologies that can extract the most relevant information from the graph and represent it in a compact form. Consequently, graph summarization remains an active research area with significant challenges to be addressed [20].
**Graph heterogeneity.** Real-world graphs can be highly heterogeneous, and developing GNN-based graph summarization techniques that can handle heterogeneous graphs is a significant challenge. Often, more complex models and pre-processing steps are required to extract meaningful information from the various types of nodes and edges heterogeneous graphs contain [21]. Moreover, the inherent differences in the semantics of the diverse node and edge categories might necessitate unique feature representation and aggregation methodologies. To address this challenge, researchers are exploring new GNN architectures that can handle heterogeneous graphs, such as graph attention networks (GATs) and relation-aware graph convolutional networks (R-GCNs) [22, 23]. However, if we are to improve the quality of summaries with heterogeneous graphs, more research will be needed to develop preprocessing techniques and feature representations that effectively capture the properties and information contained in these complex graphs.
**Evaluation.** Evaluating how well a technique has summarized a graph is another challenge. There is currently no agreed-upon standard for evaluating the quality of graph summarization methods, so researchers often use different metrics and benchmarks [24]. This makes it difficult to compare different techniques and determine which methods perform better in different scenarios. New evaluation metrics and benchmarks have been outlined in some recent studies [25, 4, 26]; however, selecting the best approach for evaluation is a big challenge with GNN-based deep learning techniques.
**Overfitting.** A common problem in GNN-based graph summarization is overfitting, which occurs when the model becomes overly complex and starts fitting the noise present in the data rather than the underlying patterns. Overfitting can lead to poor generalization, where the model fails to perform well on unseen data [27]. To tackle this problem, researchers must create regularization techniques and model architectures that are both more effective at preventing overfitting and also help to improve generalization performance. Some of the more recently proposed techniques to address this challenge include dropout [28], graph pooling [29], and graph coarsening [17]. But more research in this area is needed.
### _Existing Surveys on Graph Summarization_
Over the past decade, numerous reviews have been conducted that acknowledge the importance of graph summarization. They generally cover a range of topics, like graph partitioning, graph sampling, graph coarsening, and graph embedding, along with specific use cases for graph summarization, such as pattern discovery, community detection, fraud detection, and link prediction. Additionally, several comprehensive surveys on graph summarization techniques have been conducted based on scientific computing: [30, 31, 32, 33, 34, 2, 5]. Yet only the most current work by Chen et
Fig. 1: An example of a graph summarization process. Objects are removed from the original graph and replaced with fewer objects of the same type to result in a condensed representation of the original graph.
al. [30] compares the most recent machine learning techniques to traditional methods.
There are also several surveys on graph sampling [35, 11, 36, 37], graph partitioning [38, 39, 40, 41], and graph clustering [42, 43, 44]. Other surveys focus on graph representation learning [45, 46, 47] and graph embedding [48, 49, 50]. As these streams of research look to create low-dimension versions of a graph, they are strongly connected to the concept of graph summarization.
There are also surveys that focus specifically on certain applications of graph summarization, such as pattern discovery, community detection, fraud detection, and link prediction, e.g., [51, 52, 3] and [53, 54, 55]. However, although these studies provide in-depth analyses of how graph summarization techniques are being used in important and high-demand domains, GNN-based graph summarization methods are not their main area of focus. Consequently, these surveys do not provide a comprehensive and structured evaluation of all available techniques for graph summarization.
### _Contributions_
In this survey, we review developments in graph summarization with GNNs. Overall, we have tried to provide researchers and practitioners with a comprehensive understanding of past, present, and future approaches to graph summarization using GNNs. Our specific contributions include:
* **The first deep GNN-based graph summarization survey.** To our best knowledge, this paper is the first thorough survey that is devoted to graph summarization with GNNs. The majority of pertinent surveys primarily center on traditional graph summarization approaches and do not include deep learning techniques. To date, no comprehensive and specialized study has been conducted on graph summarization using deep learning. Our research addresses this gap with the aim of using this thorough and structured survey to promote further advancements in the field.
* **Comprehensive review.** We present a comprehensive review of GNN-based graph summarization methods. We also highlight the benefits of different GNN techniques compared to traditional methods. Within each method, we present a review of the most notable approaches and their contributions to the field.
* **Resources and applications.** We have put together a set of resources that support graph summarization with GNNs, including cutting-edge models, standardized datasets for comparison, publicly available codes, and practical use cases. Our goal is to provide a resource for those seeking to understand and apply GNNs for graph summarization.
* **Future directions.** We identify and discuss the open challenges and new directions that lie ahead. Through an exploration of the existing challenges and potential avenues for progress, we aim to guide future research and development in GNNs for graph summarization, ultimately contributing to the development of new and innovative solutions for summarizing complex graphs.
The majority of the papers we reviewed were published at prominent international conferences (e.g., SIGKDD, NeurIPS, ICLR, ICML, KDD, WWW, IJCAI, and VLDB) or in peer-reviewed journals (e.g., ACM, IEEE, Elsevier, and Springer) in the domains of artificial intelligence, big data, data mining, and web services. In our investigations, we found that different fields referred to graph summarization using different terms, e.g., "coarsening", "reduction", "simplification", "abstraction", and "compression". While these concepts were used relatively interchangeably, the terms coarsening, simplification, and summarization were generally more common.
The remaining sections of this survey are structured as follows: Section II introduces the concept of graph summarization with primary definitions and background information. Section III provides an overview of the development of graph summarization. Section IV outlines the recently developed GNN-based graph summarization methods. Section V discusses graph reinforcement learning methods for graph summarization. Sections VI and VII outline the structure of widely adopted implementation resources and real-world applications. Finally, Section VIII explores a number of open research challenges that would motivate further study before the conclusion in Section IX.
## II Definitions and Background
This section provides an overview of the key definitions and commonly used notations presented in Table I. It also includes some background information on graph summarization techniques.
* _Definition 1 (Graph):_ A graph \(G\) is represented as a tuple \((V,E)\), where \(V\) is the set of nodes or vertices \(\{v_{1},v_{2},...,v_{n}\}\), and \(E=\{e_{ij}\}_{i,j=1}^{n}\) is the set of edges or links that connect pairs of nodes. An \(n\times n\) adjacency matrix \(A=[a_{ij}]\) is used to represent the graph, where \(a_{ij}\) equals \(1\) if the edge \(e_{ij}\) is in \(E\); otherwise, \(a_{ij}\) equals \(0\). If \(a_{ij}\neq a_{ji}\) for some pair of nodes, the graph is a directed network; otherwise, it is an undirected network. When the edges of \(G\) carry weights from a set \(W\), the graph is called a weighted network; when they do not, it is called an unweighted network. \(G\) is labelled if there is a label associated with every edge \(e\in E\); the nodes may also be labelled if there exists a unique label for every node \(v\in V\). Otherwise, \(G\) is considered to be unlabelled.
* _Definition 2 (Graph Summary)_: Given a graph \(G\), a summary \(G_{s}(V_{s},E_{s})\) is a condensed representation of \(G\) that preserves its key properties. Graph summarization techniques involve aggregation, selection, or transformation on a given graph and produce a graph summary as the output.
As outlined in Definition 2, graph summarization approaches fall into three main categories: aggregation, selection, and transformation. While selection approaches make graphs sparser by simply removing objects without replacing them, aggregation approaches replace the removed objects with a smaller number of similar objects; for example, a supernode might replace a group of nodes. Like selection and aggregation, transformation approaches also remove objects from the graph, but the removed objects are transformed into a different type of object, such as an embedding vector [56, 57].
**Aggregation.** Aggregation is one of the most extensively employed techniques of graph summarization. Aggregation methods can be divided into two main groups: those that involve node grouping and those that involve edge grouping. Node grouping methods group nodes into supernodes, whereas edge grouping methods reduce the number of edges in a graph by aggregating them into virtual nodes. Clustering and community detection are examples of a grouping-based approach. Although summarizing graphs is not explicitly the primary objective of these processes, the outputs can be modified into non-application-specific summaries [2].
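As a minimal sketch of the node-grouping idea, the snippet below pools an adjacency matrix into a summary graph given a hand-made assignment of nodes to supernodes; the toy graph and grouping are our own illustrative choices, not from any cited method.

```python
import numpy as np

# Adjacency matrix of a toy 6-node undirected graph.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

# Hand-made grouping: nodes {0,1,2} form supernode 0, nodes {3,4,5} supernode 1.
groups = [0, 0, 0, 1, 1, 1]
P = np.zeros((6, 2))
P[np.arange(6), groups] = 1.0   # one-hot node-to-supernode assignment

# Summary adjacency: entry (p, q) counts the edges between supernodes p and q.
A_s = P.T @ A @ P
np.fill_diagonal(A_s, 0)        # discard self-loops from within-group edges
print(A_s)                      # [[0. 1.] [1. 0.]]: one edge links the two groups
```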
**Selection.** There are two main groups of selection techniques: sampling and simplification. While sampling methods focus on picking subsets of nodes and edges from the input graph [58], simplification or sparsification methods involve removing less important edges or nodes. In this way, they tend to resemble solutions to dimensionality reduction problems [11, 5].
**Transformation.** Graph projection and graph embedding are two categories of this method. Generally, graph projection refers to the summarization techniques that transform bipartite graphs with various inter-layer nodes and edges into simple (single-layer) summarized graphs. Conversely, graph embedding refers to the techniques that transform a graph into a lower dimensional representation while preserving the original graph's topology [56].
## III Graph Summarization: An Evolution
Graph summarization has been playing an important role in areas such as network analysis, data mining, machine learning, and visualization for some time. The evolution of graph summarization is illustrated in Figure 3, which shows how it has progressed from traditional computing methods to multi-layer GNNs. Both of these paradigms are summarized in Figure 2. This section briefly overviews the three different traditional methods within this field and explains the advantages of GNN techniques over traditional ones.
### _Clustering-based Approaches_
Graph clustering can be thought of as a graph summarization technique since it involves grouping together nodes in a graph that are similar or related and, in so doing, the complexity and size of the original graph are reduced. In simpler terms, graph clustering provides a way to compress or summarize a large and complex graph into a smaller set of clusters, each of which captures some aspect of the structure or function of the original graph [1]. Graph summarization techniques using clustering can be classified into three main categories: structural-based, attribute-based, and structural-attribute-based approaches. This last approach, structural-attribute-based clustering, is currently considered to be the state-of-the-art [59, 60].
Clustering methods that rely on the structure of a graph group nodes together based on their structural characteristics and do not take any attribute data into account. The objective of these methods is to recognize groups of nodes that exhibit strong connections among themselves but weak connections with nodes that are not part of the group [61, 62]. Attribute-based clustering methods aggregate the nodes in a graph based on their associated attributes or features [63]. The clustering algorithm forms clusters by considering only the similarity of the attribute values of the nodes. The result is clusters that have abundant attribute information but fail to capture the topological structures of the network [64, 65]. Many real-world applications have demonstrated that combining structural and attribute-based information within a clustering method is more effective than relying on either type of information alone [59, 66]. For instance, Boolanal et al. [66] proposed a graph clustering method called k-Neighborhood Attribute Structural Similarity (k-NASS) that incorporates both the structural and attribute similarities of the graph nodes in the clustering process. The method uses a combination of k-neighborhood graph connectivity and attribute similarity measures to generate a weighted adjacency matrix, which improves clustering accuracy with complex graphs that have rich attributes.
_Remarks:_ Despite the numerous graph clustering methods suggested in previous studies, the task of performing a clustering analysis on a large graph that contains many attributes is still a difficult undertaking, primarily because it requires a significant amount of memory and computational resources, as well as quick access to disk-based storage [67].
### _Statistical Inference_
Statistical inference techniques for graph summarization draw inferences about the original graph while preserving its most significant characteristics and reducing its complexity. These techniques can be divided into two groups: pattern mining and sampling. Pattern mining methods recognize the patterns within a graph and use them to create a condensed representation of the graph, while sampling methods select a proportion of the graph's nodes and/or edges at random [32].
Pattern mining techniques use statistical methods to identify the patterns or subgraphs that are most representative of the whole graph. These patterns [68] or subgraphs [69] can then be used to create a summary of the graph that preserves its essential characteristics.
Conversely, the basic idea of graph sampling is to select a random subset of nodes or edges from a graph and use this subset to estimate the properties of the entire graph. By analyzing the properties of the sampled subset, we can draw conclusions and derive insights based on the graph data [11]. For example, graph sampling can be used to summarize large and complex graphs where it may be impractical or impossible to analyze the entire graph. Sampling can also be used to estimate some of a graph's properties, such as the degree distribution, clustering coefficient, or other graph statistics. These estimates can be used to construct a summary of the graph that captures its key characteristics [36]. There are various sampling techniques that can be used for graph summarization, including random sampling [70, 71], stratified sampling [72], and snowball sampling [73]. For example, the Node2vec approach [71] generates random sequences of nodes within a graph, known as walks, which can be used to sample the graph. The Node2vec algorithm employs a biased random walk technique to produce these walks, balancing breadth-first and depth-first sampling. Once the random walks are generated, the Node2vec algorithm uses the Skip-gram model to learn node embeddings from them. The learned node embeddings capture the local graph structure and can be used to compute various graph summary statistics, such as the clustering coefficient, the degree distribution, and community structures.
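The sketch below shows a simplified, unbiased version of such walk generation on a toy adjacency list; the actual Node2vec walk additionally biases each transition with a return parameter \(p\) and an in-out parameter \(q\), which we omit here.

```python
import random

# Toy graph as an adjacency list.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}

def random_walk(graph, start, length):
    """Generate one unbiased random walk of at most `length` nodes."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph[walk[-1]]
        if not neighbors:           # dead end: stop early
            break
        walk.append(random.choice(neighbors))
    return walk

# A handful of walks per node; these sequences would then be fed to a
# Skip-gram model to learn the node embeddings.
walks = [random_walk(graph, node, length=5)
         for node in graph for _ in range(2)]
```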
_Remarks:_ There are benefits and drawbacks associated with each of these techniques, and the choice of technique depends on the specific problem and data at hand. For instance, one major drawback of sampling methods based on random walks is that the quality of the graph summary can be affected by the choice of the walk's starting node. Given different starting nodes, the random walk might cover different parts of the graph, which can lead to different graph summaries. This can make the results of the graph summarization less reliable and reproducible.
### _Goal-Driven_
Goal-driven techniques for graph summarization involve constructing a graph summary that is tailored to a specific application or task. They are a powerful tool for capturing specific features or relationships in a graph that are relevant to a specific application or task. By optimizing the graph summary to a specific goal, it is possible to create a more effective and efficient summary that can be used to derive better insights and make better decisions [74]. Significant goal-driven techniques for graph summarization include utility-driven [75] and query-driven [76] techniques.
Utility-driven techniques focus on the challenge of summarizing large graphs in a way that preserves their essential properties and structure, while also maximizing their utility for downstream tasks. In other words, the aim is to create a compact summary while preserving the "useful" information in the graph [75]. In this category, human reviewers evaluate the utility of the summary [77] against the specific downstream task the summarization is intended to support. Example downstream tasks include node classification and link prediction [78].
Query-driven techniques summarize graphs by identifying subgraphs or patterns in the larger graph that are relevant to the target downstream task [79]. Querying a graph involves formulating queries in a query language that specifies the particular subgraph or pattern to look for in the larger graph. The result is a subgraph that matches the query, which can then be used as a building block for the graph summary [76]. For example, in a node classification task [80], a graph query might be used to identify a subgraph of nodes that have comparable characteristics or connections. This subgraph can then be used as a summary representation of the larger graph to support the node classification task. Graph querying can also be combined with other graph summarization techniques, such as clustering [81] and feature selection [82], to create a more comprehensive summary that captures the essential properties of the original graph, while also being tailored to a specific downstream task.
_Remarks:_ Although goal-driven graph summarization can be helpful for improving the interpretability of large graphs because they reduce complexity, the choice of summarization technique will depend on the specific goals of the analysis. For example, some techniques may be better at preserving the global properties of the graph, while others may be better at capturing local structures. The choice of technique will also depend on the available computational resources and the complexity and size of the original graph.
### _Why GNNs for Graph Summarization?_
In recent times, deep learning has gained significant prominence and is now considered one of the most effective forms of AI due to its high accuracy. Conventional deep learning methods have shown that they perform extremely well with Euclidean data (e.g., images, signals, and text), and now there are a growing number of applications in non-Euclidean domains (e.g., graphs and manifold structures). As a deep learning approach, GNNs are multi-layer neural networks that learn on graph structures to ultimately perform graph-related tasks like classification, clustering, pattern mining, and summarization [83].
As mentioned, traditional graph summarization approaches are mostly based on conventional machine learning or graph-structured queries, such as degree, adjacency, and eigenvector centrality, where the aim is to find a condensed graphical representation of the whole graph [7, 8]. However, the
pairwise similarity calculations involved in these approaches demand a considerably high level of computational power. The explicit learning capabilities of GNNs skirt this problem. Additionally, powerful models can be built from even low-dimensional representations of attributed graphs [84]. Unlike standard machine learning algorithms, with a GNN, there is no need to traverse all possible orders of the nodes to represent a graph. Instead, GNNs consider each node separately without taking the order of the input nodes into account. This avoids redundant computations.
The major advantage of GNN models for graph summarization over traditional methods is the ability to use low-dimensional vectors to represent the features of large graphs [3]. Additionally, the message-passing mechanism used by GNNs to communicate information from one node to the next has been the most successful learning framework for learning the patterns and neighbours of nodes and the sub-graphs in large graphs [13]. It is also easy to train a GNN in a semi- or unsupervised way to aggregate, select, or transform graphs into low-dimensional representations [14]. In this regard, recent successes in graph summarization with GNNs point to promising directions for new research. For example, the GNN models developed by Brody et al. [85] and Goyal et al. [86] represent dynamic graphs in low dimensions, providing a good foundation for extending GNNs to more complex dynamic graphs.
## IV Graph Summarization with GNNs
This section provides an overview of recent research into graph summarization with GNNs. The subsections cover the four main categories of approach: Recurrent Graph Neural Networks (RecGNNs), Convolutional Graph Neural Networks (ConvGNNs), Graph Autoencoders (GAEs), and Graph Attention Networks (GATs). Within each subsection, we review the most notable approaches and the contributions each has made to the field.
### _RecGNN-based Approaches_
RecGNNs are early implementations of GNNs, designed to acquire node representations through a generalized recurrent neural network (RNN) architecture. Within these frameworks, information is transmitted between nodes and their neighbours to reach a stable equilibrium [83, 87]. A node's hidden state is continuously updated by
\[h_{i}^{t}=\sigma(x_{i}^{t},h_{j}^{(t-1)},e_{ij}) \tag{1}\]
where \(h_{i}^{t}\) represents the hidden state of the RecGNN cell at time step \(t\), \(x_{i}^{t}\) is the feature vector of node \(u_{i}\) at time step \(t\), \(h_{j}^{(t-1)}\) is the hidden state vector of a neighboring node \(u_{j}\) in the previous time step \(t-1\), \(e_{ij}\) is the edge embedding between the nodes \(u_{i}\) and \(u_{j}\), and \(\sigma\) is a non-linear function that combines the inputs and produces the new embedding. RecGNNs update the hidden state of each node by considering the input and hidden states of neighbouring nodes. Message passing operations are computed based on: the input feature vector of each node; the hidden state vectors of its neighbouring nodes in the previous time step; and the edge feature vectors between the node and its neighbouring nodes [88].
RecGNN-based approaches for graph summarization mostly focus on graph sampling and embedding by generating sequences from graphs and embedding those sequences into a continuous vector space at lower dimensions. The architectures based on RNNs generally comprise long short-term memory units (LSTMs) and/or gated recurrent units (GRUs).
#### Iv-A1 LSTM-based Approaches
LSTMs are a class of RNNs that use a sequence of gates to concurrently model long- and short-term dependencies in sequential data [89]. The modified architecture of an LSTM to handle graph-structured data is known as a graph LSTM [90]. Typically, the input to the model consists of a sequence of either graph nodes or edges, which are processed in order using LSTM units. At each time step, the model updates its internal state based on the input node or edge and the previous state, as shown in Equation 2.
\[h_{t}=LSTM(x_{t},\sigma(h_{t-1},N(h_{t-1}))) \tag{2}\]
where \(h_{t}\) is the hidden state of the \(LSTM\) at time step \(t\), \(x_{t}\) is the input feature vector of the graph at time step \(t\), and \(\sigma\) is an aggregation function, such as a sum, mean, or max pooling function, that combines the hidden states of neighbouring nodes. \(N(h_{t-1})\) represents the hidden states of the neighbouring nodes in the previous time step \(t-1\), and \(LSTM\) is a standard LSTM cell that updates the hidden state based on the input features and the previous hidden state.
By repeating this process over multiple time steps, the model captures the dependencies in the graph and generates a final hidden state that summarizes all the information contained in the entire graph. This makes it possible to preserve a record of both the previous inputs and the structure of the graph. After processing all the nodes and edges, the final state of the graph LSTM is used as the summary representation of the graph. This summary vector can then be used as input to downstream machine learning models or as a feature for other graph analysis tasks [91].
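A minimal sketch of one update step of Equation 2, assuming mean pooling as the aggregation \(\sigma\) and a standard `torch.nn.LSTMCell` shared across all nodes; the toy graph and dimensions are illustrative.

```python
import torch
import torch.nn as nn

num_nodes, feat_dim, hid_dim = 4, 8, 16
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}   # toy graph

cell = nn.LSTMCell(input_size=feat_dim, hidden_size=hid_dim)
x = torch.randn(num_nodes, feat_dim)    # node features x_t
h = torch.zeros(num_nodes, hid_dim)     # hidden states h_{t-1}
c = torch.zeros(num_nodes, hid_dim)     # cell states

# One step of Equation 2: mean-pool neighbour hidden states (the sigma),
# then update every node with the shared LSTM cell.
h_agg = torch.stack([h[neighbors[i]].mean(dim=0) for i in range(num_nodes)])
h, c = cell(x, (h_agg, c))
```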
LSTM-based approaches for graph summarization have proven to be effective with a range of tasks, such as graph clustering, graph classification, and link prediction. For example, Taheri et al. [92] leveraged various graph-LSTM-based methods to generate sequences from a graph structure, including breadth-first search, random walk, and shortest path. Finally, they trained a graph LSTM to embed those graph sequences into a continuous vector space. The hidden states at time step \(t\) of the encoder are as follows:
\[h_{t}^{enc}=LSTM_{enc}(x_{t},h_{t-1}^{enc}) \tag{3}\]
where \(x_{t}\) is the input vector at time \(t\) and \(h_{t}^{enc}\) denotes the hidden state at time step \(t\) in a given trained encoder \(LSTM_{enc}\). Similarly, the hidden state at time step \(t\) of the decoder is defined in Equation 4.
\[h_{t}^{dec}=LSTM_{dec}(x_{t-1},h_{t-1}^{dec}) \tag{4}\]
Li et al. [93] proposed a graph summarization technique that uses a graph LSTM and a ConvGNN to improve question
answering with knowledge graphs. In this approach, the questions, entities, and relations are represented as vectors with very few dimensions, but the key properties of the relations are well preserved. Jin et al. [94] also developed an approach to learn representations of graphs based on a graph LSTM. Here, graph representations of diverse sizes are encoded into low-dimensional vectors.
Several studies have also focused on evolving node patterns in dynamic graphs. For instance, Zhang et al. [91] applied a graph LSTM, proposing a one-stage model called DynGNN. The model embeds an RNN into a GNN model to produce a representation in compact form. Similarly, Ma et al. [95] introduced a dynamic RecGNN model that relies on a graph LSTM to model the dynamic information in an evolving graph while reducing the graph's dimensionality and learning manifold structures. Node information is continuously updated by: recording the time intervals between edges; recording the sequence of edges; and coherently transmitting information between nodes. Another work by Goyal et al. [86] also presents a method for learning temporal transitions in dynamic graphs. This framework is based on a deep architecture that mainly consists of dense and recurrent layers. Model size and the number of weights to be trained can be a problem during training, but the authors overcome this issue with a uniform sampling of nodes.
#### Iii-B2 GRU-based Approaches
GRUs are gated RNN units related to LSTMs but with fewer training parameters. The key distinction between a GRU and an LSTM is the number of gates in each model: LSTMs have three gates, "input", "output", and "forget", while GRUs are less complex with only two gates, "reset" and "update" [96]. GRUs are typically formulated as:
\[h_{t}=GRU(x_{t},\sigma(h_{t-1},N(h_{t-1}))) \tag{5}\]
where \(h_{t}\) is the hidden state of a GRU-based RecGNN at time step \(t\), \(x_{t}\) is the input feature vector of the graph at time step \(t\), and \(\sigma\) is an aggregation function, such as a sum, mean, or max pooling function, that combines the hidden states of neighbouring nodes. \(N(h_{t-1})\) represents the hidden states of the neighbouring nodes in the previous time step \(t-1\), and \(GRU\) is a GRU cell that updates the hidden state based on the input features and the previous hidden state.
Again, by repeating this process over several time steps, the model learns the dependencies that exist between the nodes in the graph, allowing it to construct a final hidden state that summarizes all the graph's information. Taheri et al. [97] proposed the DyGrAE model, which is able to learn the structure and temporal dynamics of a dynamic graph while condensing its dimensions. A GRU model captures the graph's topology, while an LSTM model learns the graph's dynamics. Ge et al. [98] developed a gated recursive algorithm that not only solves some node aggregation problems but also extracts deeply dependent features between nodes. The resulting model, called GR-GNN, is based on a GRU that performs the aggregation and captures the graph structure. Li et al.'s [99] GRU model encodes an input graph into a fixed-size vector representation, which is then fed into a sequence decoder to generate the summary as the output. The model effectively captures the structural information and dependencies among the nodes and edges in the input graph, which is crucial for producing a coherent and informative graph summary.
Fig. 2: Traditional graph summarization approaches and the taxonomy of GNN-based methods.

_Remarks_: RecGNN-based techniques can capture a substantial amount of information during their recursive neighbourhood expansions by using recurrent units to identify the long-term dependence across layers [100]. Further, this process is quite efficient. Notably, RecGNN models can also improve numerical stability during training if they incorporate convolutional filters.
### _ConvGNN-based Approaches_
The general idea of ConvGNN-based approaches is to generalize CNNs to graph-structured data [83]. The primary distinction between a ConvGNN and a RecGNN is the way information is propagated: while ConvGNNs apply different weights at each layer, RecGNNs apply the same weight matrices iteratively until an equilibrium is reached [101].
In other words, ConvGNN models are a form of neural network architecture that supports graph structures and aggregates node information from the neighbourhood of each node in a convolutional manner. ConvGNN models have demonstrated a strong expressive capacity for learning graph representations, resulting in superior performance with graph summarization [101].
ConvGNN-based approaches fall into two categories: spectral-based and spatial-based methods [102].
#### Iii-B1 Spectral-based Approaches
Spectral-based methods describe graph convolutions based on spectral graph theory and graph signal filtering. In spectral graph theory, the multiplication of the graph with a filter (the convolution) is defined in a Fourier domain [103].
Although the computation contains well-defined translational properties, it is relatively expensive, and the filters are not generally localized. Since the level of complexity grows with the scale of the graphs, one solution is to only check a limited number of neighbours using Chebyshev polynomial approximation [104]. This approximation has led to several studies that apply it to the spectral graph convolution. For example, Defferrard et al. [105] generalized CNNs to graphs by designing localised convolutional filters on graphs. They also introduced a graph summarization procedure that groups similar vertices together and a graph pooling procedure that produces a higher filter resolution. This work has been used as the basis of several studies that draw on Chebyshev polynomials to speed up convolution computations. As a variant, Kipf et al. [106] introduced several simplifications to the original framework to improve the model's classification performance and scalability to large networks. These simplifications formed the basis of the GCN (Graph Convolutional Network) model, which has since become a popular choice for various graph-related tasks. Their propagation rule is expressed in Equation 6:
\[h_{i}^{(k+1)}=\sigma(W^{(k+1)}\sum_{j\in N_{i}}\frac{1}{c_{ij}}h_{j}^{k}) \tag{6}\]
where \(h_{i}^{(k+1)}\) represents the \(i\)-th node's hidden state at the \((k+1)\)-th iteration of the GCN, \(N_{i}\) is the set of neighboring nodes of the \(i\)-th node, and \(c_{ij}\) is a normalization constant based on the number of edges that node \(i\) shares with its neighbor \(j\). \(W^{(k+1)}\) is the weight matrix for the \((k+1)\)-th layer of the GCN. The propagation rule computes the new feature vector \(h_{i}^{(k+1)}\) for node \(i\) in the \((k+1)\)-th layer by taking the sum of the feature vectors \(h_{j}^{k}\) of all its neighbors \(j\in N_{i}\), each weighted by \(1/c_{ij}\). This sum is then multiplied by the weight matrix \(W^{(k+1)}\), and an activation function \(\sigma\) is applied.

Fig. 3: A timeline of graph summarization and reviewed techniques.
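A minimal dense sketch of the propagation rule in Equation 6, assuming the normalization constant \(c_{ij}\) is instantiated as the self-loop-augmented node degree; Kipf and Welling's actual model uses symmetric normalization, and the toy graph below is illustrative.

```python
import torch

def gcn_layer(adj: torch.Tensor, h: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """One GCN propagation step (Equation 6) with 1/c_ij as row normalization."""
    adj_hat = adj + torch.eye(adj.size(0))     # add self-loops
    deg = adj_hat.sum(dim=1, keepdim=True)     # (augmented) node degrees
    h_agg = (adj_hat / deg) @ h                # weighted sum over neighbours
    return torch.relu(h_agg @ weight)          # multiply by W^{(k+1)}, apply sigma

# Toy usage: 4 nodes, 8-d input features, 16-d output layer.
adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 0., 1.],
                    [1., 0., 0., 1.],
                    [0., 1., 1., 0.]])
h0 = torch.randn(4, 8)
W1 = torch.randn(8, 16)
h1 = gcn_layer(adj, h0, W1)
```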
More recently, several successful variations of the spectral method have been proposed, e.g., S-GCN [107] and later SIGN [108]. S-GCN is a simplified GCN model but does not come with any performance compromises in terms of graph summarization. The idea behind the S-GCN model is to first convert large convolutional filters into smaller ones and then to remove the final non-linear layers. Inspired by previous ConvGNN models, Rossi et al. [108] subsequently proposed SIGN, which scales ConvGNNs to extremely large graphs by combining various amendable graph convolutional filters for faster training and sampling purposes.
Another prominent line of research in spectral-based ConvGNN approaches revolves around transforming graph objects, e.g., embedding. For example, Jiang et al. [109] introduced a hierarchical ConvGNN for graph embedding. This team built upon a spectral ConvGNN to provide an effective representation learning scheme for end-to-end graph classification tasks. More specifically, they proposed a framework for learning graph feature embeddings while also taking the network architecture and relationships between subjects into account. Deng et al. [110] introduced a multilevel framework to enhance the scalability and accuracy of embedding in an unsupervised manner. The model initially generates a new, efficient graph that contains information about both the node properties and the topology of the original graph. It then repeatedly aggregates the nodes with high spectral similarity, breaking the graph down into numerous smaller graphs.
#### Iii-A2 Spatial-based Approaches
Spatial-based methods work on the local neighborhood of nodes, aggregating node representations from neighboring nodes to understand their properties. ConvGNNs of this kind imitate the convolution operations of CNNs by defining convolutions directly in the graph domain. Unlike spectral-based approaches, which are relatively expensive and time-consuming to compute, spatial-based approaches are simple in structure and have produced state-of-the-art results on graph summarization tasks [111].
As a closely-related approach to Kipf and Welling's model [106], GraphSAGE extends their framework to the inductive setting [112]. GraphSAGE was the first approach to introduce node-wise sampling coupled with minibatch training for node embeddings using spatial convolutions. The updated propagation rule, which uses a mean aggregator function, is formulated as follows:
\[h_{i}^{(k+1)}=\sigma(W^{(k+1)}CONCAT(h_{i}^{(k)},\frac{1}{|N_{i}|}\sum_{j\in N _{i}}h_{j}^{k})) \tag{7}\]
Here, a mean aggregator function computes the average of the feature vectors of the neighbors of node \(u_{i}\) in the \(k\)-th layer. The concatenation operation concatenates the feature vector of node \(u_{i}\) in the \(k\)-th layer with the average of the feature vectors of its neighbors. The resulting vector has twice the dimensionality of the input feature vector, as it contains both the original node features and the aggregated neighbor features. GraphSAGE is a more flexible model that allows for different types of aggregator functions to be used. This makes it a good choice for graphs with heterogeneous node types, where different types of information need to be aggregated differently. GraphSAGE has also been shown to perform well on tasks such as graph classification and node classification, which are related to graph summarization.
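A minimal sketch of Equation 7 with a mean aggregator; the neighbour lists, dimensions, and weight initialization are illustrative stand-ins rather than the reference implementation.

```python
import torch

def sage_layer(h, neighbors, weight):
    """One GraphSAGE step with a mean aggregator (Equation 7)."""
    n = h.size(0)
    # Mean of the neighbours' features for every node.
    h_neigh = torch.stack([h[neighbors[i]].mean(dim=0) for i in range(n)])
    # CONCAT doubles the dimensionality: [own features || aggregated features].
    h_cat = torch.cat([h, h_neigh], dim=1)
    return torch.relu(h_cat @ weight)

neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}   # toy graph
h = torch.randn(4, 8)
W = torch.randn(16, 8)   # input dim is 2 * 8 after the concatenation
h_next = sage_layer(h, neighbors, W)
```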
Chen et al. [113] proposed a similar node-wise sampling approach, named FASTGCN, which incorporates an importance sampling method to produce better summaries. Later, Huang et al. [114] and Zeng et al. [115] respectively introduced layer-wise and graph-wise sampling methods to further enhance performance. While Huang et al. [114] focused on the issue of redundancy in node-wise sampling, Zeng et al.'s GraphSAINT [115] looks to correct bias and variance in minibatch estimators when sampling subgraphs for training. However, the most recent variation of GraphSAGE was developed by Li et al. [116]. This implementation, called bipartite GraphSAGE, is designed for bipartite graphs that contain different node types and inter-layer edges. The framework contains a user-item graph that supports both user and item embedding, while the model embeds the nodes into two different feature spaces: one corresponding to information about the users, the other corresponding to the items.
Many aggregation-based methods have also been introduced to summarize graphs without sacrificing too much information within the original graphs. The summarized graph can then be used to assist in further network analysis and graph mining tasks. For example, Yan et al. [17] proposed a summarization within a ConvGNN for faster training, data denoising, and interpretability. They used an end-to-end neural network model with a node grouping layer to summarize the graph by minimizing its dimensionality. Hu et al. [117] took structural similarity into consideration, aggregating nodes that share a similar structure into hypernodes. The summarized graph is then refined to restore each node's representation. To this end, a deep hierarchical ConvGNN (H-GCN) architecture with several coarsening operation layers followed by several refinement operation layers performs semi-supervised node classification. The refinement layers are designed to restore the structure of the original graph for graph classification tasks.
The most recent developments in convGNNs demonstrate the exciting potential graph summarization holds for a range of applications in healthcare and human motion analysis [118, 119]. Wen et al. [118], for example, presented a promising approach to diagnosing autism spectrum disorder by parsing brain structure information through a multi-view graph convolution network. Dang et al. [119] introduced a new type of graph convolution network, called a multi-scale residual graph convolution network, that shows superior performance in predicting human motion compared to other state-of-the-art models.
_Remarks_: Most existing ConvGNN models for graph summarization simply presume the input graphs are static. However, in the real world, dynamically evolving graphs/networks are more common. For instance, in a social network, the number of users, their connected friends, and their activities are constantly changing. To this end, learning ConvGNNs on static graphs may not yield the best results. Hence, more research on dynamic ConvGNN models is needed to increase the quality of summaries with large-scale dynamic graphs.
### _GAE-based Approaches_
An autoencoder is a neural network that consists of an encoder and a decoder. Generally, the encoder transforms the input data into a condensed representation, while the decoder reconstructs the actual input data from the encoder's output [120]. Graph autoencoders, or GAEs, are a type of GNN that can be applied over graph structures, allowing the model to learn a compact and informative representation of a graph. Lately, GAEs have garnered increasing interest for their ability to summarize graphs due to their significant potential with dimensionality reduction [84]. The key formulation of a GAE is:
\[z=encoder(G,A) \tag{8}\]
where \(z\) is the latent representation of the graph, \(G\) is the graph structure, \(A\) is the adjacency matrix of the graph, and \(encoder\) is a function that maps the graph and its adjacency matrix to the latent space. A decoder function then maps the latent representation back to the original graph structure:
\[G^{\prime}=decoder(z) \tag{9}\]
The goal of a GAE is to learn an encoder and decoder that minimize the reconstruction error between the original graph and the decoded graph, while also encouraging the latent representation to capture meaningful information about the graph's structure [121]. GAEs can be trained using various loss functions, such as binary cross-entropy or mean squared error. They can also be extended to incorporate additional constraints or regularization techniques to improve the quality of the learned graph representation [122].
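A minimal sketch of Equations 8 and 9, assuming a one-layer GCN-style encoder and an inner-product decoder trained with binary cross-entropy on a toy graph; real GAEs typically use deeper encoders, and VGAE adds the variational machinery on top.

```python
import torch
import torch.nn.functional as F

# Toy graph: dense adjacency matrix and node features.
A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 0., 1.],
                  [1., 0., 0., 1.],
                  [0., 1., 1., 0.]])
X = torch.randn(4, 8)

W_enc = torch.randn(8, 4, requires_grad=True)      # encoder weights
optimizer = torch.optim.Adam([W_enc], lr=0.01)

for _ in range(200):
    # Encoder (Equation 8): one GCN-style layer maps (G, A) to latents z.
    Z = torch.relu((A + torch.eye(4)) @ X @ W_enc)
    # Decoder (Equation 9): inner product predicts edge probabilities.
    A_rec = torch.sigmoid(Z @ Z.T)
    loss = F.binary_cross_entropy(A_rec, A)        # reconstruction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```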
The majority of GAE-based approaches for graph summarization use combined architectures that include ConvGNNs or RecGNNs [123, 26, 124]. For example, Kipf et al. [26] proposed a variational graph autoencoder (VGAE) for undirected graphs based on their previous work on spectral convolutions [106]. VGAE incorporates a two-layer ConvGNN model based on the variational autoencoder in [123]. The main concept of VGAE is to embed the input into a distribution rather than a point and, from there, to choose a random sample from the distribution rather than having it created directly by the encoder. As an extension, Hajiramezanali et al. [124] constructed a variational graph RNN by integrating a RecGNN and a GAE to model the dynamics of the node attributes and the topological dependencies. The aim of the model is to learn an interpretable latent graph representation as well as to model sparse dynamic graphs.
There are also several aggregation-based approaches built on GAEs. These are generally designed to formulate the challenges with graph clustering tasks as a summarization problem [125, 126, 127, 128]. For example, Cai et al. [125] suggested a graph recurrent autoencoder model for use in clustering attributed multi-view graphs. The fundamental concept behind the approach is to consider both the characteristics that all views have in common and those that make each graph view special. To this end, the framework includes two separate models: the Global Graph Autoencoder (GGAE) and the Partial Graph Autoencoder (PGAE). The purpose of the GGAE is to learn the characteristics shared by all views, while the PGAE captures the distinct features. The cells are grouped into clusters using a soft K-means clustering algorithm after the output is obtained. Fan et al. [127] introduced the One2Multi Graph Autoencoder (OMGAE) for multi-view graph clustering. OMGAE leverages a shared encoder to learn a common representation from multiple views of a graph and uses multiple decoders to reconstruct each view separately. Additionally, OMGAE introduces a new attention mechanism that assigns different weights to each view during the clustering process based on their importance. The model is trained to minimize a joint loss function that considers both the reconstruction error and the clustering performance. Mrabah et al. [128] devised a new graph autoencoder model for attributed graph clustering called GAE-KL. The model uses a new formulation of the objective function, which includes a KL-divergence term, to learn a disentangled representation of the graph structure and the node attributes. The disentangled representation is then used to cluster the nodes based on their similarity in terms of both structure and attributes. The authors also introduced a new evaluation metric called cluster-based classification accuracy (CCA) to measure clustering performance.
Recently, Salha et al. [129] proposed a graph autoencoder architecture that uses one-hop linear models to encode and decode graph information. The approach simplifies the model while still achieving high performance with graph summarization tasks, such as node clustering and graph classification. Uniquely, this paper presents a direction for designing graph autoencoder models that balances performance with simplicity.
_Remarks_: Most GAE-based approaches are typically unregularized and mainly focus on minimizing the reconstruction error while ignoring the data distribution of the latent space. However, this might lead to poor graph summarization when working with sparse and noisy real-world graph data. Although there are a few studies on GAE regularization [130], more research is needed in this regard.
### _GAT-based Approaches_
The idea of an attention mechanism was first proposed by Bahdanau and his colleagues in 2014 [139]. The goal was to allow for modelling long-term dependencies in sequential data and to improve the performance of encoder-decoder models. Essentially, attention allows the decoder to concentrate on the most relevant part of the input sequence, with the most relevant vectors receiving the highest weights. Graph attention networks or GATs [140] are based on the same idea. They use attention-based neighborhood aggregation, assigning different weights to the nodes in a neighborhood. This type of model is one of the most popular GNN models for node aggregation, largely because it reduces storage complexity along with the number of nodes and edges. The key formulation for a GAT is:
\[h_{i}=\sigma(\sum_{j\in N(i)}\alpha_{ij}Wh_{j}) \tag{10}\]

where \(h_{i}\) is the hidden feature vector of node \(u_{i}\), \(N(i)\) is the set of neighbouring nodes of \(u_{i}\), \(h_{j}\) is the hidden state of neighbouring node \(u_{j}\), \(W\) is a weight matrix, and \(\alpha_{ij}\) is the attention coefficient that measures the importance of node \(u_{j}\) to node \(u_{i}\). The attention coefficients are computed as:
\[\alpha_{ij}=softmax(e_{ij}) \tag{11}\]
where \(e_{ij}\) is a scalar energy value computed as:
\[e_{ij}=LeakyReLU(a^{T}\cdot[Wh_{i}||Wh_{j}]) \tag{12}\]
where \(a\) is a learnable parameter vector, and \(||\) denotes concatenation. The \(LeakyReLU\) function introduces non-linearity into the model and helps prevent vanishing gradients. The softmax function normalizes the energy values across all neighboring nodes so that the attention coefficients sum to one.
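The snippet below sketches Equations 10-12 for a single node over a small neighbourhood; variable names follow the equations, and the dimensions and random weights are illustrative.

```python
import torch
import torch.nn.functional as F

d_in, d_out = 8, 16
W = torch.randn(d_in, d_out)        # shared weight matrix
a = torch.randn(2 * d_out)          # learnable attention vector

h = torch.randn(4, d_in)            # node 0 and its three neighbours
Wh = h @ W

# Equation 12: e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) for every neighbour j.
i, neigh = 0, [1, 2, 3]
e = torch.stack([F.leaky_relu(a @ torch.cat([Wh[i], Wh[j]])) for j in neigh])

alpha = F.softmax(e, dim=0)         # Equation 11: normalize over the neighbourhood
h_i = torch.relu((alpha.unsqueeze(1) * Wh[neigh]).sum(dim=0))   # Equation 10
```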
By computing attention coefficients for neighboring nodes, GATs are able to selectively focus on the most important parts of the graph for each node. This allows the model to adaptively adjust to different graph structures and tasks. The attention mechanism also means GATs can incorporate node and edge features into the model, making them well-suited to summarization tasks, such as node classification with complex graphs [131].
Today, GATs are considered to be one of the most advanced models for learning with large-scale graphs. However, recently Brody et al. [85] argued that GATs do not actually compute dynamic attention; rather, they only compute a restricted form of static attention. To support their claim, they introduced GATv2, a new version of this type of attention network, which is capable of expressing problems that require computing dynamic attention. Focusing on the importance of dynamic weights, these authors argue that the problem of only supporting static attention can be fixed by changing the sequence of internal processes in the GAT equations, as shown in Equation 13.
\[e_{ij}=a^{T}LeakyReLU(W\cdot[h_{i}||h_{j}]) \tag{13}\]
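In code, the two scoring functions differ only in where the non-linearity sits; a self-contained sketch with illustrative dimensions:

```python
import torch
import torch.nn.functional as F

d_in, d_out = 8, 16
h_i, h_j = torch.randn(d_in), torch.randn(d_in)

# GAT (Equation 12): a^T is applied before the non-linearity, so the
# neighbour ranking is fixed regardless of the query node (static attention).
W = torch.randn(d_in, d_out)
a = torch.randn(2 * d_out)
e_gat = F.leaky_relu(a @ torch.cat([W.T @ h_i, W.T @ h_j]))

# GATv2 (Equation 13): W and LeakyReLU come first, letting the ranking
# depend on the query node (dynamic attention).
W2 = torch.randn(d_out, 2 * d_in)
a2 = torch.randn(d_out)
e_gatv2 = a2 @ F.leaky_relu(W2 @ torch.cat([h_i, h_j]))
```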
As another variation of GAT, Xie et al. [25] proposed a novel multi-view graph attention network named MGAT, to support low-dimensional representation learning based on an attention mechanism in a multi-view manner. The authors focus on a view-based attention approach that not only aggregates view-based node representations but also integrates various types of relationships into multiple views.
Tu et al. [134] explored the benefits of using graph summarization and refining bipartite user-item graphs for recommendation tasks. They applied a conditional attention mechanism to task-based sub-graphs to determine user preferences, which emphasizes the potential of summarizing and enhancing knowledge graphs to support recommender systems. Salehi et al. [132] defined a model based on an autoencoder architecture
with a graph attention mechanism that learns low-dimensional representations of graphs. The model compresses the information in the input graph into a fixed-size latent vector, which serves as a summary of the entire graph. Through the use of attention, the model is able to discern and prioritize critical nodes and edges within the graph, making it more effective at capturing the graph's structural and semantic properties.
More recent works on GATs conducted by Chen et al. [135] and Li et al. [141] demonstrate the potential of graph attention networks for summarizing and analyzing complex graph data in various domains. Chen et al. proposed a multi-view graph attention network for travel recommendations. The model takes several different types of user behaviors into account, such as making hotel reservations, booking flights, and leaving restaurant reviews, and, in the process, learns an attention mechanism to weigh the importance of different views for a recommendation. Li et al. developed a multi-relational graph attention network for knowledge graph completion. The model integrates an attention mechanism and edge-type embeddings to capture the complex semantic relations between entities in a knowledge graph.
_Remarks_: GAT-based approaches use attention layers to overcome the limitations of prior studies, particularly those that rely on graph convolutions. While convolutional-based approaches use predefined weights for the neighbors of a node, GAT-based approaches modify the aggregation process through attention. Given the recent advancements in this area, we expect to see more research in the future on using GATs to create condensed representations of both static [140] and dynamic [85] graphs.
## V Graph Reinforcement Learning
Reinforcement learning (RL) is a mathematical framework for sequential decision-making that allows an agent to learn via trial and error in an interactive setting through feedback on its actions. Due to the success and fast growth of reinforcement learning in interdisciplinary fields, scholars have recently been inspired to investigate reinforcement learning models for graph-structured data, i.e., graph reinforcement learning or GRL [142]. GRL is largely built on Bellman's theory [143], where nodes represent states and edges represent actions. The goal is to learn a policy that maximizes the expected total reward across a series of actions. This can be done using algorithms such as Q-learning, which updates Q-values based on observed rewards and transitions, or policy gradient methods, which update the policy parameters directly [144].
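A toy tabular Q-learning loop in this spirit, where states are nodes, actions are outgoing edges, and reaching a designated target node earns a reward; the graph, reward, and hyperparameters are all illustrative.

```python
import random

# States are nodes; an action moves along an outgoing edge.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
target, alpha, gamma, eps = 4, 0.1, 0.9, 0.2
Q = {(s, a): 0.0 for s in graph for a in graph[s]}

for _ in range(500):                        # training episodes
    s = random.choice(list(graph))
    for _ in range(20):                     # bounded episode length
        if s == target:
            break
        # Epsilon-greedy choice among the outgoing edges of s.
        if random.random() < eps:
            a = random.choice(graph[s])
        else:
            a = max(graph[s], key=lambda n: Q[(s, n)])
        r = 1.0 if a == target else 0.0     # reward only at the target node
        best_next = max(Q[(a, n)] for n in graph[a])
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = a
```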
There is also a flourishing line of GRL research that seeks to use this paradigm to evaluate and improve the quality of graph summaries. For example, Amiri et al. [4] introduced a task-based GRL framework to automatically learn how to generate a summary of a given network. To provide an optimal solution for finding the best task-based summary, the authors made use of CNN layers in combination with a reinforcement learning technique. To improve the quality of the summary, the authors later proposed NetReAct [138], an interactive learning framework for graph summarization. The model uses human feedback in tandem with reinforcement learning to improve the summaries, while visualizing the document network. NetReAct also provides summaries at a range of sizes and allows the user to zoom in on each document group. In this way, the user can examine numerous summaries and, in so doing, comprehend the data at multiple levels. In another study, Wickman et al. [133] presented a graph sparsification framework empowered by a GRL to be used for any edge sparsification assignment with a specific target for reduction. The model takes an edge reduction ratio as its input, and a learning model decides how best to prune the edges. Notably, the graph's structural attributes are maintained throughout the process. Yan et al. [137] introduced a GRL approach to summarize geographic knowledge graphs. To obtain a more thorough understanding of the summarizing process, the model exploits components with spatial specificity and includes both the extrinsic and the intrinsic information in the graph. The authors also discuss the effectiveness of spatial-based models and compare the results of their model with models that include non-spatial entities.
Recently, many articles have discussed the potential of using GRLs to summarize and analyze complex graph data in domains like neuroscience and computer vision [136, 145]. For example, Zhao et al. [136] suggested a deep reinforcement learning scheme guided by a GNN as a way to analyze brain networks. The model uses a reinforcement learning framework to learn a policy for selecting the most informative nodes in the network and combines that with a GNN to learn the node representations. Also, Goyal et al. [145] presented a GNN-based approach to image classification that relies on reinforcement learning. The model represents images as graphs and learns graph convolutional filters to extract features from the graph representation. They showed that their model outperforms several state-of-the-art methods on benchmark datasets with both image classification and reinforcement learning tasks.
_Remarks_: GRLs for graph summarization have attracted increasing attention owing to their remarkable performance at improving the quality of graph summaries. Unlike other sequential data, such as images, text, or videos, the evolving patterns in graph-structured data usually have higher heterogeneity. Designing customized deep GRL architectures for the purpose of graph summarization stands to be a promising direction in the future.
## VI Published Resources
This section provides an overview of the benchmark datasets, evaluation metrics, and open-source implementations that are discussed in Sections IV and V of the surveyed literature.
### _Datasets_
Both synthetic and real-world datasets are used in the development of the field. Synthetic datasets are created by models based on manually-designed rules, while real-world datasets are collected from actual applications and used to evaluate the performance of proposed methods for practical use. The popular real-world datasets are divided into six
categories: citation networks, social networks, user-generated networks, bio-informatics networks, image/neuroimages, and knowledge graphs.
### _Evaluation Metrics_
GNNs are typically evaluated through tasks like node classification, graph classification, graph clustering, recommendation, and link prediction. To provide an overview of the evaluation criteria used in each study, we categorized each of the articles we reviewed based on the metrics they used to evaluate their methods. These metrics include accuracy, precision, recall, F1-score, and AUC-ROC. Table III lists the most-used evaluation metrics and their calculation formulas or descriptions.
### _Open-source Implementations_
Table IV provides a summary of the open-source implementations by model, language, platform, type of input graph, and repository link. The majority of these implementations are written in Python 3.x using popular frameworks such as PyTorch, TensorFlow, or Keras. The remaining implementations mostly use MATLAB.
## VII Applications
Graph summarization has several applications and spans a range of domains and tasks, as shown in Figure 4. In this section, we will delve into specific examples of how graph summarization might be used in typical applications in these areas.
### _Graph Partitioning_
Graph summarization is an important technique used in graph partitioning algorithms like clustering algorithms and community detection algorithms. This is because graph summarization reduces the size of large graphs while preserving their structural properties. Here, the basic idea behind graph summarization is to construct a smaller, coarser version of the original graph by merging nodes or edges together based on certain criteria, such as degree, attribute, or connectivity [146]. For example, in the field of biochemistry, where graph nodes represent atoms/proteins and edges represent their connections, a graph partitioning algorithm like community detection can be helpful for identifying chemical compounds [147] or the complexity of new proteins [148].
### _Graph Visualization_
Summarizing graphs is a key part of visualizing graphic information. This is because the summaries produced essentially provide an overview of a graph's structure, which can be visualized to help users explore and analyze large graphs. Additionally, because summarized graphs yield a more comprehensible representation of complex graphs, users can focus on the parts of the graph they deem most relevant [2]. This has been found useful in a wide range of applications, including social network analysis [149], bioinformatics [150], and cybersecurity [151]. Moreover, graph summarization can also be used to generate interactive visualizations that allow users to explore different levels of detail [152].
### _Pattern Discovery_
Graph summarization has several applications in pattern discovery, i.e., finding meaningful patterns and relationships in large datasets. In the context of graph data, graph summarization can be used to identify frequent and/or meaningful subgraphs or motifs [153]. This process can involve identifying clusters of similar subgraphs or identifying important nodes or edges. The discovered patterns can then be interpreted to gain insights into the underlying data. Social network analysis [154], bioinformatics [155], and recommendation systems [156] are just a few of the areas that have benefitted from pattern discovery through graph summaries. For example, in bioinformatics, Koyuturk et al. [155] used graph summarization techniques to identify recurring patterns in protein-protein interaction networks (PPINs), yielding important insights into the structure and function of these networks.
### _Graph Compression_
Summarizing graphs to compress data can help address some of the problems associated with storing, processing, and visualizing large and complex graphs [157]. Large graphs can take up significant amounts of memory, making them difficult to store and retrieve. Thus, by compressing a graph using summarization, the resulting smaller graph can be stored and retrieved more efficiently. Plus, summarizing a graph can help reduce the computational complexity of analyzing and processing it. For example, Kang et al. [158] proposed a novel approach for summarizing large-scale graphs that takes user preferences and interests into account, as well as the structural properties of the graph.
### _Graph Querying and Matching_
Graph querying involves finding subgraphs or patterns in a larger graph that match a given query. Graph summarization can be used to speed up querying graphs by reducing the size of the graph and by creating a summary graph that captures the essential features of the original graph [1]. For instance, Chen et al. [159] proposed a graph stream summarization structure that efficiently handles temporal range queries over streaming graph data. The authors focused on the problem of summarizing and querying large-scale graph data that are constantly changing over time, such as social or traffic networks.
Conversely, graph matching is the problem of finding a correspondence between the vertices or edges of two graphs that maximizes some similarity metric. Here, graph summarization can also be used to enable approximate matching, where the query graph is matched against the summary graph with some tolerance for differences in the graph structure. This can be useful in applications where exact matching is not feasible or desirable, such as in natural language processing [160] or image recognition [161]. For example, Li et al. [162] proposed a two-step approach for evolving subgraph matching on temporal graphs where the graph structure changes over time. The paper presents a method for constructing a summary graph to identify candidate matches for a subgraph within a temporal graph.
## VIII Future Directions
Despite the significant progress that graph summarization has made with the advent of GNNs, there are still unresolved issues that require additional exploration, as summarized in Figure 4. This section proposes six potential areas for future research.
### _Multi-layer Graphs_
Multi-layer graphs are made up of multiple layers of interconnected entities. They have recently been used to represent complex graphs where the entities and their connections are of heterogeneous types. Working with real multi-layer graphs can be challenging because they often contain large amounts of redundant or irrelevant data. Moreover, the traditional methods are not practically suitable for analyzing large-scale multi-layer graphs. To make these graphs easier to analyze, a number of GNN-based methods have been developed to support transformation-based summarization [56]. However, these techniques do not provide an interpretable representation of the properties relating to the nodes, edges, actors, and layers. Hence, more focus needs to be given to developing effective and comprehensive summarization techniques for multi-layer graphs.
### _Dynamic Graphs_
In the real world, the data that graphs represent can evolve over time, creating changes in a graph's topology, such as new edges that appear, nodes that disappear, or attributes that change over time. These dynamics can cause fundamental changes to the entire graph. Summarizing dynamic graphs typically involves boiling the graph down into a series of snapshots taken at various time increments. The model is then trained over these snapshots, yet a number of challenges can arise when attempting to extract dynamic features. To date, the approaches developed to tackle these problems have leveraged the methods and theories of attention mechanisms and GATs [85], and, currently, these are the most advanced models for learning with dynamic large-scale graphs and manifolds. However, this research is still in its infancy, and further study is required to extend these models to more complex graph networks.
### _Multi-label Graphs_
Deep graph convolutional models, particularly aggregation-based models, have shown great performance with single-label graph summarization tasks, such as classification. However, graphs in the real world are often naturally multi-label. Hence, it is necessary for researchers to move beyond single-label classification and consider the graphs where the nodes and edges have multiple labels. Despite a few recent works that manage to achieve some scalability [163], methods of dealing with multi-label graphs have not been fully investigated. In this learning scenario, identifying the information shared between multiple labels so as to reduce dimensionality is still a challenging task. Furthermore, the lack of labeled data for multi-label graph tasks poses a significant challenge in developing effective and scalable models for real-world applications. One potential solution for handling multi-label graphs is to explore the use of attention mechanisms, which can selectively emphasize relevant information from different labels and reduce the noise caused by irrelevant labels.
### _Task-based Summarization_
As the size of graphs continues to increase, our ability to understand them is suffering. Summarizing graphs is therefore essential to exposing important information, helping humans with sensemaking, and speeding up graph analysis. However, intuitively, we know that different purposes and tasks will require different summaries. For example, a graph summarization strategy to detect communities will almost certainly be different from the strategy taken to find the most influential nodes. This indicates that we should develop unique approaches to identifying similar patterns for each task we wish to undertake. Moreover, although task-based summarization is critical, this field of study has seen few successes [138, 4] and is still an underexplored area. Open research problems include how to perform task-based summarization on streaming and heterogeneous graphs and how to leverage human feedback in the learning process.
### _Evaluation Benchmarks_
The optimal outcome of a graph summarization process is a "good" summary of the original graph. However, evaluating the "goodness" of a summary is application-specific and depends on the task at hand. For example, sampling-based methods are evaluated based on the quality of the sampling, while aggregation-based methods are evaluated based on the quality of classification, and so on. Current studies commonly use comparisons between their method and one or more established methods to measure the quality of their results. For instance, metrics that have been used in the literature include information loss, ease of visualization, and sparsity [2]. However, more and different evaluation metrics are required for cases where the validation process becomes complex and more elements are involved, such as visualization and multi-resolution summaries.
Fig. 4: Applications and future directions.
### _Generative AI_
In the context of generative AI, graph summarization can be used to generate new graphs that have similar properties to the original ones. One application of graph summarization in generative AI is in the generation of molecules for drug discovery [164]. Molecules can be represented as graphs, where the nodes represent atoms and the edges represent chemical bonds. However, the space of possible molecules is vast, and generating all of them is computationally expensive. By using graph summarization techniques, researchers can generate a smaller set of representative molecules. In turn, this smaller dataset can be used to train a generative AI model to generate new molecules that have similar properties. Another application of graph summarization in generative AI is generating 3D shapes [165]. 3D shapes can be represented as graphs, where the vertices represent points in space, and the edges represent the connections between them. By summarizing the graphs of existing 3D shapes, a generative AI model can be trained to generate new shapes that are similar to the original ones in terms of their geometric properties.
Graph summarization can also be used to support personalized learning [166]. This can be a powerful tool for summarizing the progress of a learner, identifying gaps in their knowledge, and generating personalized learning paths for them. For example, suppose a student is learning a new subject and has completed several lessons. A graph summarization algorithm can be used to identify the essential concepts covered in those lessons and represent them in a condensed form. This condensed representation can then be used to generate a personalized learning path that is tailored to the student's current knowledge and learning goals, helping them to focus on the areas where they need the most support.
## IX Conclusion
New advancements in deep learning with multi-layer deep neural networks have made it possible to quickly and effectively produce a condensed representation of a large and complex graph. In this paper, we surveyed the technical trends and the most current research in graph summarization with GNNs. We provided an overview of different graph summarization techniques and categorized and described the current GNN-based approaches of graph summarization. We also discussed a new line of research focusing on reinforcement learning methods to evaluate and improve the quality of graph summaries.
To advance research in this field, we also outlined several frequently used benchmarking tools, including datasets, open-source codes, and techniques for generating synthetic datasets. In addition, we identified six promising directions for future research based on our findings from the survey. We strongly believe that using GNNs for graph summarization is not just a passing trend. Rather, it has a bright future in a wide range of applications across different domains. As a potential area of focus for our future work, we aim to explore the capabilities of generative models, such as variational autoencoders, generative adversarial networks, and graph reinforcement learning, to generate new graphs from their summarized representations.
|
2301.12356 | Exploiting High Performance Spiking Neural Networks with Efficient
Spiking Patterns | Spiking Neural Networks (SNNs) use discrete spike sequences to transmit
information, which significantly mimics the information transmission of the
brain. Although this binarized form of representation dramatically enhances the
energy efficiency and robustness of SNNs, it also leaves a large gap between
the performance of SNNs and Artificial Neural Networks based on real values.
There are many different spike patterns in the brain, and the dynamic synergy
of these spike patterns greatly enriches the representation capability.
Inspired by spike patterns in biological neurons, this paper introduces the
dynamic Burst pattern and designs the Leaky Integrate and Fire or Burst (LIFB)
neuron that can make a trade-off between short-time performance and dynamic
temporal performance from the perspective of network information capacity. LIFB
neuron exhibits three modes, resting, Regular spike, and Burst spike. The burst
density of the neuron can be adaptively adjusted, which significantly enriches
the characterization capability. We also propose a decoupling method that can
losslessly decouple LIFB neurons into equivalent LIF neurons, which
demonstrates that LIFB neurons can be efficiently implemented on neuromorphic
hardware. We conducted experiments on the static datasets CIFAR10, CIFAR100,
and ImageNet, which showed that we greatly improved the performance of the SNNs
while significantly reducing the network latency. We also conducted experiments
on neuromorphic datasets DVS-CIFAR10 and NCALTECH101 and showed that we
achieved state-of-the-art with a small network structure. | Guobin Shen, Dongcheng Zhao, Yi Zeng | 2023-01-29T04:22:07Z | http://arxiv.org/abs/2301.12356v1 | # Exploiting High Performance Spiking Neural Networks with Efficient Spiking Patterns
###### Abstract
Spiking Neural Networks (SNNs) use discrete spike sequences to transmit information, which significantly mimics the information transmission of the brain. Although this binarized form of representation dramatically enhances the energy efficiency and robustness of SNNs, it also leaves a large gap between the performance of SNNs and Artificial Neural Networks based on real values. There are many different spike patterns in the brain, and the dynamic synergy of these spike patterns greatly enriches the representation capability. Inspired by spike patterns in biological neurons, this paper introduces the dynamic Burst pattern and designs the Leaky Integrate and Fire or Burst (LIFB) neuron that can make a trade-off between short-time performance and dynamic temporal performance from the perspective of network information capacity. LIFB neuron exhibits three modes, resting, Regular spike, and Burst spike. The burst density of the neuron can be adaptively adjusted, which significantly enriches the characterization capability. We also propose a decoupling method that can losslessly decouple LIFB neurons into equivalent LIF neurons, which demonstrates that LIFB neurons can be efficiently implemented on neuromorphic hardware. We conducted experiments on the static datasets CIFAR10, CIFAR100, and ImageNet, which showed that we greatly improved the performance of the SNNs while significantly reducing the network latency. We also conducted experiments on neuromorphic datasets DVS-CIFAR10 and NCALTECH101 and showed that we achieved state-of-the-art with a small network structure.
## 1 Introduction
Spiking neural networks (SNNs) use discrete spike sequences to convey information, which is more consistent with how the brain processes information. Although the binarized sequences bring high energy efficiency [35] and robustness [53], they also reduce the representation ability of spiking neural networks. The non-differentiable nature of the spikes also makes it challenging to apply the backpropagation algorithm directly to the training of SNNs. Therefore, training high-performance SNNs has been a pressing problem for researchers.
In addition to conversion-based methods [2; 25], which convert well-trained deep neural networks into SNNs, the proposal of the surrogate gradient makes it possible to train high-performance SNNs directly [41; 51]. Researchers have tried to close the performance gap in several ways. Some researchers have borrowed mature techniques from deep learning and applied techniques such as
normalization [17; 55; 42; 45] and attention [56; 46; 47], among others, to the training of SNNs. This greatly improved the performance of SNNs but ignored their intrinsic characteristics. Some researchers have tried to enhance the learning ability of SNNs structurally by borrowing the more complex connection patterns found in the brain. BackEISNN [54] took inspiration from the autapses in the brain and introduced self-feedback connections to regulate the precision of the spikes. LISNN [5] modeled the lateral interactions between neurons and greatly improved performance and robustness. These methods have improved the learning ability of SNNs to some extent, but they still fall far behind artificial neural networks (ANNs).
Spiking neurons have rich spatio-temporal dynamics and are highly capable of information processing. [1] found that it takes a multilayer neural network to simulate the complexity of a single biological neuron. Realizing the computational power of spiking neurons, researchers tried to build more adaptive neurons. [3] introduced the neural oscillation and spike-phase information to construct a resonate spiking neuron. [9; 52] introduced the learnable time constant of the spiking neurons to boost the performance of SNNs on different challenging tasks. [48; 32] introduced an adaptive threshold mechanism to control the firing of spiking neurons. These works have greatly enriched the dynamics of spiking neurons, but the binarized representation creates a performance gap between them and the float-based ANNs.
Other researchers tried to design a better surrogate gradient function to reduce the information mismatch caused by inaccurate gradients in backpropagation. [4] proposed a gradual surrogate gradient learning algorithm to ensure the precision as well as the effectiveness of the gradient during backpropagation. [48] proposed activity-regularizing surrogate gradients, which exceeded the state-of-the-art performance for SNNs on the challenging temporal benchmarks. [24] introduced the adaptively evolved Differentiable Spike functions to find the optimal shape and smoothness for gradient estimation based on their finite difference gradients. However, the binarized information transfer method still limits the representation ability of SNN.
As a result, some researchers try to enrich the representation ability of SNNs. [49; 38] introduced negative spikes to cooperate with the regular positive spikes. However, the behavior of releasing negative spikes below the threshold is not consistent with the brain. [43] proposed the leaky integrate and analog fire neuron model to transmit analog values among neurons, bringing performance improvements but significantly increasing energy consumption. The brain does not maintain a single spiking pattern for the same input; the coupling of different spiking patterns greatly enriches the representation ability of spiking neurons, and these patterns adaptively cooperate to complete different cognitive functions. As the most commonly observed pattern across different brain regions, bursts might improve selective communication between neurons [15], and the number of spikes in high-frequency bursts is highly robust to noise [16]. Although some works exist that use burst spikes [29; 23], their burst intensities are fixed and do not change dynamically according to the input.
In this paper, we introduce the modeling of Leaky Integrate and Fire or Burst (LIFB) neurons with three spiking patterns: resting, regular spike, and burst spike. Experiments show that our algorithm not only dramatically improves the performance of the current SNNs, but also significantly reduces the latency and energy consumption. Our contributions are summarized as follows:
* We propose the Leaky Integrate and Fire or Burst neuron, as shown in Fig. 1, which greatly improves the representation ability of the SNNs.
* The burst intensity of our LIFB neuron is learnable and can be dynamically adjusted according to the input.
* We conduct experiments on the static image datasets CIFAR10, CIFAR100, and ImageNet and the neuromorphic datasets DVS-CIFAR10 and NCALTECH101 to verify the superiority of our model. We achieve state-of-the-art performance on these datasets and maintain excellent performance using only a few simulation steps.
## 2 Our Method
### Leaky Integrate and Fire model
The spiking neuron is the basic computational unit of SNNs. Neuroscientists have established various mathematical models such as the Hodgkin-Huxley spiking neuron (H-H) [12], the Izhikevich spiking
neuron [13], and the Leaky Integrate and Fire spiking neuron (LIF) [6] to describe the dynamic characteristics of biological neurons. More complex mathematical models can better describe the computational process of biological neurons, but they also require more computational resources, and their overly complex properties are challenging to apply to the modeling of large-scale SNNs. As the most common spiking neuron model, the LIF neuron model is widely used in deep SNNs.
\[\tau\frac{\mathrm{d}v}{\mathrm{d}t}=-v+I, \tag{1}\] \[\qquad\text{if }v>v_{th}\text{, then }v\gets v_{rst}\] \[s=H(v-v_{th}) \tag{2}\]
In the Eq. 1, \(v\) is the membrane potential, \(I\) is the input current, and \(v_{th}\) is the threshold. When the neuron reaches the threshold, it will deliver a spike, and the membrane potential is reset to the resting potential \(v_{rst}\). \(\tau\) is the membrane time constant, which controls the rate of decay of the membrane potential over time. \(s\) denotes the neuronal spikes, \(H(\cdot)\) denotes the heaviside step function.
The LIF model can be regarded as an integrator, capable of firing regular spikes at a constant rate and adjusting the firing rate according to the input current. To facilitate the calculation, we obtain the discrete form of Eq. 1:
\[v(t+1)=v(t)+\frac{1}{\tau}(-v(t)+I(t)) \tag{3}\]
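As a concrete illustration, the discrete LIF update of Eq. 3, together with the threshold-and-reset rule of Eqs. 1-2, can be simulated in a few lines. The sketch below is our own minimal reconstruction, not the authors' code, and the constants \(\tau=2\), \(v_{th}=1\), \(v_{rst}=0\) are illustrative assumptions:

```python
import numpy as np

def lif_step(v, I, tau=2.0, v_th=1.0, v_rst=0.0):
    # leaky integration (Eq. 3), then threshold and reset (Eqs. 1-2)
    v = v + (1.0 / tau) * (-v + I)
    s = (v > v_th).astype(float)       # Heaviside step: spike where v exceeds v_th
    v = np.where(s > 0, v_rst, v)      # reset the membrane potential of spiking neurons
    return v, s

# drive 5 neurons with a constant input current for 10 steps
v = np.zeros(5)
for _ in range(10):
    v, s = lif_step(v, np.full(5, 0.8))
```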
Although there are many improvements for spiking neurons, they are limited to LIF neurons. The over-simplified computational characteristics of LIF neurons make it only possible to characterize regular spikes and cannot describe complex spiking patterns. There is a big gap between the LIF model and real biological neurons.
### Information Capacity for SNNs
A spiking neuron converts its continuous membrane potential into discrete spikes \(s\in S=\{0,1\}\), transmitting information through a spike train of \(T\) steps. This way of processing information differs from that of ANNs and brings differences in performance and resource costs. This paper analyzes the effect of simulation length and other properties on SNNs, and explores the relationship between SNNs and ANNs from the perspective of information capacity.
Consider a neuron whose spike train takes values in the \(t\)-dimensional Boolean cube \(S=\{0,1\}^{t}\). The effect of this neuron on a postsynaptic neuron at any step can then be expressed as a (linear) threshold function \(f:S\rightarrow\{0,1\}\) on \(S\), for which there exist \(a\in\mathbb{R}^{t}\) and \(b\in\mathbb{R}\) such that:
\[f(x)=H(\langle a,s\rangle+b),\;s\in S \tag{4}\]
Figure 1: Illustration of the spiking neural network with our Leaky Integrate and Fire or Burst Neuron. After the network receives the input, the LIFB neurons show three spiking states: regular spike, burst spike, and resting, which significantly improve the representation ability of SNNs.
The set of all threshold functions on \(S\) is denoted by \(T(S)\). In this way, the set of threshold functions can be used to define the capacity of the spike train:
\[C(S)=\log_{2}|T(S)| \tag{5}\]
As shown in Eq. 5, the capacity of a spike train is the binary logarithm of the number of all threshold functions on \(S\). Considering the binary assignment associated with each partition, \(|T(S)|\) equals the number of binary assignments of the partitions of \(S\) induced by hyperplanes in \(\mathbb{R}^{t}\).
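For small \(t\), \(|T(S)|\) can be estimated directly by sampling random hyperplanes and collecting the distinct labelings they induce on \(S\); the sketch below is our own illustration, not part of the paper. For \(t=2\) it recovers the \(14\) linearly separable dichotomies of the Boolean square (only XOR and its complement are missing), giving \(C(S)=\log_{2}14\approx 3.81\):

```python
import itertools
import numpy as np

t = 2
S = np.array(list(itertools.product([0, 1], repeat=t)), dtype=float)

patterns = set()
rng = np.random.default_rng(0)
for _ in range(100_000):
    a = rng.normal(size=t)                              # random hyperplane normal
    b = rng.normal()                                    # random bias
    patterns.add(tuple((S @ a + b > 0).astype(int)))    # f(s) = H(<a, s> + b)

print(len(patterns), np.log2(len(patterns)))            # expected: 14  3.807...
```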
The number \(L(m,n)\) of connected regions created by \(m\) hyperplanes through the origin in \(\mathbb{R}^{n}\) is satisfied [40]:
\[L(m,n)\leq 2\sum_{k=0}^{n-1}\binom{m-1}{k} \tag{6}\]
According to Eq. 6, the upper bound of \(|T(S)|\) can be expressed as \(2\sum_{k=0}^{t-1}\binom{|S|-1}{k}\).
\[C(S) =\log_{2}|T(S)| \tag{7}\] \[\leq\log_{2}\left(2\sum_{k=0}^{t-1}\binom{|S|-1}{k}\right) \tag{8}\] \[\leq\log_{2}\left(2\left(\frac{e\,|S|}{t}\right)^{t}\right) \tag{9}\] \[=1+t\log_{2}\left(\frac{e\,|S|}{t}\right) \tag{10}\]
Eq. 9 can be deduced from elementary bounds on the binomial sum [39].
For a spiking neuron of simulation length \(t\), \(|S|=2^{t}\), its information capacity can be expressed as:
\[C(S)\leq 1+t^{2}-t\log_{2}(\frac{1}{e}t) \tag{11}\]
ANNs use floating-point numbers to convey information. The activation value, which characterizes the output of a neuron, can also be thought of as a spike train of bits. Therefore, SNNs and ANNs with the same length have the same information capacity. In terms of information capacity, SNNs and ANNs differ only in the way they organize their spike trains. ANNs compress the spike trains into a floating-point number, while SNNs treat the spike trains as neural activity processes with temporal relationships.
By compressing the spike train into floating-point numbers, ANNs achieve higher training and inference efficiency on general-purpose hardware and can be directly trained by gradient descent, which yields better performance. In contrast, establishing the temporal relationship of spikes through spiking neurons enables better temporal data processing with more complex dynamic features. However, this also makes SNNs challenging to optimize by gradient descent, resulting in performance gaps. This approach converts Multiply-Accumulate operations (MACs) into Accumulate operations (ACs) and has better energy efficiency on neuromorphic hardware.
It is a challenging problem to organize spike sequences efficiently to achieve higher performance within a limited information capacity. Inspired by neuroscience, we observe that current neuronal models capture only a limited subset of biological neuronal features. Biological neurons can exhibit diverse spiking patterns, which are difficult to represent with very short binary spike sequences. Therefore, guided by a more biologically plausible neuron model and the perspective of information capacity, we design an efficient neuron model by appropriately organizing the spike sequences.
### Leaky Integrate and Fire or Burst model
Bursting is a vital spiking pattern in neurons: it contributes to gamma-frequency oscillations in the brain, helps reduce neuronal noise [26], and facilitates selective communication between neurons [15]. The Leaky Integrate and Fire or Burst (LIFB) model [37] is an extension of the LIF model, which retains the accumulation of membrane potential with the input current while introducing a calcium T-current parameter, bringing the new property of the burst pattern.
\[\tau\frac{\mathrm{d}v}{\mathrm{d}t}=-v+I+gH(v-v_{h})h(v_{T}-v), \tag{12}\] \[\text{if }v>v_{th}\text{, then }v\gets v_{rst}\] \[s=H(v-v_{th}) \tag{13}\]
\(H(\cdot)\) is the Heaviside step function. The constants \(g\), \(v_{T}\), and \(v_{h}\) and the function \(h(\cdot)\) describe the deactivation of the calcium T-current, which changes according to the membrane potential.
\[\frac{\mathrm{d}h}{\mathrm{d}t}=\begin{cases}\dfrac{-h}{\tau^{-}},\text{ if }v>v_{h}\\ \dfrac{h}{\tau^{+}},\text{ if }v<v_{h}\end{cases} \tag{14}\]
Eq. 12 and Eq. 14 describe how the T-current affects the behavior of the neuron. If the membrane potential is greater than \(v_{T}\) (generally \(v_{T}>v_{rst}\)), the neuron becomes sensitive and enters a burst mode. In turn, Eq. 14 regulates the neuronal spiking pattern: if the neuron is bursting, it becomes less sensitive to the T-current and stops firing burst spikes.
The original LIFB model can better describe the spiking pattern of neurons, but it also brings about three times the computational costs of the LIF model [14]. Meanwhile, since the current directly trained SNNs often have a small simulation step (4 or even shorter), it is not easy to show the difference between regular spikes and burst spikes, which also limits the application of the LIFB model to large-scale SNNs. In order to introduce burst mode in a small simulation step, we propose a simplified LIFB model.
\[\tau\frac{\mathrm{d}v}{\mathrm{d}t}=-v+I, \tag{15}\] \[\text{if }v>v_{th}\text{, then }v\gets v_{rst}\] \[s=H(v-v_{th})+(\kappa-1)H(v-v_{h}) \tag{16}\]
To represent the effect of burst spikes under a coarse time discretization, we consider the macroscopic effect of a burst on the neuron over a short time window, superimposing the effect of the T-current on the neuron output. To implement burst spiking efficiently, we use the parameter \(\kappa\) in place of \(gH(v-v_{h})h(v_{T}-v)\) in Eq. 12. This allows the intensity of bursts to be adjusted adaptively during training and ensures an efficient hardware implementation of the simplified LIFB model.
### How to Represent the Burst State
Biological neurons can transmit a large amount of information in a short period by burst spikes. The introduction of the burst spike mechanism dramatically expands the representation ability of LIF neurons but also brings additional computational overhead. Therefore, determining the trade-off between computational overhead and the performance of LIFB neurons is an important problem.
We consider the problem of representing burst spikes from the perspective of the information capacity of neural networks. A sequence of spikes with bursts of length \(t\) can be represented as \(s=\{0,1,\kappa_{1},\kappa_{2}\ldots,\kappa_{n-2}\}^{t}\), where \(n\) is the number of possible states at each time step. According to Eq. 10, for such a sequence of spikes with burst states, the upper bound of its information capacity is:
\[C(S)\leq 1+t^{2}\log_{2}(n)-t\log_{2}(\frac{1}{e}t) \tag{17}\]
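The bound of Eq. 17, which reduces to Eq. 11 when \(n=2\), is easy to tabulate; the snippet below is our own sketch. Note that the bound itself is monotone in \(n\), so the preference for \(n=3\) discussed next is an empirical trade-off rather than a consequence of the bound alone:

```python
import numpy as np

def capacity_bound(t, n):
    # Eq. 17: C(S) <= 1 + t^2 * log2(n) - t * log2(t / e)
    return 1 + t**2 * np.log2(n) - t * np.log2(t / np.e)

for t in (1, 2, 4, 6):
    print(t, [round(capacity_bound(t, n), 1) for n in (2, 3, 4)])
```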
As shown in Fig. 2(a), the neuron with only two states (the LIF neuron) has the lowest performance. The introduction of the burst mechanism tremendously improves the neuron's performance, and a neuron that can represent multiple states at one moment performs better still. However, as the simulation time increases, too many spike states degrade the overall performance of the neuron, and \(n=3\) is a trade-off between the short-time performance of the neuron and the overall performance with temporal information. Therefore, we will use \(S=\{0,1,\kappa\}\) to define the output states of the LIFB neuron.
It is worth mentioning that, to introduce very few parameters while increasing the neurons' support for burst spiking, we consider a learnable, channel-shared burst intensity \(\kappa\). All neurons of the same channel use the same \(\kappa\), which adds a negligible number of parameters compared to the rest of the network. To allow the burst intensity to be optimized, \(\kappa\) is adjusted during the training process by gradient descent along with the other parameters. We use the momentum method to ensure the stability of the burst intensity:
\[\Delta\kappa:=\mu\Delta\kappa+\epsilon\frac{\partial\mathcal{E}}{\partial\kappa} \tag{18}\]
In Eq. 18, \(\frac{\partial\mathcal{E}}{\partial\kappa}\) denotes the gradient propagated from the deep layer. \(\mu\) is the momentum, and \(\epsilon\) is the learning rate. We do not restrict the range of \(\kappa\), and use \(\kappa=1\) as the initial value.
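A minimal PyTorch sketch of the simplified LIFB dynamics (Eqs. 15-16) with a learnable, channel-shared \(\kappa\) might look as follows. This is our hypothetical reimplementation, not the authors' code: the class name, the threshold values, and the omission of a surrogate gradient for the Heaviside steps are all our assumptions.

```python
import torch
import torch.nn as nn

class SimplifiedLIFB(nn.Module):
    """Simplified LIFB neuron (Eqs. 15-16) with channel-shared burst intensity."""

    def __init__(self, channels, tau=2.0, v_th=1.0, v_h=2.0, v_rst=0.0):
        super().__init__()
        self.tau, self.v_th, self.v_h, self.v_rst = tau, v_th, v_h, v_rst
        # one learnable kappa per channel, initialized to 1 as in the paper
        self.kappa = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, I, v):                         # I, v: (batch, channels, H, W)
        v = v + (1.0 / self.tau) * (-v + I)          # leaky integration
        regular = (v > self.v_th).float()            # H(v - v_th)
        burst = (v > self.v_h).float()               # H(v - v_h)
        s = regular + (self.kappa - 1.0) * burst     # outputs in {0, 1, kappa}
        v = torch.where(regular > 0, torch.full_like(v, self.v_rst), v)
        return s, v
```

Registering `self.kappa` as an ordinary parameter and optimizing it with `torch.optim.SGD(..., momentum=mu)` realizes the momentum update of Eq. 18 without any extra machinery.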
#### 2.4.1 Decoupling of LIFB Neurons
We present a method that can efficiently implement LIFB neurons on neuromorphic hardware and is fully compatible with existing hardware without any modification. The LIFB neuron exhibits two distinct nonzero spiking patterns, so it can be decoupled into two neurons with the same input current, as shown in Fig. 2(b). This makes it easy to deploy LIFB neurons on hardware designed for LIF neurons while achieving better performance.
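The decoupling is straightforward to check numerically: for a single step (ignoring reset dynamics), the LIFB output of Eq. 16 equals a weighted sum of two LIF neurons that share the same input current but have thresholds \(v_{th}\) and \(v_{h}\). The check below is our own; the thresholds and \(\kappa\) are illustrative:

```python
import torch

v_th, v_h, kappa = 1.0, 2.0, 1.7
v = torch.linspace(-1.0, 3.0, 9)                   # sample membrane potentials

lifb = (v > v_th).float() + (kappa - 1) * (v > v_h).float()   # Eq. 16
lif_a = (v > v_th).float()                         # LIF neuron with threshold v_th
lif_b = (v > v_h).float()                          # LIF neuron with threshold v_h
decoupled = lif_a + (kappa - 1) * lif_b            # recombine the two outputs

assert torch.allclose(lifb, decoupled)
```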
## 3 Experiment
In this section, we evaluate the performance of the proposed LIFB Neuron on the image datasets CIFAR10 [19], CIFAR100 [44] and ImageNet [34] and the neuromorphic datasets DVS-CIFAR10 [21] and NCALTECH101 [28] with BrainCog [50]. The model structures used in this paper include VGG16 [36], ResNet20 [11], ResNet19 [55], ResNet18-sew [8], and SNN6 (64C3-128C3-AP2-256C3-AP2-512C3-AP2-512C3-AP2-FC).

Figure 2: (a) Relationship between information capacity and performance of number of states on CIFAR10. (b) A LIFB neuron can be decoupled into two LIF neurons. From top to bottom: LIF neuron, LIFB neuron, and LIF neuron decoupled by LIFB neuron.
### Comparison with Other Methods
To verify the effectiveness of our algorithm, we compare it with several of the current best SNNs, including conversion-based and backpropagation-based methods. The results for the static image and neuromorphic data classification tasks are listed in Tab. 1 and Tab. 2, respectively. The results for the static image datasets are reported at simulation lengths of 1, 2, 4, and 6.
For CIFAR10 and CIFAR100, LIFB achieves higher accuracy than previous work at a simulation length of \(1\). In particular, using the same network structure and simulation step length, our LIFB
\begin{table}
\begin{tabular}{c c c c c c} \hline
**Dataset** & **Model** & **Methods** & **Architecture** & **Simulation Length** & **Accuracy** \\ \hline \hline \multirow{8}{*}{CIFAR10} & Bu et al. [2] & ANN-SNN Conversion & ResNet-18 & 4 & 90.43 \\ & Rathi et al. [33] & Hybrid training & ResNet-20 & 250 & 92.22 \\ & Rathi \& Roy [31] & Diet-SNN & ResNet-20 & 10 & 92.54 \\ & Wu et al. [41] & STBP & CIFARNet & 12 & 89.83 \\ & Wu et al. [42] & STBP NeuNorm & CIFARNet & 12 & 90.53 \\ & Zhang \& Li [51] & TSSL-BP & CIFARNet & 5 & 91.41 \\ & Shen et al. [35] & STBP & 7-layer-CNN & 8 & 92.15 \\ & Kim et al. [18] & STBP & NAS & 5 & 92.73 \\ & Na et al. [27] & STBP & NAS & 16 & 93.15 \\ & Zheng et al. [55] & STBP-tdBN & ResNet-19 & 6 & 93.16 \\ & Deng et al. [7] & TET & ResNet-19 & 6 & 94.50 \\ & Guo et al. [10] & Rec-Dis & ResNet-19 & 6 & 95.55 \\ \cline{2-6} & \multirow{4}{*}{**Our Method**} & LIFB & ResNet-19 & 1 & **95.94**\(\pm\)0.09 \\ & & LIFB & ResNet-19 & 2 & **96.01**\(\pm\)0.07 \\ & & LIFB & ResNet-19 & 4 & **96.21**\(\pm\)0.10 \\ & & LIFB & ResNet-19 & 6 & **96.32**\(\pm\)0.06 \\ \hline \multirow{8}{*}{CIFAR100} & Bu et al. [2] & ANN-SNN Conversion & ResNet-18 & 8 & 75.67 \\ & Rathi et al. [33] & Hybrid training & VGG-11 & 125 & 67.87 \\ & Rathi \& Roy [31] & Diet-SNN & ResNet-20 & 5 & 64.07 \\ & Shen et al. [35] & STBP & ResNet34 & 8 & 69.32 \\ & Na et al. [27] & STBP & NAS & 16 & 69.16 \\ & Kim et al. [18] & STBP & NAS & 5 & 73.04 \\ & Deng et al. [7] & TET & ResNet-19 & 6 & 74.72 \\ & Guo et al. [10] & Rec-Dis & ResNet-19 & 4 & 74.10 \\ \cline{2-6} & \multirow{4}{*}{**Our Method**} & LIFB & ResNet-19 & 1 & **77.86**\(\pm\)0.43 \\ & & LIFB & ResNet-19 & 2 & **78.04**\(\pm\)0.37 \\ & & LIFB & ResNet-19 & 4 & **78.12**\(\pm\)0.51 \\ & & LIFB & ResNet-19 & 6 & **78.31**\(\pm\)0.58 \\ \hline \multirow{8}{*}{ImageNet} & Bu et al. [2] & ANN-SNN Conversion & ResNet-34 & 16 & 59.35 \\ & Rathi et al. [33] & Hybrid training & ResNet-34 & 250 & 61.48 \\ & Zheng et al. [55] & STBP-tdBN & Spiking-ResNet-34 & 6 & 63.72 \\ & Deng et al. [7] & TET & SEW-ResNet-34 & 4 & 68.00 \\ & Fang et al. [9] & SEW ResNet & SEW-ResNet-152 & 4 & 69.26 \\ \cline{2-6} & \multirow{4}{*}{**Our Method**} & LIFB & SEW-ResNet-18 & 1 & **65.60** \\ & & LIFB & SEW-ResNet-34 & 1 & **69.34** \\ & & LIFB & SEW-ResNet-34 & 4 & **70.02** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Compare with existing works on static image datasets.
\begin{table}
\begin{tabular}{c c c c c c} \hline
**Dataset** & **Model** & **Methods** & **Architecture** & **Simulation Length** & **Accuracy** \\ \hline \hline \multirow{8}{*}{DVS-CIFAR10} & Zheng et al. [55] & STBP-tdBN & ResNet-19 & 10 & 67.8 \\ & Kugele et al. [20] & Streaming Rollout & DenseNet & 10 & 66.8 \\ & Wu et al. [43] & Conv3D & LIAF-Net & 10 & 71.70 \\ & Wu et al. [43] & LIAF & LIAF-Net & 10 & 70.40 \\ & Na et al. [27] & STBP & NAS & 16 & 72.50 \\ & Shen et al. [35] & STBP & 5-layer-CNN & 16 & 78.95 \\ & Guo et al. [10] & Rec-Dis & ResNet-19 & 10 & 72.42 \\ & Deng et al. [7] & TET & VGGSNN & 10 & 83.17 \\ & **Our Method** & LIFB & SNN7 & 10 & **83.83**\(\pm\)0.70 \\ \hline \multirow{8}{*}{N-Caltech101} & Kugele et al. [20] & STBP & VGG11 & 20 & 55.0 \\ & Ramesh et al. [30] & N/A & N/A & N/A & 66.8 \\ \cline{1-1} & **Our Method** & LIFB & SNN7 & 10 & **81.74**\(\pm\)0.81 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Compare with existing works on neuromorphic datasets.
also has a significant advantage, improving \(0.59\%\) and \(3.31\%\) on CIFAR10 and CIFAR100 compared with Rec-Dis [10].
For the more challenging ImageNet dataset, we achieve \(65.60\%\) accuracy using only the lightweight ResNet18 structure. Moreover, we achieve better performance than SEW-ResNet152 [9] when using the SEW-ResNet34 structure at a simulation length of only \(1\). Our LIFB achieves a \(2.02\%\) improvement using the same structure and simulation length as previous work.
For the neuromorphic dataset DVS-CIFAR10, our LIFB achieves state-of-the-art performance by using only the SNN7 structure with less than half the parameters of VGGSNN [7]. For the N-Caltech101 dataset, we achieved \(83.44\%\) top-1 accuracy, a performance far beyond previous work.
To further illustrate the advantages of our LIFB, we show a comparison with previous methods at different simulation lengths. As shown in Fig. 3, we compared LIFB with directly trained SNNs and converted SNNs. Our LIFB shows a significant advantage at shorter simulation lengths due to its stronger representation ability.
### Ablation Studies
Compared with other advanced methods, LIFB allows models to exhibit better performance and achieve better top-1 accuracy on classification tasks. We conducted ablation studies to further verify the contribution of LIFB for different network structures and simulation lengths.
As shown in Tab. 3, LIFB maintained its advantage over LIF neurons for all models and all simulation lengths. LIFB at a simulation length of \(t=1\) even achieves higher accuracy than LIF neurons with longer simulation times. Fig. 4 shows a comparison of the impact of neuron type in terms of the
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Architecture} & \multirow{2}{*}{Neuron} & \multicolumn{5}{c}{CIFAR10} & \multicolumn{5}{c}{CIFAR100} \\ \cline{3-10} & & 1 & 2 & 4 & 6 & 1 & 2 & 4 & 6 \\ \hline \multirow{2}{*}{VGG16} & LIF & 92.02 & 93.41 & 94.13 & 94.08 & 66.40 & 69.18 & 71.15 & 71.99 \\ & LIFB & 94.53 & 95.02 & 95.28 & 95.36 & 73.00 & 73.90 & 74.34 & 75.02 \\ \hline \multirow{2}{*}{ResNet19} & LIF & 93.76 & 94.44 & 95.07 & 95.51 & 73.70 & 74.34 & 75.01 & 75.62 \\ & LIFB & 95.94 & 96.01 & 96.21 & 96.32 & 77.86 & 78.04 & 78.12 & 78.31 \\ \hline \multirow{2}{*}{ResNet20} & LIF & 83.65 & 86.47 & 88.09 & 89.16 & 49.93 & 53.90 & 57.40 & 58.17 \\ & LIFB & 89.72 & 90.88 & 91.30 & 91.65 & 60.63 & 62.95 & 63.28 & 64.33 \\ \hline \multirow{2}{*}{SEW-ResNet18} & LIF & 94.27 & 95.10 & 95.51 & 95.60 & 72.59 & 74.16 & 75.68 & 76.52 \\ & LIFB & 95.87 & 96.12 & 96.39 & 96.42 & 75.88 & 77.38 & 78.41 & 78.67 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation studies on different network structures and simulation lengths.
Figure 3: Relationship between simulation length and accuracy on the CIFAR10/100 dataset.
information capacity of neurons. It can be seen that our LIFB still maintains a higher accuracy than LIF with the same information capacity.
In the original LIFB, the switching of neuronal spiking patterns is achieved by adjusting the conductance of the calcium T-current. However, this slow adjustment is hardly effective at short simulation lengths and is also difficult to apply to large-scale SNNs due to its high computational cost. Therefore, we propose a simplified LIFB neuron. We directly apply the effect of the T-current conductance to the neuron output and optimize the burst intensity through the learnable parameter \(\kappa\). This channel-shared burst intensity ensures flexible spiking characterization with very few additional parameters. Fig. 5 shows the distribution of burst intensity across the layers of a SEW-ResNet18 trained on CIFAR10.
We conducted an ablation study on the learnable burst intensity, as shown in Tab. 4. We compared the top-1 accuracy of neurons with fixed burst intensity and neurons with learnable burst intensity on the CIFAR10 dataset.
The learnable channel-shared burst intensity enables neurons to learn an appropriate burst intensity from the data, and neurons can fire bursts of arbitrary strength relative to regular spikes, which is more biologically plausible and enhances the performance of SNNs.
### Comparison with Other Triple-value Neurons
By considering the different spiking patterns of neurons, we design LIFB neurons that can exhibit three neuronal states: rest, regular spike, and burst spike. Our LIFB neuron expands the representation ability of neurons, is more biologically plausible, and exhibits better energy efficiency and performance than binary-value neurons.
In addition to our LIFB neuron, there are many works that enable neurons to fire positive and negative spikes (PosNeg) to achieve a triple-value representation [49, 38]. Here we implement this approach for comparison.
We compare our LIFB neuron with the PosNeg at different simulation lengths on CIFAR10 dataset as shown in Tab. 5. Our LIFB neuron shows better performance at different simulation lengths.
\begin{table}
\begin{tabular}{c c c c c c} \hline \(\kappa\) & LIF & 0.5 & 1 & 1.5 & 2 & learnable \\ \hline & 92.02 & 92.13 & 94.17 & 94.11 & 93.52 & **94.53** \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of fixed/learnable burst intensity of VGG16 on CIFAR10.
Figure 4: Effect of neuron type on information capacity and performance.
### Loss Landscape around Local Minima
We further show the 2D loss landscapes of SNNs with different types of neurons around their local minima [22] to verify the effect of LIFB neurons on generalization ability. As shown in Fig. 6, we show the local 2D landscape of the VGG16 model on CIFAR10/100 using different neurons. It can be seen that the LIFB neuron finds flatter local minima with smaller losses. This further demonstrates the ability of the LIFB neuron to enhance the representation and generalization of the model.
### Comparison of LIFB with decoupled LIF
The LIFB neuron achieves much better performance than the LIF neuron by modeling biological neurons more faithfully, but this also entails additional computational overhead. Although the above experiments have demonstrated that LIFB neurons at short simulation lengths outperform LIF neurons with longer simulation times, it remains to rule out that the gain simply comes from the extra computation. To further illustrate that the performance gain of LIFB neurons comes from more reasonable modeling rather than from more computational resources, we performed a fairer comparison.
As discussed in Sec. 2.4.1, a LIFB neuron can be decoupled into two LIF neurons with the same input current and different threshold voltages. We, therefore, compared the LIFB Neuron with its equivalent decoupled LIF neuron trained from scratch, as shown in Tab. 6. The results directly indicate that
Figure 5: Distribution of burst intensity of well-trained SEW-ResNet18 on CIFAR10.
\begin{table}
\begin{tabular}{l c c c c} \hline Neuron & 1 & 2 & 4 & 6 \\ \hline PosNeg & 93.87 & 94.16 & 94.33 & 94.43 \\
**LIFB** & 94.53 & 94.91 & 95.17 & 95.15 \\ \hline \end{tabular}
\end{table}
Table 5: Comparison of LIFB neuron with PosNeg neuron on CIFAR10.
most of the performance gains from LIFB neurons come from well-formulated spiking pattern design rather than from the higher computational costs.
### Visualization of Neural Activity
The neural activity of VGG7 with LIFB neurons on the CIFAR10 dataset is shown for different layers in Fig. 7. We randomly selected 50 neurons in each layer, with cyan indicating that a neuron is in the regular spike state and dark cyan indicating that it is in the burst spike state. Although neurons are rarely in burst mode, this biologically plausible neuron model is essential to the performance of SNNs.
## 4 Conclusion
Inspired by the multi-spike delivery of the brain, we design an efficient Leaky Integrate and Fire or Burst neuron model with triple-valued output from the perspective of network information capacity, in which the burst density can be adaptively adjusted. This synergistic multi-spike firing scheme greatly enriches the representation capability of SNNs. Experimental results on the static datasets CIFAR10, CIFAR100, and ImageNet show that we need only one simulation step to achieve very high accuracy, which significantly reduces the latency of SNNs. We also achieve state-of-the-art performance on the neuromorphic datasets DVS-CIFAR10 and NCALTECH101.
\begin{table}
\begin{tabular}{c c c c c} \hline Neuron & 1 & 2 & 4 & 6 \\ \hline LIF & 92.02 & 93.41 & 94.13 & 94.08 \\ Scratch & 93.78 & 94.36 & 94.83 & 95.02 \\
**LIFB** & 94.53 & 95.02 & 95.28 & 95.36 \\ \hline \end{tabular}
\end{table}
Table 6: Comparison of LIFB neurons and equivalent decoupled LIF neurons trained from scratch.
Figure 6: Comparison of loss landscapes of different neurons on VGG16. |
2308.01621 | A Novel Convolutional Neural Network Architecture with a Continuous
Symmetry | This paper introduces a new Convolutional Neural Network (ConvNet)
architecture inspired by a class of partial differential equations (PDEs)
called quasi-linear hyperbolic systems. With comparable performance on the
image classification task, it allows for the modification of the weights via a
continuous group of symmetry. This is a significant shift from traditional
models where the architecture and weights are essentially fixed. We wish to
promote the (internal) symmetry as a new desirable property for a neural
network, and to draw attention to the PDE perspective in analyzing and
interpreting ConvNets in the broader Deep Learning community. | Yao Liu, Hang Shao, Bing Bai | 2023-08-03T08:50:48Z | http://arxiv.org/abs/2308.01621v4 | # A Novel Convolutional Neural Network Architecture with a Continuous Symmetry
###### Abstract
This paper introduces a new Convolutional Neural Network (ConvNet) architecture inspired by a class of partial differential equations (PDEs) called quasi-linear hyperbolic systems. With comparable performance on the image classification task, it allows for the modification of the weights via a continuous group of symmetry. This is a significant shift from traditional models where the architecture and weights are essentially fixed. We wish to promote the (internal) symmetry as a new desirable property for a neural network, and to draw attention to the PDE perspective in analyzing and interpreting ConvNets in the broader Deep Learning community.
Keywords:Convolutional Neural Networks Partial Differential Equations Continuous Symmetry
## 1 Introduction
With the tremendous success of Deep Learning in diverse fields from computer vision [11] to natural language processing [23], the model invariably acts as a black box of numerical computation [2], with the architecture and the weights largely fixed, i.e., they cannot be modified without changing the output (for a fixed input), except by permuting neurons or units from the same layer. One would say that the **symmetry** of the neural network, the set of transformations that do not affect the model's prediction on any input, is
\[\text{Sym}(\text{model})=\prod_{i}S_{n_{i}}\,,\]
the product of symmetric groups on \(n_{i}\) "letters," where \(n_{i}\) is the number of _interchangeable_ neurons in the \(i\)-th layer.
Although quite large as a group, it does not permit us to modify the model in any meaningful way. In the case of Convolutional Neural Networks (ConvNets), the channels are essentially fixed and frozen in place, due to the presence of coordinate-wise activation functions [26] (such as ReLU), which arguably build certain semantic contents in the channels [17], thus mixing them would destroy the model. The _nonlinear_ activation unit is generally thought to be an essential component for the neural network to fit arbitrary _nonlinear_ functions.
With inspiration and guidance from partial differential equations (PDEs), specifically _first-order quasi-linear hyperbolic systems_[1], we introduce a new architecture of ConvNet with a different type of nonlinearity, which allows us to remove most of the activation functions without degrading performance. As a result, the new architecture admits a _continuous_ group of symmetry (i.e., a Lie group, as opposed to a discrete group) that allows mixing of the channels; in one version, it's the full _general linear_ (GL) _group_, the set of all invertible \(n_{i}\times n_{i}\) matrices:
\[\mathrm{Sym}(\mathrm{model})=\prod_{i}GL(n_{i},\mathbb{R})\,.\]
With a judicious choice of such transformations, we may alter the weights so that the connections become more sparse, resulting in a smaller model (a kind of lossless pruning). Since the group is continuous, one might use the method of gradient descent to search for it. In addition, it may also lead to a better understanding of the inner workings of the neural network, much like how matrix diagonalization leads to the decoupling of a system of (linear) differential equations into different "modes", which are easier to interpret.
We primarily present the simplest version of our model based on ResNet50, illustrated in Fig. 1 alongside the corresponding PDE. The nonlinearity is at the element-wise multiplication of the two branches (3x3 and 1x1 convs), and we apply activation functions _only_ at the end of 4 of the 16 blocks. See §5.1 for details.
Figure 1: Schematic of a single block of our ConvNet architecture based on Eq. (3), to replace the bottleneck block of ResNet50. The trapezoids represent the increase/decrease in the number of channels. The corresponding components of the PDE are illustrated. (Best viewed in color.)
This is a preliminary report on our new architecture, and the relevant parts of partial differential equations behind it. It is our hope that the research community builds upon and analyzes the properties of this new architecture, and to take the PDE perspective seriously in designing, analyzing, or simply describing _all_ aspects of ConvNets. Moreover, given that the Transformer architecture [23] also involves a similar kind of nonlinearity apart from softmax and activation functions, it could potentially be made to admit a continuous symmetry as well.
## 2 Related Work
The link with **differential equations** has been recognized, and well exploited [8, 16, 20, 4, 10], soon after the introduction of ResNet [11] by He _et al._ in 2015, if not known implicitly before; see also [21] for a more recent analysis. With a few exceptions [16, 20], most discussions do not make an emphasis on _partial_ differential equations, and to the best of our knowledge, little activity has been devoted to _designing_ new architecture from this perspective. Even though a PDE can be regarded as an ODE in which the state space is "infinite-dimensional", or upon discretization, a finite but large system with interactions only between neighboring "pixels," we find the PDE perspective, specifically of hyperbolic systems, more illuminating and _fruitful_ (albeit limited to ConvNets), and deserves more attention and further study in the broader Deep Learning community.
A related but distinct field of research is using Deep Learning methods to solve various PDEs of interests to physicists, applied mathematicians, and engineers. We shall only mention two pioneering works that have attracted the most attention: Fourier Neural Operators (FNO) [15] and Physics-informed Neural Networks (PINN) [19].
**Symmetry** and **equivariance** often appear in the theoretical discussions of ConvNets and Graph Neural Networks (GNN) [6, 3], though it's worth pointing out the distinction from our usage: More often, we say a neural network has a (translational or permutation) _symmetry_, or is _equivariant_ or _invariant_ (under translations or permutations), if when we transform the input in a certain way, the output is also transformed accordingly, or does not change at all; the model itself remains fixed. In our scenario, we are directly transforming the weights of the model, which incidentally does not need to be trained. Nevertheless, much of our work involves finding a good enough model that achieves comparable performance on standard training sets. (It shall be apparent that, as with conventional ConvNets, our model is also equivariant under translations.)
One type of operation, known under the term **structural reparametrization**[7], can claim to modify the model after training. However, it can only merge consecutive layers or operations that are both linear; the basic example is conv followed by batchnorm. As such, it is better regarded as a trick in training: for whatever reason, it is better to train with a deeper and more complicated architecture than is necessary for the model, and is fundamentally different from the kind of symmetry that our model has.
## 3 Designing ConvNets from the PDE perspective
Given that ResNet is numerically solving a particular system of PDEs,
\[\frac{\partial u_{i}}{\partial t} = \sigma\biggl{(}\sum_{j}L_{ij}u_{j}\biggr{)}\,,\quad\mbox{for}\quad i =1,\ldots,n\] \[L_{ij} := \alpha_{ij}\frac{\partial}{\partial x}+\beta_{ij}\frac{\partial} {\partial y}+\gamma_{ij}\frac{\partial^{2}}{\partial x\partial y}+\cdots\]
of \(n\) unknowns \(u_{i}\equiv u_{i}(x,y,t)\), with initial condition at \(t=0\), wherein the coefficients are learned (for background, see Appendix), it is natural to take inspiration from other PDEs as found in mathematics and physics, and see what new ConvNet architecture would come out. Here are some natural changes that one could make:
* Make the constant coefficients _variables_ (of \(x\) and \(y\)), e.g., simply as linear or polynomial functions. The equation would still be linear, but now the space of PDEs would include this special equation: (\(n=1\)) \[\frac{\partial u}{\partial t}=-y\,\frac{\partial u}{\partial x}+x\,\frac{ \partial u}{\partial y}\,,\] which is solved by simply _rotating_ the initial data \(f(x,y)\) by angle \(t\) (around the origin): \[u(x,y,t)=f(x\cos t-y\sin t,x\sin t+y\cos t)\,,\] as can be readily verified (a symbolic check is sketched after this list). It is reasonable to expect that such a variation on ResNet would allow the model to make rotations and dilations -- in addition to translations -- on the input image.
* One might think that the standard "zero padding" of conv layer corresponds to the Dirichlet boundary condition: the value of \(u\) on the boundary being fixed at a prescribed value for \(t>0\). On closer inspection, it is slightly different. Furthermore, one could also experiment with other boundary conditions; the other natural one in physics is the Neumann condition, that the _normal derivative_ of \(u\) on the boundary is prescribed. The different conditions have the effect that the signals would "bounce back" off the boundary differently.
* In a typical PDE, the matrix of coefficients is constant or slowly varying with time \(t\), while in neural networks the weights from different layers are initialized independently, drawn from a (normal) distribution. One could try to force the weights from neighboring layers to correlate, either by weight-sharing or by introducing a term in the loss function that penalizes large variations between layers.
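As a quick sanity check of the rotation example in the first item, one can verify symbolically that \(u(x,y,t)=f(x\cos t-y\sin t,\,x\sin t+y\cos t)\) satisfies \(u_{t}=-y\,u_{x}+x\,u_{y}\) for an arbitrary smooth \(f\). The snippet below is our own check, not part of the paper:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.Function('f')

# candidate solution: rotate the initial data f by angle t
u = f(x*sp.cos(t) - y*sp.sin(t), x*sp.sin(t) + y*sp.cos(t))

# residual of u_t = -y u_x + x u_y; should simplify to 0
residual = sp.diff(u, t) + y*sp.diff(u, x) - x*sp.diff(u, y)
print(sp.simplify(residual))   # expected: 0
```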
Having experimented with some of these ideas on small datasets, we did not find a specific variation that yields convincing results on the full ImageNet. We then looked into ways that the coefficients may depend on \(u\) itself, which makes
the equation nonlinear (apart from the activation functions). It may be viewed as a kind of "dynamic kernel," but we draw inspiration from a class of PDEs called _quasi-linear hyperbolic systems_, which may be the simplest, well-studied nonlinear systems for which the number of equations can be arbitrary.
In two spatial and one time dimensions, a first-order **quasi-linear system** is typically of the form
\[\frac{\partial u_{i}}{\partial t}=\sum_{j}\mathcal{A}_{ij}(u)\frac{\partial u_ {j}}{\partial x}+\sum_{j}\mathcal{B}_{ij}(u)\frac{\partial u_{j}}{\partial y}\,, \tag{1}\]
where the coefficient matrices may depend on \(u\) (but not derivatives of \(u\)), and it is **hyperbolic** (see Footnote 3) if any linear combination of \(\mathcal{A}\) and \(\mathcal{B}\) is diagonalizable with only _real_ eigenvalues (e.g., \(\mathcal{A}\) and \(\mathcal{B}\) are symmetric) for _any_ value of \(u\). Leaving aside the latter condition, the simplest example is to make each entry a linear function of \(u\):
Footnote 3: The designation may sound cryptic. It originates from the classic wave equation, which has some semblance in form with the equation of a hyperbola or hyperboloid; see Appendix §A.3. To avoid any confusion, it is _not_ related to the hyperbolic plane/space/geometry that has made its way into machine learning.
\[\mathcal{A}_{ij}(u)=\sum_{k}\mathcal{A}_{ijk}u_{k}\,, \tag{2}\]
and similarly for \(\mathcal{B}\). By dimension count, such a tensor would be very large (for large \(n\)), and it would deviate too much from typical ConvNets. Instead, we shall restrict to
\[\frac{\partial u_{i}}{\partial t}=\sum_{j}A_{ij}\sum_{k}C_{jk}u_{k}\frac{ \partial u_{j}}{\partial x}+\sum_{j}B_{ij}\sum_{k}D_{jk}u_{k}\frac{\partial u_ {j}}{\partial y}\,, \tag{3}\]
which is straightforward to turn into a ConvNet (see §5.1 and Fig. 1 for details), and the number of parameters is kept at a reasonable level. Since nonlinearity is already built-in, we thought it would not be necessary to add activation functions, at least not at every turn; and much to our surprise, the model trains just as well, if not better. With this simple change, the model now has a _continuous_ symmetry, from mixing of the channels, that is not present in conventional ConvNets with coordinate-wise activation functions after every conv layer, and we believe it is a more significant contribution than matching or breaking the state of the art (SOTA). It is likely that, with enough compute, ingenuity, and perhaps techniques from Neural Architecture Search, variations of this architecture could compete with the best image models of comparable size. (We have not tried to incorporate the modifications listed earlier, as they are all linear, and we wish to explore the new nonlinearity on its own.)
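For concreteness, one reading of Eq. (3) as a residual block (cf. Fig. 1) is sketched below. This is our simplified interpretation rather than the authors' reference code: the \(x\)- and \(y\)-derivative terms (the pairs \(A,C\) and \(B,D\)) are folded into a single 3x3/1x1 conv pair, and the time step `dt` is an assumed hyperparameter.

```python
import torch
import torch.nn as nn

class QuasiLinearBlock(nn.Module):
    """Sketch of Eq. (3) as a residual block; the nonlinearity is the
    elementwise product of the two branches, with no activation function."""

    def __init__(self, channels, dt=0.1):
        super().__init__()
        self.dt = dt
        # a 3x3 conv can express finite-difference stencils for du/dx, du/dy
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.coeff = nn.Conv2d(channels, channels, 1, bias=False)   # C u (resp. D u)
        self.mix = nn.Conv2d(channels, channels, 1, bias=False)     # A (resp. B)

    def forward(self, u):
        return u + self.dt * self.mix(self.coeff(u) * self.spatial(u))
```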
It is observed that, once we remove all the activation functions, or use the standard ReLU, the training is prone to breaking down: it would fail at a particular epoch, with one or more samples (either from the training or validation sets) causing the network to output NaN, and the model could not recover from it. To mitigate this, we add activation functions such as hardtanh that clip off large
values, only once every few blocks. It is also observed that resuming training with a smaller learning rate may get around the "bad regions" of the parameter space. More analyses are needed to determine the precise nature and cause of this phenomenon (it might be related to the formation of "shock waves" in nonlinear hyperbolic equations [1]), and perhaps other ways to avoid it.
We provide here the details of the activation functions. In standard PyTorch [18], nn.Hardtanh is implemented as
\[\text{hardtanh}(x):=\begin{cases}\text{max\_val}&x>\text{max\_val}\\ \text{min\_val}&x<\text{min\_val}\\ x&\text{otherwise}\,.\end{cases}\]
We typically use \(\pm 1\) as the clip-off values. We also introduce two multi-dimensional variants that we call "hardball" and "softball," which appear to give better performance. Hardball is so defined that it takes a vector \(\boldsymbol{x}\in\mathbb{R}^{n}\) and maps it into the ball of radius \(R\),
\[\text{hardball}(\boldsymbol{x}):=\begin{cases}\boldsymbol{x}&|\boldsymbol{x}| <R\\ R\boldsymbol{x}/|\boldsymbol{x}|&|\boldsymbol{x}|\geq R\,,\end{cases}\]
where \(|\boldsymbol{x}|\) is the Euclidean norm. We set \(R\) to be the square root of \(n\) (the number of channels), though other choices may be better. Softball is a soft version,
\[\text{softball}(\boldsymbol{x}):=\frac{\boldsymbol{x}}{\sqrt{1+|\boldsymbol{x }|^{2}/R^{2}}}\,,\]
and they are both spherically symmetric. (One may perhaps regard them as normalization layers rather than activation functions.)
## 4 Symmetry of the model
The symmetry of our model would depend on the specific implementation, and may be more complicated than one would naively expect. We shall first consider it on the level of the PDE.
With a change of coordinates \(\tilde{u}_{i}=\sum_{j}T_{ij}u_{j}\) for an invertible matrix \(T\), the general equation (1) with (2)
\[\frac{\partial u_{i}}{\partial t}=\sum_{j,k}\mathcal{A}_{ijk}u_{k}\frac{ \partial u_{j}}{\partial x}+\sum_{j,k}\mathcal{B}_{ijk}u_{k}\frac{\partial u_{ j}}{\partial y}\]
would transform _only_ in the coefficient tensors \(\mathcal{A}_{ijk}\) and \(\mathcal{B}_{ijk}\). Indeed,
(For clarity, we omit the second half involving \(\mathcal{B}\) and \(\frac{\partial}{\partial y}\).)
\[\frac{\partial\tilde{u}_{i}}{\partial t} =\sum_{j}T_{ij}\frac{\partial u_{j}}{\partial t}\] \[=\sum_{j}T_{ij}\sum_{k,l}\mathcal{A}_{jkl}u_{l}\frac{\partial u_{ k}}{\partial x}\] \[=\sum_{j}T_{ij}\sum_{k,l}\mathcal{A}_{jkl}\sum_{m}T_{lm}^{-1} \tilde{u}_{m}\sum_{r}T_{kr}^{-1}\frac{\partial\tilde{u}_{r}}{\partial x}\] \[=\sum_{m,r}\Biggl{(}\underbrace{\sum_{j,k,l}T_{ij}\mathcal{A}_{jkl }T_{lm}^{-1}T_{kr}^{-1}}_{\tilde{\mathcal{A}}_{irm}}\Biggr{)}\tilde{u}_{m} \frac{\partial\tilde{u}_{r}}{\partial x}\,.\]
We note in passing that similar calculations are commonplace in _classical_ differential geometry when making a change of coordinates _on the base space_. Here, we are making a change of coordinates on the "dependent" variables. From a more abstract point of view, this is the induced representation of \(GL(V)\) on the tensor product \(V\otimes V^{*}\otimes V^{*}\cong\mathrm{Hom}(V\otimes V,\,V)\) for a vector space \(V\cong\mathbb{R}^{n}\).
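The transformation rule is also easy to check numerically. The following sketch (our own illustration, with arbitrary random data) builds \(\tilde{\mathcal{A}}\) with einsum and verifies that the two coefficient tensors generate the same dynamics at a point:

```python
import torch

torch.manual_seed(0)
n = 8
A = torch.randn(n, n, n, dtype=torch.float64)   # coefficient tensor A_{ijk}
T = torch.randn(n, n, dtype=torch.float64) + 4 * torch.eye(n, dtype=torch.float64)
Tinv = torch.linalg.inv(T)

# Transformed tensor from the derivation above:
# Ã_{irm} = sum_{j,k,l} T_{ij} A_{jkl} T^{-1}_{lm} T^{-1}_{kr}
A_t = torch.einsum('ij,jkl,lm,kr->irm', T, A, Tinv, Tinv)

u  = torch.randn(n, dtype=torch.float64)   # field values u_j at a point
ux = torch.randn(n, dtype=torch.float64)   # spatial derivatives du_j/dx

dudt   = torch.einsum('ijk,k,j->i', A, u, ux)            # old coordinates
dudt_t = torch.einsum('irm,m,r->i', A_t, T @ u, T @ ux)  # new coordinates
assert torch.allclose(T @ dudt, dudt_t)                  # same dynamics
```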
On the level of the neural network, we only need to make sure that the \(T^{-1}\) comes from the previous layer, i.e., it is the inverse of the \(T\) that appears in transforming the previous block (if different). With such a transformation at each block, we find that the overall symmetry of the model is
\[\mathrm{Sym}(\mathrm{model})=\prod_{i}G_{i}\,,\]
with
\[G_{i}=\begin{cases}S_{n}&\sigma=\text{relu, or any element-wise activation function}\\ O(n)&\sigma=\text{hardball, softball, etc.}\end{cases}\]
(The group \(O(n):=\{M\in GL(n,\mathbb{R})\mid MM^{T}=I_{n}\}\) is known as the _orthogonal group_; it preserves the Euclidean norm on \(\mathbb{R}^{n}\).)
As noted before, this "fully connected" block would be too costly to train, if we are to match ResNet in which the last stage uses as many as \(n=512\) channels. One simple way to reduce the number of parameters is to make the tensor "block diagonal", and the transformation would only mix channels from the same block. The bottleneck block of ResNet addresses this by shrinking the number of channels before applying the 3x3 conv, but a similar approach would introduce additional layers that shield the main operations that we would like to transform.
If we are to take \(\mathcal{A}_{ijk}\) to be of the special form as in Eq. (3), i.e., as the product of two matrices \(A_{ij}C_{jk}\) (no summation), then it is not guaranteed that
the transformed tensor would still factorize in the same way, for a generic \(T\). By simple dimension count, the set of tensors that are factorizable in the prescribed way is a subvariety (submanifold) of dimension at most \(2n^{2}\), and one wishes to find a \(T\in GL(n,\mathbb{R})\) that keeps the resulting \(\tilde{\mathcal{A}}_{ijk}\) on the subvariety. Given that the dimension of \(GL(n,\mathbb{R})\) is \(n^{2}\) and that \(2n^{2}+n^{2}\ll n^{3}\), it is not _a priori_ obvious that such transformations exist, apart from simple scalings and \(S_{n}\), that are universal for all \(\mathcal{A}_{ijk}\).
What one may hope for is that, for a specific \(\mathcal{A}_{ijk}\) that factors, we can find such a \(T\). For example, if for some pair of indices \(j,j^{\prime}\), we have \(A_{ij}C_{jk}=A_{ij^{\prime}}C_{j^{\prime}k}\) for all \(i,k\), then we can perform an invertible linear transformation (e.g., a rotation) in the plane of the \(j\) and \(j^{\prime}\) directions:
\[\begin{cases}\tilde{u}_{j}=\alpha u_{j}+\beta u_{j^{\prime}}\\ \tilde{u}_{j^{\prime}}=\gamma u_{j}+\delta u_{j^{\prime}}\end{cases}\qquad \alpha\delta-\beta\gamma\neq 0\,.\]
Further investigation, either theoretical or numerical, may be needed to answer this question satisfactorily. It may be the case that there exists a symmetry that preserves the output _not_ for all inputs, but only those inputs that are "similar" to the dataset. It would be a weaker form of symmetry, but no less useful in practice.
Lastly, it should be remarked that there is a trivial "symmetry" in the tensor \(\mathcal{A}_{ijk}\) in its last two indices, i.e., \(\mathcal{A}_{ijk}\) and \(\mathcal{A}_{ikj}\) can be interchanged (so long as their sum is fixed). One may regard this as a redundancy in the parameter space rather than a mixing of the channels, for we can force \(\mathcal{A}_{ijk}=\mathcal{A}_{ikj}\), or \(\mathcal{A}_{ijk}=0\) for \(j<k\), and reduce the dimension roughly by half. We have not exploited this in the present work.
## 5 Experimental Results
### Details of the Architecture
How do we turn Eq. (3) into a ConvNet? We first make the differential operators \(\partial/\partial x\) and \(\partial/\partial y\) into 3x3 convolutional kernels, but we allow the weights to be trainable instead of fixed. Each incoming channel (each \(u_{j}\)) would split into 2 -- or better, 4 -- channels, and this is conveniently implemented in nn.Conv2d by setting groups to equal the number of input channels, as in "depthwise convolution" [5]. The matrices are simply 1x1 conv layers, with \(A\) and \(B\) stacked into one, and \(C\) and \(D\) stacked into one. Batchnorm is applied after the 3x3 and at the end. As with standard ResNet, the time derivative turns into the skip connection, and we arrive at the architecture of a single block as illustrated in Fig. 1; a minimal sketch of the block in code follows below. We do not enforce the symmetry of these matrices that would make the equation hyperbolic in the technical sense. As a general rule, we need not strictly follow the equation, but take the liberty of relaxing the weights whenever convenient.
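The following PyTorch sketch is our reading of this description; the exact placement of batchnorm and the contraction over derivative directions are our assumptions and may differ from the reference implementation:

```python
import torch
import torch.nn as nn

class QuasiLinearBlock(nn.Module):
    """Sketch of one block implementing Eq. (3); our reading of Fig. 1."""
    def __init__(self, n: int, k: int = 4):
        super().__init__()
        # Trainable "derivative" kernels: depthwise 3x3, each channel u_j
        # splits into k channels (k = 2 for d/dx and d/dy, or better, k = 4).
        self.deriv = nn.Conv2d(n, k * n, 3, padding=1, groups=n, bias=False)
        self.bn1 = nn.BatchNorm2d(k * n)
        # 1x1 convs playing the role of the matrices (A, B stacked into one;
        # C, D stacked into one).
        self.AB = nn.Conv2d(k * n, k * n, 1, bias=False)
        self.CD = nn.Conv2d(n, k * n, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(n)

    def forward(self, u):
        du = self.bn1(self.deriv(u))        # discrete spatial derivatives
        prod = self.AB(du) * self.CD(u)     # quasi-linear terms, e.g. (A du/dx)*(C u)
        # Contract the k directional terms back to n channels by summation.
        b, _, h, w = prod.shape
        out = prod.view(b, u.size(1), -1, h, w).sum(dim=2)
        return u + self.bn2(out)            # time derivative -> skip connection
```

Note that no activation function appears in the block itself, in line with the discussion above.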
One novelty is to make the weights of the 3x3 conv shared across the groups, which would make the transformations easier to implement. One may achieve this by making an nn.Conv2d with 1 input channel and 4 output channels; at forward pass, we "repeat" the \(4\times 1\times 3\times 3\) weights \(n\) times along the output-channel dimension before acting on the input tensor with groups set to \(n\) (see the sketch below). Most of our experiments are with this "minimalist" 3x3 conv, except the ones marked "no ws" (no weight-sharing) in Table 1.
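Under our assumptions about the tensor layout, the weight-sharing trick looks roughly as follows (a sketch, not the reference code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedDepthwiseDeriv(nn.Module):
    """3x3 'derivative' kernels shared across all n channels (the 'ws' variant)."""
    def __init__(self, k: int = 4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(k, 1, 3, 3) * 0.1)  # k shared kernels

    def forward(self, x):
        n = x.size(1)
        # Repeat the k x 1 x 3 x 3 weights n times along the output-channel
        # dimension, then apply as a depthwise conv with groups = n.
        w = self.weight.repeat(n, 1, 1, 1)  # (k*n) x 1 x 3 x 3
        return F.conv2d(x, w, padding=1, groups=n)
```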
It may be possible to implement this kind of "variable-coefficient convolution" _natively_, instead of using the existing torch.nn layer which is tailored to conventional convolutions.
For the full design of the neural network, we simply take the classic ResNet50 with [3,4,6,3] as the numbers of blocks in the four stages. No activation function is applied except once only in each stage (e.g., at each downsampling), and we use nn.Hardtanh or our variants, hardball and softball (see §3 for definitions), instead of ReLU, lest the training fail completely or the resulting model not work as well.
### Experiments
As is standard in computer vision since the 2012 Deep Learning revolution, we trained our model on the task of image classification on the ImageNet dataset. ImageNet-1k contains 1000 classes of labeled images; for the sake of faster iterations, we primarily trained on a 100-class subset on a single GPU, while maintaining the standard image size of 224x224.
We use the timm library of PyTorch image models [24] for best practices in implementing ResNet and its training [25]. We took the official training script for ResNeXt-50 (SGD, cosine learning rate, with a warmup of 5 epochs, batch size of 192, etc.) except that the peak learning rate is set to 0.3 instead of 0.6, and the total number of epochs is set to 50. The results are in Table 1, where we mainly record two ways of modifying the model: changing only the activation function, and altering the placements of the conv layers within the block, corresponding to modifying Eq. (3) into Eqs. (4)-(7):
\[\frac{\partial u_{i}}{\partial t} =\sum_{k}C_{ik}u_{k}\sum_{j}A_{ij}\frac{\partial u_{j}}{\partial x }+\sum_{k}D_{ik}u_{k}\sum_{j}B_{ij}\frac{\partial u_{j}}{\partial y} \tag{4}\] \[\frac{\partial u_{i}}{\partial t} =\sum_{k}C_{ik}u_{k}\frac{\partial u_{i}}{\partial x}+\sum_{k}D_{ ik}u_{k}\frac{\partial u_{i}}{\partial y}\] (5) \[\frac{\partial u_{i}}{\partial t} =\sum_{j}A_{ij}\frac{\partial}{\partial x}\sum_{k}C_{jk}u_{j}u_{k} +\sum_{j}B_{ij}\frac{\partial}{\partial y}\sum_{k}C_{jk}u_{j}u_{k}\] (6) \[\frac{\partial u_{i}}{\partial t} =\sum_{j}A_{ij}\frac{\partial}{\partial x}\sum_{k}C_{jk}u_{j}u_{k} +\sum_{j}B_{ij}\frac{\partial}{\partial y}\sum_{k}D_{jk}u_{j}u_{k} \tag{7}\]
Note that Eq. (5) only has each \(u_{i}\) depending on its own derivatives, and the model is thus smaller and limited in expressivity or capacity. Eqs. (6) and (7) have the derivative acting on the product \(u_{j}u_{k}\), and it is often called a system of _conservation laws_. They can easily be rewritten in the form of Eq. (3), and the difference in performance may be attributable simply to the model size.
It is expected that, when going to the full ImageNet-1k, and allowing for longer training and hyperparameter tuning, the best-performing model may be different from the ones we found in Table 1.
We refrain from making assertions on _why_ -- or _if_ -- this class of PDEs is superior as a base model for ConvNets, or _how_ the theory of PDE can provide the ultimate answer to the _effectiveness_ of neural networks as universal function approximators by means of gradient descent. Whatever mechanisms make ResNet work also make our model work.
## 6 Conclusion
We present a new ConvNet architecture inspired by a class of PDEs called quasi-linear hyperbolic systems and, with preliminary experiments, found a simple implementation that showed promising results. Although the close connection between PDEs and ConvNets is known within small circles, ours is the first architecture design based directly on a nonlinear PDE, and as a result we are able to remove most of the activation functions, which are generally regarded as indispensable. The new architecture admits a continuous symmetry that could be exploited, hopefully in future works. We expect this work to open up a new direction in neural architecture design, to demonstrate the power of the PDE perspective on ConvNets, and to bring other concepts and techniques from nonlinear PDE, both theoretical and numerical, to bear on the understanding of neural networks.
\begin{table}
\begin{tabular}{c c c c} model & \#parameters & top1-acc & activation \\ \hline ResNet50 & 23.7M & 84.52 & \\ MobileNet\_v3\_large & 4.33M & 82.91 & \\ \hline Eq. (3) & 8.61M & 82.06 & relu@all \\ Eq. (3) & 8.61M & 82.34 & hardtanh@ds \\ Eq. (3) & 8.61M & 83.50 & hardball@ds \\ Eq. (3) & 8.61M & 83.66 & softball@ds \\ Eq. (3) (no ws) & 8.73M & 84.24 & softball@ds \\ Eq. (3) (no ws, x6) & 13.0M & 84.58 & softball@ds \\ \hline Eq. (4) & 5.70M & 81.88 & \\ Eq. (5) & 4.26M & 78.64 & \\ Eq. (6) & 5.61M & 82.52 & softball@ds \\ Eq. (7) (no ws, x6) & 13.0M & **84.96** & \\ \end{tabular}
\end{table}
Table 1: Performance on a 100-class subset of ImageNet-1k, trained for 50 epochs with identical training strategy. For our model, activation is applied either at the end of each block (@all), or only at downsampling (@ds). Inside the block, the number of channels increases by a factor of 4, except when indicated with “x6”.
## Acknowledgements
This work is supported in part by Provincial Key R&D Program of Zhejiang under contract No. 2021C01016, in part by Young Elite Scientists Sponsorship Program by CAST under contract No. 2022QNRC001.
## Addendum
It has come to our attention that the recent work [27] and related ones are highly relevant and largely complementary to ours, in terms of exploiting the kind of continuous symmetry presented here. Specifically, our activation functions, hardball and softball, are examples of _radial_ activation functions, and the symmetries are more aptly described as symmetries _in the parameter space_. We apologize for our omission in the published version.
|
2302.07579 | Semi-Supervised Deep Regression with Uncertainty Consistency and
Variational Model Ensembling via Bayesian Neural Networks | Deep regression is an important problem with numerous applications. These
range from computer vision tasks such as age estimation from photographs, to
medical tasks such as ejection fraction estimation from echocardiograms for
disease tracking. Semi-supervised approaches for deep regression are notably
under-explored compared to classification and segmentation tasks, however.
Unlike classification tasks, which rely on thresholding functions for
generating class pseudo-labels, regression tasks use real number target
predictions directly as pseudo-labels, making them more sensitive to prediction
quality. In this work, we propose a novel approach to semi-supervised
regression, namely Uncertainty-Consistent Variational Model Ensembling (UCVME),
which improves training by generating high-quality pseudo-labels and
uncertainty estimates for heteroscedastic regression. Given that aleatoric
uncertainty is only dependent on input data by definition and should be equal
for the same inputs, we present a novel uncertainty consistency loss for
co-trained models. Our consistency loss significantly improves uncertainty
estimates and allows higher quality pseudo-labels to be assigned greater
importance under heteroscedastic regression. Furthermore, we introduce a novel
variational model ensembling approach to reduce prediction noise and generate
more robust pseudo-labels. We analytically show our method generates higher
quality targets for unlabeled data and further improves training. Experiments
show that our method outperforms state-of-the-art alternatives on different
tasks and can be competitive with supervised methods that use full labels. Our
code is available at https://github.com/xmed-lab/UCVME. | Weihang Dai, Xiaomeng Li, Kwang-Ting Cheng | 2023-02-15T10:40:51Z | http://arxiv.org/abs/2302.07579v1 | Semi-Supervised Deep Regression with Uncertainty Consistency and Variational Model Ensembling via Bayesian Neural Networks
###### Abstract
Deep regression is an important problem with numerous applications. These range from computer vision tasks such as age estimation from photographs, to medical tasks such as ejection fraction estimation from echocardiograms for disease tracking. Semi-supervised approaches for deep regression are notably under-explored compared to classification and segmentation tasks, however. Unlike classification tasks, which rely on thresholding functions for generating class pseudo-labels, regression tasks use real number target predictions directly as pseudo-labels, making them more sensitive to prediction quality. In this work, we propose a novel approach to semi-supervised regression, namely Uncertainty-Consistent Variational Model Ensembling (UCVME), which improves training by generating high-quality pseudo-labels and uncertainty estimates for heteroscedastic regression. Given that aleatoric uncertainty is only dependent on input data by definition and should be equal for the same inputs, we present a novel uncertainty consistency loss for co-trained models. Our consistency loss significantly improves uncertainty estimates and allows higher quality pseudo-labels to be assigned greater importance under heteroscedastic regression. Furthermore, we introduce a novel variational model ensembling approach to reduce prediction noise and generate more robust pseudo-labels. We analytically show our method generates higher quality targets for unlabeled data and further improves training. Experiments show that our method outperforms state-of-the-art alternatives on different tasks and can be competitive with supervised methods that use full labels 1.
Footnote 1: Code is available at [https://github.com/xmed-lab/UCVME](https://github.com/xmed-lab/UCVME)
## Introduction
Deep learning has achieved state-of-the-art results on a variety of tasks such as classification [10], segmentation [14], image generation [1], and others. These methods tend to require large amounts of labeled data for training, however, which can be costly to annotate. State-of-the-art image classifiers such as ViT are trained on the JFT-300M dataset, for example, which consists of 300 million images [15]. Labeling can also be prohibitively expensive for medical image analysis, where life-saving tasks such as medical disease diagnoses [11] and tumor segmentation [11] require domain expertise. The ability to train neural networks with reduced labels is therefore highly valuable and an active research area.
Semi-supervised learning uses unlabeled data together with a smaller labeled dataset for model training. These methods reduce reliance on labeled data and sometimes outperform state-of-the-art techniques on fully labeled datasets. Chen _et al_. [14] propose CPS, a semi-supervised algorithm for image segmentation, which is time-consuming to label. Li _et al_. [11] enforce consistency
Figure 1: Differences between standard pseudo-labeling approaches and our method (UCVME). Classification tasks typically apply thresholding functions to probability predictions, \(\tilde{p}^{i}\), to obtain one-hot pseudo-labels, \(\tilde{y}^{i}\). Regression tasks use real number target predictions, \(\tilde{y}\), directly as pseudo-labels and are therefore more sensitive to prediction quality. Our UCVME improves pseudo-labels for regression by considering pseudo-label uncertainty, \(\sigma^{2}\), and robustness. We use a novel uncertainty consistency loss to improve uncertainty-based loss weighting and a variational model ensembling method to improve pseudo-label quality.
between transformed inputs for medical diagnosis, which requires specialist knowledge for annotation. However, comparatively less attention has been paid to deep regression problems, which cover practical applications such as age estimation [1] and pose estimation [20] from images. Deep regression is particularly important in the medical field as it is used to obtain measurements for disease diagnosis and progression tracking, such as bone mineral density estimation for osteoporosis [14] and ejection fraction estimation for cardiomyopathy [21].
Regression problems are fundamentally different from classification problems because they generate real number predictions instead of class probabilities. Existing semi-supervised classification techniques _cannot be applied to semi-supervised regression_ because they rely on class probabilities and thresholding functions to generate pseudo-labels [15, 16] (see Fig. 1). Limited efforts have been devoted to exploring semi-supervised approaches for deep regression. Recent works by Jean _et al_. [1] propose deep kernel learning for semi-supervised regression, but their method is designed for tabular data. Pretrained feature extractors are used for image inputs, which prevents task-specific feature learning and limits performance. Wetzel _et al_. [20] propose TNNR, which estimates the difference between inputs with deep networks and uses loop consistency for unlabeled data. Loop consistency regulates training, but poor-quality predictions can still reduce the effectiveness of the constraints (see Tables 1 and 4).
Unlike classification tasks, which can smooth predictions using thresholding functions for class pseudo-labeling, regression tasks directly use real number target predictions as pseudo-labels. Therefore, model performance highly depends on the quality of pseudo-labels, _i.e._ predictions. In this paper, we propose a novel Uncertainty-Consistent Variational Model Ensembling method, namely UCVME, that adjusts for the uncertainty of pseudo-labels during training and increases pseudo-label robustness. Our method is based on two key ideas: enforcing uncertainty consistency between co-trained models to improve uncertainty-based loss weighting, and using ensembling techniques to reduce prediction variance for obtaining higher quality pseudo-labels.
We make use of Bayesian neural networks (BNNs), which predict aleatoric uncertainty of observations jointly with the target value. The uncertainty estimates are used for heteroscedastic regression, which assigns sample weightings based on uncertainty to reduce the impact of noisier samples [1]. We observe that aleatoric uncertainty, which by definition is dependent only on input data, _should be equal for the same input,_ and propose a novel consistency loss for uncertainty predictions of co-trained models. Our proposed loss notably improves aleatoric uncertainty estimates on unlabeled data, such that higher quality pseudo-labels are given greater importance through heteroscedastic regression (see Fig. 5). This is non-trivial since unreliable uncertainty estimates can lead to adverse loss-weighting and unstable training. Our proposed method is _the first to address uncertainty estimation quality for regression_.
BNNs also use variational inference during prediction to approximate the underlying distribution of estimates. To improve robustness of pseudo-labels, we introduce variational model ensembling, which uses ensembling methods with variational inference to reduce prediction noise. We analytically show our approach generates higher quality targets for unlabeled data and validate results experimentally (see Table 2). The combined improvements in uncertainty estimation and pseudo-label quality lead to state-of-the-art performance. Fig. 2 illustrates the overall framework.
We demonstrate our method on two regression tasks: age estimation from photographs and ejection fraction estimation from echocardiogram videos. Results show our method outperforms state-of-the-art alternatives and is competitive with supervised approaches using full labels (see Tables 1 and 4). Ablations demonstrate individual contributions from uncertainty consistency and variational model ensembling (Table 2). We summarize our main contributions as follows:
* We propose UCVME, a novel semi-supervised method that improves uncertainty estimates and pseudo-label robustness for deep regression tasks.
* We introduce a novel consistency loss for aleatoric uncertainty predictions of co-trained models, based on the insight that estimates should be equal for the same input.
* We introduce variational model ensembling for generating pseudo-labels on unlabeled data, which we analytically show is more accurate than deterministic methods.
* Results show our method outperforms existing state-of-the-art alternatives on two separate regression tasks.
## Related Works
In this section, we review works on learning from unlabeled data, general approaches for semi-supervised learning, state-of-the-art methods for semi-supervised regression, and existing methods for uncertainty estimation.
### Unsupervised Representation Learning
One way to learn from unlabeled data is to learn unsupervised feature representations, which can then be finetuned for specific tasks using classifiers. Techniques such as
Figure 2: Semi-supervised deep regression framework for our UCVME method. UCVME improves overall pseudo-label quality and assigns greater sample weights to pseudo-labels with low uncertainty.
PCA [1] and data clustering [12] learn intermediate features by reducing input dimensionality. With the increasing effectiveness of deep learning, pre-text tasks such as input reconstruction [13], augmentation prediction [14], and order prediction [15] have been explored for unsupervised training of deep feature extractors. Current state-of-the-art approaches are based on contrastive learning, which has been shown in some cases to outperform supervised learning [16, 17].
### Semi-Supervised Learning
Semi-supervised learning uses both labeled and unlabeled data for training. This reflects realistic settings where raw data is easy to obtain but annotations can be costly. State-of-the-art methods include enforcing consistency on augmented inputs and using pseudo-labels for unlabeled samples. For example, CCT [1] applies prediction consistency after perturbing intermediate features. CPS [17] enforces consistency of segmentation predictions between co-trained models. Temporal ensembling [18] and mean-teacher [19] methods use prediction and model-weight ensembling respectively to generate pseudo-labels. FixMatch [10] and FlexMatch [14] use class probability thresholding for pseudo-labeling to achieve state-of-the-art results on semi-supervised classification. Similar techniques have been applied to video action recognition [20], image generation [10, 11], medical image segmentation [12, 13, 21, 22], and other tasks.
### Semi-Supervised Regression
Regression problems are fundamentally different from classification as they involve predicting real numbers in \(\mathbb{R}\) instead of class probabilities. Semi-supervised classification methods, which use thresholding functions to select high-probability class pseudo-labels [10, 14], cannot be adapted to regression tasks _because there is no equivalent to probability thresholding for real number predictions_. Different formulations must be used instead to quantify prediction uncertainty for regression tasks.
Less attention has been paid to semi-supervised deep regression despite its importance [1, 14, 15, 16]. Semi-supervised regression is especially valuable in medical image analysis, since regression tasks are widely used and annotation costs are high [15, 16]. COREG [17] is a semi-supervised regression technique originally proposed in 2005 but still commonly used today. Two KNN regressors are co-trained and used to generate pseudo-labels for unlabeled data. Co-training schemes have also been extended to support vector regression by Xu _et al_. [20]. Graph-based methods proposed in [18] make use of input proximity for pseudo-labeling. More recent works by Jean _et al_. [14] and Mallick _et al_. [15] make use of deep kernel learning for regression.
One major disadvantage of these methods is that they are primarily designed for structured inputs, where samples consist of one-dimensional tabular data. Feature extractors cannot be trained end-to-end for unstructured inputs such as images and video. Jean _et al_. [14] for example rely on feature extractors pretrained on ImageNet [14] to obtain one dimensional embeddings from images. This limits performance as task-specific features cannot be learned (see results in Tables 1 and 4). TNNR [16] is an alternative method that uses deep networks to predict differences between input pairs. Loop consistency is applied to ensure looped differences sum to zero. Although loop consistency helps regularize training on unlabeled data, inaccurate predictions can limit its effectiveness (see Tables 1 and 4).
Unlike previous works, we address semi-supervised deep regression by improving uncertainty estimation and pseudo-label quality for real number targets. Our UCVME method, which proposes a novel uncertainty consistency loss and variational model ensembling, allows training to be focused on high-quality, robust pseudo-label targets and achieves state-of-the-art results on different regression tasks.
### Uncertainty Estimation
Uncertainty estimation is commonly used in semi-supervised learning to adjust for pseudo-label quality of unlabeled samples. UA-MT [21] and UMCT [15] both use Monte Carlo dropout to estimate pseudo-label uncertainty, which is then used to filter pseudo-labels or weight unlabeled samples for segmentation tasks. Yao _et al_. [21] and Lin _et al_. [16] estimate uncertainty based on prediction differences between co-trained models for segmentation of medical images. Semi-supervised classification methods such as FixMatch [10] and FlexMatch [14] implicitly filter out uncertain pseudo-labels by setting confidence thresholds for predictions.
Uncertainty estimation approaches designed for semi-supervised deep regression have not been explored in existing works, however. Although methods such as heteroscedastic regression can be used to estimate uncertainty, it can only be done through joint prediction with the target label [15]. Naive implementation using pseudo-labels gives unreliable estimates, which can lead to inaccurate pseudo-labels being assigned larger weights (see Fig. 5). In this work, we propose a novel uncertainty consistency loss that significantly improves the quality of uncertainty estimates on unlabeled data. This results in more effective uncertainty-based sample weighting and leads to state-of-the-art performance on different semi-supervised deep regression tasks.
## Methodology
UCVME is based on two novel ideas: enforcing aleatoric uncertainty consistency to improve uncertainty-based loss weighting, and variational model ensembling for generating high-quality pseudo-labels. We make use of Bayesian
neural networks, which differ from regular neural networks by their usage of aleatoric uncertainty prediction and variational inference [2]. We denote \(\mathcal{D}\coloneqq\{(x_{i},y_{i})\}_{i=1}^{N}\) as the labeled dataset consisting of \(N\) samples, where \(x_{i}\) is the input data and \(y_{i}\) is its corresponding label. We denote \(\mathcal{D}^{\prime}\coloneqq\{x_{i^{\prime}}^{\prime}\}_{i^{\prime}=1}^{N^{ \prime}}\) as the unlabeled dataset consisting of input data only. We train two BNNs, \(f_{m}\) where \(m\in\{a,b\}\), in a co-training framework and use Monte Carlo dropout for training and inference. We denote \(\hat{y}_{i,m}\) as model \(m\)'s prediction for target label \(y_{i}\). We denote \(\sigma_{i}^{2}\) as aleatoric uncertainty but predict log-uncertainty \(\ln\sigma_{i}^{2}\) in practice, which is always done to avoid obtaining negative predictions for variance. We denote predicted log-uncertainty using \(\hat{z}_{i,m}\).
### Aleatoric Uncertainty Consistency Loss for Improved Heteroscedastic Regression
Aleatoric uncertainty, \(\sigma_{i}^{2}\), refers to uncertainty relating to input data. It is used in BNNs as the variance parameter for heteroscedastic regression loss:
\[\mathcal{L}_{reg}=\frac{1}{N}\sum_{i=1}^{N}\frac{(y_{i}-\hat{y}_{i})^{2}}{2 \sigma_{i}^{2}}+\frac{\ln\sigma_{i}^{2}}{2}. \tag{1}\]
where \(\hat{y}_{i}\) is the prediction for target label \(y_{i}\). Intuitively, the loss function weighs error values dynamically based on aleatoric uncertainty. Samples with high uncertainty are regarded as having lower quality labels with higher noise, and these are given less importance compared to those with greater certainty [2]. Its formal derivation is based on maximum likelihood estimation, assuming observation errors are distributed with different levels of variance [2]. In contrast, standard mean squared error (MSE) loss assumes homoscedastic errors, _i.e._ uncertainty values \(\sigma_{i}^{2}\) have equal variance, which is a more restrictive and unrealistic assumption. We refer interested readers to Sup-1 of the supplementary materials for a review of formal derivations and comparisons.
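In code, Eq. (1) is a one-liner once the log-uncertainty \(\hat{z}=\ln\sigma^{2}\) is predicted by a second output head, as in the notation above; a minimal sketch:

```python
import torch

def heteroscedastic_loss(y_hat, z_hat, y):
    """Eq. (1), with z_hat = ln(sigma^2) predicted instead of sigma^2 itself."""
    return ((y - y_hat) ** 2 / (2 * z_hat.exp()) + z_hat / 2).mean()
```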
Heteroscedastic regression can be beneficial for unlabeled data as it allows samples to be weighted based on pseudo-label uncertainty. In practice however, uncertainty prediction is difficult because uncertainty has no ground truth label and must be jointly predicted with the target value. Unstable predictions that do not reflect label quality can adversely affect training by assigning noisier samples with larger weights. _Stable training is even more difficult for unlabeled data_ because the target ground truth value is also unavailable, which is why heteroscedastic regression has not been successfully used in existing semi-supervised works. We show this effect in Fig. 5, where we see uncertainty predictions obtained using heteroscedastic regression only can be unreliable.
We observe that aleatoric uncertainty for the same input data _should be equal by definition_ and introduce a novel consistency loss to enforce consistent uncertainty predictions between co-trained models. Prediction consistency is known to be an effective regularizer [2] and can be applied to both labeled and unlabeled data to improve estimates. By ensuring uncertainty predictions from co-trained models are consistent, we provide an extra training signal in addition to joint estimation with the target label, which helps the model learn more reliable predictions. For labeled inputs, we introduce consistency loss, \(\mathcal{L}_{unc}^{lb}\):
\[\mathcal{L}_{unc}^{lb}=\frac{1}{N}\sum_{i=1}^{N}(\hat{z}_{i,a}-\hat{z}_{i,b})^ {2}\, \tag{2}\]
which is based on L2 distance. Heteroscedastic regression loss is calculated using the uncertainty predictions:
\[\mathcal{L}_{reg}^{lb}=\frac{1}{N}\sum_{m=a,b}\ \sum_{i=1}^{N}\left(\frac{(\hat{y}_{i,m }-y_{i})^{2}}{2\exp(\hat{z}_{i,m})}+\frac{\hat{z}_{i,m}}{2}\right). \tag{3}\]
For unlabeled data, ground truth target labels for \(y\) are unavailable, which makes joint uncertainty prediction challenging. We instead make use of variational model ensembling to obtain pseudo-labels for log-uncertainty, \(\widetilde{z}_{i}\), which is used as the training target. We describe variational model ensembling for unlabeled samples in the subsection below.
### Variational Model Ensembling for Pseudo-label Generation
BNNs use Monte Carlo dropout and variational inference to estimate the distribution of the predictor \(\hat{y}\). To reduce prediction noise, we can use ensembling techniques that reduce predictor variance, which can be demonstrated through bias-variance decomposition. The performance of predictor \(\hat{y}\) can be evaluated using expected MSE, which we decompose using bias-variance decomposition as follows:
\[E[(\hat{y}_{i}-y_{i})^{2}]=(E[\hat{y}_{i}]-y_{i})^{2}+E[(\hat{y}_{i}-E[\hat{y} _{i}])^{2}]\, \tag{4}\]
where the first right-hand side term is the bias and the second is the variance. If we take individual sample predictions from variational inference, \(\hat{y}_{i}^{\ t}\), and obtain an ensemble to form a new predictor \(\widetilde{y}_{i}\), we have:
\[\widetilde{y}_{i}=\frac{1}{T}\sum_{t=1}^{T}\hat{y}_{i}^{\ t}\, \tag{5}\]
where \(T\) is the number of samples used. The expected MSE loss of the predictor \(\widetilde{y}_{i}\) is then:
\[E[(\widetilde{y}_{i}-y_{i})^{2}]=(E[\widetilde{y}_{i}]-y_{i})^{2}+E[( \widetilde{y}_{i}-E[\widetilde{y}_{i}])^{2}]. \tag{6}\]
The bias terms of the predictors are equal, but the variance term in equation 6 cannot be greater than in equation 4 because more samples are observed (see Sup-2 of supplementary materials for more detailed derivations). This means predictor \(\widetilde{y}_{i}\) will have expected MSE lower than or equal to that of \(\hat{y}_{i}\), and thus never lower quality.
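This variance-reduction effect can be illustrated with a toy simulation (ours, not from the paper), comparing the empirical MSE of single noisy predictions against their \(T\)-sample average:

```python
import torch

torch.manual_seed(0)
y, bias, sigma, T = 2.0, 0.3, 1.0, 5

single = y + bias + sigma * torch.randn(100_000)
ensembled = y + bias + sigma * torch.randn(100_000, T).mean(dim=1)

print(((single - y) ** 2).mean())     # ~ bias^2 + sigma^2     = 1.09
print(((ensembled - y) ** 2).mean())  # ~ bias^2 + sigma^2 / T = 0.29
```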
Based on this effect, we propose variational model ensembling for generating pseudo-labels on both the target value \(\widetilde{y}_{i}\) and the log aleatoric uncertainty \(\widetilde{z}_{i}\). Whereas pseudo-labels for co-trained models typically rely on cross-supervision in state-of-the-art approaches [2, 2], we ensemble the average estimate of the co-trained models and apply variational inference:
\[\widetilde{y}_{i}=\frac{1}{T}\sum_{t=1}^{T}\frac{\hat{y}_{i,a}^{\ t}+\hat{y}_{i,b}^ {\ t}}{2}\, \tag{7}\]
\[\widetilde{z}_{i}=\frac{1}{T}\sum_{t=1}^{T}\frac{\hat{z}_{i,a}^{\ t}+\hat{z}_{i,b }^{\ t}}{2}. \tag{8}\]
and use this as the pseudo-label for training. Compared to cross-supervision, pseudo-labels calculated using variational model ensembling are _more accurate because of reduced predictive variance_ and better reflect the true target and uncertainty values. This is especially important for regression targets because pseudo-labels directly use real number predictions and do not rely on thresholding functions for smoothing. Uncertainty consistency on unlabeled data is then calculated using \(\widetilde{z}_{i}\) as the training target:
\[\mathcal{L}^{ulb}_{unc}=\frac{1}{N^{\prime}}\sum_{m=a,b}\ \sum_{i=1}^{N^{\prime}}( \hat{z}_{i,m}-\widetilde{z}_{i})^{2}. \tag{9}\]
Heteroscedastic regression loss for unlabeled data is calculated using \(\widetilde{y}_{i}\) as the target and \(\widetilde{z}_{i}\) as the log-uncertainty:
\[\mathcal{L}^{ulb}_{reg}=\frac{1}{N^{\prime}}\sum_{m=a,b}\ \sum_{i=1}^{N^{\prime}} \left(\frac{(\hat{y}_{i,m}-\widetilde{y}_{i})^{2}}{2\exp(\widetilde{z}_{i})}+ \frac{\widetilde{z}_{i}}{2}\right). \tag{10}\]
The improved pseudo-labels lead to more stable heteroscedastic regression, which generates better training signals on unlabeled data.
### Overall Semi-Supervised Framework
During training, we calculate heteroscedastic regression loss \(\mathcal{L}^{lb}_{reg}\) and aleatoric uncertainty consistency loss \(\mathcal{L}^{lb}_{unc}\) using labeled samples. Pseudo-labels for unlabeled data are generated using Eq. 7 and 8 at the start of every training iteration with the most current model weights. The loss values for \(\mathcal{L}^{ulb}_{reg}\) and \(\mathcal{L}^{ulb}_{unc}\) are calculated for the unlabeled data and jointly optimized with labeled data using the total loss:
\[\mathcal{L}= \mathcal{L}^{lb}_{reg}+\mathcal{L}^{lb}_{unc}+w_{ulb}\ (\ \mathcal{L}^{ulb}_{reg}+\mathcal{L}^{ulb}_{unc}\ )\, \tag{11}\]
where \(w_{ulb}\) is the weighting parameter for unlabeled data. Variational model ensembling is also used for test-time inference to obtain \(\widetilde{y}_{i}\) as the final prediction. Pseudo-code is given in S-Algorithm 1 of the supplementary materials.
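A condensed sketch of one training iteration, summarizing Eqs. (2)-(10) (names and structure are ours; see S-Algorithm 1 for the authors' pseudo-code). Each model is assumed to return a (prediction, log-uncertainty) pair, with dropout layers kept active so that repeated passes are variational samples:

```python
import torch

def ucvme_loss(model_a, model_b, x_lb, y_lb, x_ulb, w_ulb=10.0, T=5):
    # Pseudo-labels via variational model ensembling (Eqs. 7-8).
    with torch.no_grad():
        samples = [(model_a(x_ulb), model_b(x_ulb)) for _ in range(T)]
        y_t = torch.stack([(a[0] + b[0]) / 2 for a, b in samples]).mean(0)
        z_t = torch.stack([(a[1] + b[1]) / 2 for a, b in samples]).mean(0)

    loss, zs_lb = 0.0, []
    for model in (model_a, model_b):
        y_hat, z_hat = model(x_lb)   # labeled pass
        loss += ((y_lb - y_hat) ** 2 / (2 * z_hat.exp()) + z_hat / 2).mean()  # Eq. (3)
        zs_lb.append(z_hat)
        y_u, z_u = model(x_ulb)      # unlabeled pass
        loss += w_ulb * (
            ((y_u - y_t) ** 2 / (2 * z_t.exp()) + z_t / 2).mean()  # Eq. (10)
            + ((z_u - z_t) ** 2).mean()                            # Eq. (9)
        )
    loss += ((zs_lb[0] - zs_lb[1]) ** 2).mean()                    # Eq. (2)
    return loss
```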
## Experiments
We demonstrate our method on two semi-supervised deep regression problems: age estimation from photographs and ejection fraction estimation from echocardiogram videos. 2
Footnote 2: Code is available at [https://github.com/xmed-lab/UCVME](https://github.com/xmed-lab/UCVME).
### Age Estimation from Photographs
Age estimation involves predicting a person's age based on their photograph, and is commonly used as a benchmark task for deep regression. Facial images can be easily obtained, but accurate age labels may not always be available given concerns over data privacy. Semi-supervised deep regression methods can provide label-efficient approaches for training.
#### Dataset
We use the UTKFace dataset [15] and follow the train-test split in previous works [11]. A total of 13,144 images are available for training and 3,287 images for testing. We use a subset of the training dataset for validation. Faces have been pre-cropped and age labels range from 21 to 60 (see Figure 3 for examples). For our semi-supervised setting, we use subsets of the training data as labeled data and the remaining as unlabeled data. Label distributions are shown in S-Fig. 1 of the supplementary materials.
#### Settings
We use ResNet-50 [14] as our encoder and add additional dropout layers after each of the four main residual blocks. The model is trained for 30 epochs using learning rate \(10^{-4}\), weight decay \(10^{-3}\), and the Adam optimizer. We use a batch size of 32 for both labeled and unlabeled data. We set dropout probability as 5% and use \(T=5\) for variational inference. We set \(w_{ulb}=10\) which we choose empirically (see S-Table 1 in supplementary materials). Mean absolute error (MAE) and \(R^{2}\) are used for evaluation on the test set. Experiments are run six times and mean results are reported with standard deviation.
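Test-time variational inference with MC dropout can be sketched as follows; keeping dropout stochastic while freezing batchnorm statistics is a common convention that we assume here, as the paper does not spell out this detail:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def mc_predict(model: nn.Module, x: torch.Tensor, T: int = 5) -> torch.Tensor:
    model.eval()                      # freeze batchnorm running statistics
    for m in model.modules():         # ...but keep dropout stochastic
        if isinstance(m, nn.Dropout):
            m.train()
    # Average T variational samples of the target prediction (Eq. 5);
    # the model is assumed to return a (prediction, log-uncertainty) pair.
    return torch.stack([model(x)[0] for _ in range(T)]).mean(0)
```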
#### Comparison with state-of-the-art
We compare our method with alternative state-of-the-art approaches for semi-supervised regression, specifically COREG [13], SSDPKL [12], and TNNR [20], and also adapt mean-teacher [16] and temporal ensembling [16] methods for regression. To highlight the impact of our proposed components, we introduce a baseline method (_Baseline_) that uses two co-trained BNNs with heteroscedastic regression loss, but _without_ aleatoric uncertainty consistency loss and variational model ensembling. We perform training under different semi-supervised settings using only 5%, 10%, and 20% of the available training labels. The remaining samples are treated as unlabeled data. For reference, we also show results using the supervised state-of-the-art method by Berg _et al_. [13] (_RNDB_) for the same settings using reduced labels, as well as on the fully labeled dataset.
Figure 3: Sample data from UTKFace dataset [15] for age estimation. Pre-cropped images are paired with age labels for training.
The same ResNet-50 encoder [16] is used in all methods for fair comparison. We also modify COREG to use co-trained deep regression models instead of KNN regression and use a pretrained feature encoder for SSDPKL to obtain image features. Additional implementation details are included in Sup-3 of supplementary materials. We show results in Table 1 and visually plot them in Fig. 4.
We can see from Fig. 4 that the supervised approach (blue) under-performs semi-supervised approaches in general. Our method (red) gives the best results and achieves the lowest MAE values for all settings. We also note that our method achieves performance competitive with fully supervised results using only 20% of available training labels (MAE 4.85 _v.s._ 4.83). UCVME therefore effectively reduces reliance on labeled data for deep regression.
Uncertainty predictions should reflect pseudo-label quality, since higher uncertainty means more prediction noise and lower quality labels. We obtain pseudo-labels and uncertainty predictions for unlabeled samples using the _Baseline_ and _Baseline_ + _Con._ models. Samples are grouped into ten equal bins based on sorted uncertainty predictions. Pseudo-label quality is measured using MSE against the ground truth target value. Average aleatoric uncertainty is calculated for each group. The two values are plotted against each other in Fig. 5 using models trained after five and twenty epochs.
For the _Baseline_ + _Con._ model, we see overall MSE and uncertainty both decrease after training for more epochs (solid red line _v.s._ solid blue line). Pseudo-labels with lower uncertainty have lower MSE and are of higher quality. In contrast, uncertainty predictions from the _Baseline_ model are more extreme (dashed lines) and are not significantly reduced with training (dashed red line _v.s._ dashed blue line). The relationship between uncertainty and pseudo-label quality is not strong, which can lead to noisier samples being assigned higher loss weightings. Aleatoric uncertainty consistency loss therefore _improves uncertainty estimates significantly and helps prevent adverse sample weighting_.
#### Computational Cost
We show in Table 3 the computational cost of different semi-supervised deep regression approaches in gigaFLOPs per image (G). We note that the cost of UCVME is dependent on \(T\), the number of iterations used for variational model ensembling, which is set to 5. For reference, we also show the cost from using a single iteration, \(T=1\), which is equivalent to enforcing uncertainty consistency only, without using variational model ensembling.
We can see our UCVME method with \(T=1\) incurs the same cost as COREG. Regression performance outperforms COREG however due to the use of uncertainty consistency, which can be seen from results in Tables 1 and 2 (MAE 5.30 _v.s._ 5.39). Using variational model ensembling by setting \(T=5\) leads to more computation but better predictions. Although mean-teacher, temporal ensembling, and SSDPKL require less computation, they do not perform as well.
For our semi-supervised setting, we use subsets of the training data as labeled data and the remaining samples as unlabeled data. Label distributions are given in S-Fig. 2 of the supplementary materials.
#### Settings
We use the R2+1D ResNet encoder Tran et al. (2018) pretrained on Kinetics 400 Kay et al. (2017) and add additional dropout layers between the four main residual blocks. We set dropout probability as 5% and use \(T=5\) for variational inference. The model is trained using SGD with \(10^{-4}\) learning rate and 0.9 momentum for 25 epochs. Learning rate is decayed by 0.1 at epoch 15. Clips of 32 frames are sampled from videos at a rate of 1 in every 2 frames for input. Batches of 10 clips are used for labeled and unlabeled videos. We set \(w_{ulb}=10\) which we choose empirically (see S-Table 2 of supplementary materials). We evaluate performance using MAE and \(R^{2}\). Experiments are run five times and mean results with standard deviation are reported.
#### Comparison with state-of-the-art
We compare our method with mean-teacher Tarvainen and Valpola (2017), temporal ensembling Laine and Aila (2016), COREG Zhou et al. (2005), SSDPKL Mallick et al. (2021), TNNR Wetzel et al. (2021), and our baseline model (_Baseline_). We perform training under settings where one-sixteenth, one-eighth, and one-quarter of the training labels are used, with the remainder treated as unlabeled data. For reference, we also show results using the supervised method by Ouyang _et al_. Ouyang et al. (2020) on the reduced labels as well as on the fully labeled dataset. The Kinetics pre-trained R2+1D ResNet encoder Tran et al. (2018) is used in all methods for fair comparison. Additional implementation details are given in Sup-4 of the supplementary materials. Results are shown in Table 4 and plotted in Fig. 7.
Our method consistently achieves the best results for all settings by significant margins. We are also able to achieve an MAE of 4.37 using a quarter of the labels, which is only a relative 5.8% higher than the 4.13 MAE achieved by Ouyang _et al_. on fully labeled data. Our proposed method therefore reduces the number of labels required for training which is highly valuable for medical regression tasks.
## Conclusion
In this work, we introduce a novel Uncertainty-Consistent Variational Model Ensembling (UCVME) method for semi-supervised deep regression. Our method improves training on unlabeled data by adjusting for pseudo-label quality and improving pseudo-label robustness. We introduce a novel consistency loss on uncertainty estimates, which we demonstrate
\begin{table}
\begin{tabular}{l|l|l l l l|c} & \multicolumn{4}{c}{MAE Values \(\downarrow\)} \\ \hline Type & Method & Encoder & 1/16 labeled & 1/8 labeled & 1/4 labeled & All labels \\ \hline Supervised & Ouyang _et al_. Ouyang et al. (2020) & R2+1D & 6.04 \(\pm\) 0.20 & 5.57 \(\pm\) 0.21 & 4.78 \(\pm\) 0.11 & **4.13 \(\pm\) 3.85** \\ \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & Mean-teacher Tarvainen and Valpola (2017) & R2+1D & 6.01 \(\pm\) 0.09 & 5.51 \(\pm\) 0.06 & 4.71 \(\pm\) 0.07 & - \\ & Temporal ensembling Laine and Aila (2016) & R2+1D & 5.97 \(\pm\) 0.08 & 5.52 \(\pm\) 0.06 & 4.67 \(\pm\) 0.06 & - \\ & SSDPKL Jean et al. (2018) & R2+1D & 6.01 \(\pm\) 0.04 & 5.47 \(\pm\) 0.01 & 4.68 \(\pm\) 0.07 & - \\ & TNNR Wetzel et al. (2021) & R2+1D & 5.90 \(\pm\) 0.11 & 5.46 \(\pm\) 0.08 & 4.79 \(\pm\) 0.08 & - \\ & COREG Zhou et al. (2005) & R2+1D & 5.94 \(\pm\) 0.07 & 5.31 \(\pm\) 0.02 & 4.57 \(\pm\) 0.02 & - \\ & Baseline & R2+1D & 5.93 \(\pm\) 0.10 & 5.36 \(\pm\) 0.05 & 4.58 \(\pm\) 0.03 & - \\ & Ours & R2+1D & **5.77 \(\pm\) 0.04** & **5.10 \(\pm\) 0.05** & **4.37 \(\pm\) 0.05** & - \\ \hline \multicolumn{7}{c}{\(\mathbf{R^{2}}\) values \(\uparrow\)} \\ \hline Type & Method & Encoder & 1/16 labeled & 1/8 labeled & 1/4 labeled & All labels \\ \hline Supervised & Ouyang _et al_. Ouyang et al. (2020) & R2+1D & 55.3\% \(\pm\) 2.6 & 62.5\% \(\pm\) 2.2 & 71.6\% \(\pm\) 1.4 & **80.4\% \(\pm\) 1.2** \\ \hline \multirow{8}{*}{
\begin{tabular}{} \end{tabular} } & Mean-teacher Tarvainen and Valpola (2017) & R2+1D & 55.1% \(\pm\) 1.4 & 62.9\% \(\pm\) 0.7 & 72.5\% \(\pm\) 0.4 & - \\ & Temporal ensembling Laine and Aila (2016) & R2+1D & 55.2\% \(\pm\) 1.3 & 62.9\% \(\pm\) 0.7 & 73.2\% \(\pm\) 0.3 & - \\ & SSDPKL Jean et al. (2018) & R2+1D & 56.3\% \(\pm\) 1.0 & 61.2\% \(\pm\) 0.3 & 74.1\% \(\pm\) 1.0 & - \\ & TNNR Wetzel et al. (2021) & R2+1D & 55.9% \(\pm\) 1.2 & 63.4\% \(\pm\) 0.8 & 73.7\% \(\pm\) 0.6 & - \\ & COREG Zhou et al. (2005) & R2+1D & 55.1% \(\pm\) 0.7 & 64.5\% \(\pm\) 0.4 & 74.1\% \(\pm\) 0.1 & - \\ & Baseline & R2+1D & 55.2\% \(\pm\) 1.4 & 64.9\% \(\pm\) 0.3 & 74.5\% \(\pm\) 0.1 & - \\ & Ours & R2+1D & **57.8\% \(\pm\) 0.6** & **66.6\% \(\pm\) 0.5** & **76.3\% \(\pm\) 0.6** & - \\ \hline \end{tabular}
\end{table}
Table 4: Comparison with state-of-the-art methods for ejection fraction estimation from echocardiogram video. We use settings where only 1/16, 1/8, and 1/4 of training labels are available. “Supervised” methods are only able to use labeled data while “Semi-supervised” methods can use labeled and remaining unlabeled data. “Baseline” method uses two co-trained BNNs without uncertainty consistency loss and variational model ensembling. **Bold** numbers represent the best result.
Figure 7: MAE of different state-of-the-art methods for ejection fraction estimation. Our proposed method (red) consistently outperforms alternatives by significant margins.
significantly improves heteroscedastic loss weighting, especially for unlabeled samples. We also use variational model ensembling to reduce prediction noise and generate better training targets for unlabeled data. Our method has strong theoretical support and can be applied to different tasks and datasets. We demonstrate this using two deep regression tasks based on image and video data and achieve state-of-the-art performance for both. Results are also competitive with supervised methods using full labels. UCVME is therefore a valuable method for reducing the amount of labels required for deep regression tasks.
## Acknowledgement
The work described in this paper was supported by a grant from Hong Kong Research Grants Council General Research Fund (GRF) (16203319), a grant from HKUST-BICI Exploratory Fund under HCIC-004, and a grant from Hong Kong Innovation and Technology Commission (Project no. ITS/030/21).
|
2310.12403 | Cooperative Minibatching in Graph Neural Networks | Training large scale Graph Neural Networks (GNNs) requires significant
computational resources, and the process is highly data-intensive. One of the
most effective ways to reduce resource requirements is minibatch training
coupled with graph sampling. GNNs have the unique property that items in a
minibatch have overlapping data. However, the commonly implemented Independent
Minibatching approach assigns each Processing Element (PE, i.e., cores and/or
GPUs) its own minibatch to process, leading to duplicated computations and
input data access across PEs. This amplifies the Neighborhood Explosion
Phenomenon (NEP), which is the main bottleneck limiting scaling. To reduce the
effects of NEP in the multi-PE setting, we propose a new approach called
Cooperative Minibatching. Our approach capitalizes on the fact that the size of
the sampled subgraph is a concave function of the batch size, leading to
significant reductions in the amount of work as batch sizes increase. Hence, it
is favorable for processors equipped with a fast interconnect to work on a
large minibatch together as a single larger processor, instead of working on
separate smaller minibatches, even though global batch size is identical. We
also show how to take advantage of the same phenomenon in serial execution by
generating dependent consecutive minibatches. Our experimental evaluations show
up to 4x bandwidth savings for fetching vertex embeddings, by simply increasing
this dependency without harming model convergence. Combining our proposed
approaches, we achieve up to 64% speedup over Independent Minibatching on
single-node multi-GPU systems, using same resources. | Muhammed Fatih Balin, Dominique LaSalle, Ümit V. Çatalyürek | 2023-10-19T01:15:24Z | http://arxiv.org/abs/2310.12403v3 | # Cooperative Minibatching in Graph Neural Networks
###### Abstract
Significant computational resources are required to train Graph Neural Networks (GNNs) at a large scale, and the process is highly data-intensive. One of the most effective ways to reduce resource requirements is minibatch training coupled with graph sampling. GNNs have the unique property that items in a minibatch have overlapping data. However, the commonly implemented Independent Minibatching approach assigns each Processing Element (PE) its own minibatch to process, leading to duplicated computations and input data access across PEs. This amplifies the Neighborhood Explosion Phenomenon (NEP), which is the main bottleneck limiting scaling. To reduce the effects of NEP in the multi-PE setting, we propose a new approach called Cooperative Minibatching. Our approach capitalizes on the fact that the size of the sampled subgraph is a concave function of the batch size, leading to significant reductions in the amount of work per seed vertex as batch sizes increase. Hence, it is favorable for processors equipped with a fast interconnect to work on a large minibatch together as a single larger processor, instead of working on separate smaller minibatches, even though global batch size is identical. We also show how to take advantage of the same phenomenon in serial execution by generating dependent consecutive minibatches. Our experimental evaluations show up to 4x bandwidth savings for fetching vertex embeddings, by simply increasing this dependency without harming model convergence. Combining our proposed approaches, we achieve up to 64% speedup over Independent Minibatching on single-node multi-GPU systems.
## 1 Introduction
Graph Neural Networks (GNNs) have become the de facto deep learning models for unstructured data, achieving state-of-the-art results on various application domains involving graph data such as recommendation systems (Wu et al., 2020; Ying et al., 2018), fraud detection (Liu et al., 2022; Patel et al., 2022), identity resolution (Xu et al., 2019), and traffic forecasting (Jiang and Luo, 2022). However, as the usage of technology continues to increase, the amount of data generated by these applications is growing exponentially, resulting in large and complex graphs that are infeasible or too time-consuming to train on a single processing element (Ying et al., 2018; Zhu et al., 2019). For example, some social media graphs are reaching billions of vertices and trillions of interactions (Ching et al., 2015). Efficient distributed training of GNNs is essential if the value extracted from such large-scale unstructured data is to exceed the cost of storing and processing it.
Due to the popularity of Deep Neural Networks (DNNs) and the need to support larger models and datasets, a great deal of research has focused on increasing the scale and efficiency of distributed
DNN training. Techniques such as data parallelism (Ginsburg et al., 2017; Goyal et al., 2018), pipelining (Narayanan et al., 2019), and intra-layer parallelism (Dean et al., 2012) have been employed. Following the success of traditional distributed DNN training, the same techniques have also been adapted to GNN training, such as data parallelism (Gandhi and Iyer, 2021; Lin et al., 2020; Zheng et al., 2021; Zhu et al., 2019) and intra-layer parallelism (Tripathy et al., 2020).
The parallelization techniques mentioned earlier are used to scale both full-batch and minibatch training in a distributed setting. Minibatch training (Bertsekas, 1994) is the go-to method to train DNN models as it outperforms full-batch training in terms of convergence (Allen-Zhu and Hazan, 2016; Li et al., 2014; Keskar et al., 2016; Wilson and Martinez, 2003), and more recently has been shown to also offer the same benefit for GNNs (Zheng et al., 2021). In the distributed setting, minibatch training for DNNs using data parallelism is straightforward. The training samples are partitioned across the Processing Elements (PE) and they compute the forward/backward operations on their minibatches. The only communication required is an all-reduce operation for the gradients.
Unfortunately, minibatch training a GNN model is more challenging than training a usual DNN model. GNNs turn a given graph encoding relationships into computational dependencies. Thus in an \(L\)-layer GNN model, each minibatch computation has a different structure as it is performed on the \(L\)-hop neighborhood of the minibatch vertices. Real-world graphs are usually power-law graphs (Artico et al., 2020) with small diameters, so it is a challenge to train deep GNN models as the \(L\)-hop neighborhood grows exponentially w.r.t. \(L\), reaching almost the whole graph within a few hops.
Very large GNN datasets necessitate storing the graph and node embeddings on slower storage mediums. To enable GNN training efficiently in such a setting, several techniques have been proposed (Park et al., 2022; Waleffe et al., 2022). These studies assume that the graph and its features are stored on disks or SSDs and design their systems to reduce data transfers. The methods proposed in this paper directly apply to these settings by reducing bandwidth requirements, as seen in Section 4.
A single epoch of full-batch GNN training requires computation proportional to the number of layers \(L\) and the size of the graph. However, minibatch training requires more operations to process a single epoch due to repeating calculations in the \(2\)nd through \(L\)th layers. As the batch size decreases, the number of repeated calculations increases. This is because the vertices and edges have to be processed each time they appear in the \(L\)-hop neighborhood. Thus, it is natural to conclude that using effectively larger batch sizes in GNNs reduces the number of computations and data accesses of an epoch in contrast to regular DNN models. Our contributions in this work utilizing this important observation can be listed as follows:
* Investigate work vs. batch size relationship and present theorems stating the cost of processing a minibatch is a concave function of the batch size (Theorems 3.1 and 3.2).
* Utilize this relationship by combining data and intra-layer parallelism to process a minibatch across multiple PEs for reduced work (Section 3.1), with identical global batch size. We call this new approach _Cooperative Minibatching_.
* Use the same idea to generate consecutive dependent minibatches to increase temporal vertex embedding access locality (Section 3.2). This approach can reduce the transfer amount of vertex embeddings up to \(4\times\), without harming model convergence.
* Show that the two approaches are orthogonal. Together, the reduced work and decreased cache miss rates result in up to \(1.64\times\) speedup over Independent Minibatching with identical global batch size.
## 2 Background
A graph \(\mathcal{G}=(V,E)\) consists of vertices \(V\) and edges \(E\subset V\times V\) along with optional edge weights \(A_{ts}>0,\forall(t\to s)\in E\). Given a vertex \(s\), the \(1\)-hop neighborhood \(N(s)\) is defined as \(N(s)=\{t|(t\to s)\in E\}\), and it can be naturally expanded to a set of vertices \(S\) as \(N(S)=\cup_{s\in S}N(s)\).
GNN models work by passing previous layer embeddings (\(H\)) from \(N(s)\) to \(s\), and then combining them using a nonlinear function \(f^{(l)}\) at layer \(l\), given initial vertex features \(H^{(0)}\):
\[H_{s}^{(l+1)}=f^{(l)}(H_{s}^{(l)},\{H_{t}^{(l)}\mid t\in N(s)\}) \tag{1}\]
If the GNN model has \(L\) layers, then the loss is computed by taking the final layer \(L\)'s embeddings and averaging their losses over the set of training vertices \(V_{t}\subset V\) for _full-batch_ training. In \(L\)-layer full-batch training, the total number of vertices that needs to be processed is \(L|V|\).
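To make the recursion concrete, here is a minimal NumPy sketch of one layer of (1); the mean aggregator, ReLU nonlinearity, and weight names are illustrative stand-ins for the model-specific \(f^{(l)}\), not the paper's implementation:

```python
import numpy as np

def gnn_layer(H, in_neighbors, W_self, W_neigh):
    """One layer of Eq. (1): combine each vertex's embedding with an
    aggregate of its in-neighbors' previous-layer embeddings."""
    n, d_out = H.shape[0], W_self.shape[1]
    H_next = np.zeros((n, d_out))
    for s in range(n):
        nbrs = in_neighbors.get(s, [])
        # mean aggregation stands in for the model-specific f^(l)
        agg = np.mean(H[nbrs], axis=0) if nbrs else np.zeros(H.shape[1])
        H_next[s] = np.maximum(0.0, H[s] @ W_self + agg @ W_neigh)  # ReLU
    return H_next

# toy usage: 4 vertices, 3 input features, 2 output features
H0 = np.random.rand(4, 3)
adj = {0: [1, 2], 1: [2], 2: [3], 3: []}
H1 = gnn_layer(H0, adj, np.random.rand(3, 2), np.random.rand(3, 2))
```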
### Minibatching in GNNs
In minibatch training, a random subset of training vertices, called _Seed Vertices_, is selected, and training is done over the (sampled) subgraph composed of \(L\)-hop neighborhood of the seed vertices. On each iteration, minibatch training computes the loss on seed vertices, which are random subsets of the training set \(V_{t}\).
Given a set of vertices \(S\), we define \(l\)-th layer expansion set, or the \(l\)-hop neighborhood \(S^{l}\) as:
\[S^{0}=S,\ \ \ S^{(l+1)}=S^{l}\cup N(S^{l}) \tag{2}\]
For GNN computations, \(S^{l}\) would also denote the set of the required vertices to compute (1) at each layer \(l\). Using the same notation, \(\{s\}^{l}\) denotes \(l\)-layer expansion set starting from single vertex \(s\in V\).
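A direct translation of (2) into code, computing the expansion sets with full neighborhoods (no sampling); the dictionary-based graph representation is an illustrative choice:

```python
def expansion_sets(seeds, in_neighbors, L):
    """Compute S^0..S^L per Eq. (2): S^{l+1} = S^l union N(S^l),
    where N(S) is the union of in-neighborhoods of the vertices in S."""
    S = [set(seeds)]
    for _ in range(L):
        frontier = set()
        for s in S[-1]:
            frontier.update(in_neighbors.get(s, ()))
        S.append(S[-1] | frontier)
    return S

# toy graph: in_neighbors[v] lists vertices with an edge into v
in_neighbors = {0: [1, 2], 1: [2], 2: [3], 3: []}
print([len(Sl) for Sl in expansion_sets({0}, in_neighbors, 3)])  # prints [1, 3, 4, 4]
```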
For a single minibatch iteration, the total number of vertices that need to be processed is \(\sum_{l=1}^{L}|S^{l}|\). There are \(\frac{|V|}{|S^{0}|}\) minibatches assuming \(V_{t}=V\). Since each \(|S^{l}|\geq|S^{0}|\), and a single epoch of minibatch training needs to go over the whole dataset, the work \(W(|S^{0}|)\) for a single epoch is:
\[W(|S^{0}|)=\frac{|V|}{|S^{0}|}\sum_{l=1}^{L}E[|S^{l}|]\geq\frac{|V|}{|S^{0}|} \sum_{l=1}^{L}|S^{0}|=L|V|\]
where \(E[|S^{l}|]\) is the expected number of sampled vertices in layer \(l\) and \(|S^{0}|\) is the batch size. That is, the total amount of work to process a single epoch increases over full-batch training. The increase in work due to minibatch training is thus encoded in the ratios \(\frac{E[|S^{l}|]}{|S^{0}|},1\leq l\leq L\).
Next, we briefly present some of the sampling techniques. When sampling is used with minibatching, the minibatch subgraph becomes random. However, the same argument about the increasing total amount of work holds for sampled minibatches too, as seen in Figure 2.
### Graph Sampling
Below, we review three different sampling algorithms for minibatch training of GNNs. Our focus in this work is on samplers whose expected number of sampled vertices is a function of the batch size. All these methods are applied recursively for GNN models with multiple layers.
#### 2.2.1 Neighbor Sampling (NS)
Given a fanout parameter \(k\) and a batch of seed vertices \(S^{0}\), NS (Hamilton et al., 2017) samples the neighborhoods of vertices randomly: for a vertex \(s\in S^{0}\) with degree \(d_{s}=|N(s)|\), if \(d_{s}\leq k\), NS uses the full neighborhood \(N(s)\); otherwise, it samples \(k\) random neighbors of \(s\).
One way to sample \(k\) edges from \(d_{s}\) options is to use the reservoir algorithm (Vitter, 1985). In this algorithm, the first \(k\) edges are included in the reservoir. For the rest of the \(d_{s}-k\) edges, the \(i\)th edge \((t\to s)\) rolls the uniform random integer \(0\leq r_{ts}\leq i\) and replaces the item in slot \(r_{ts}\) if \(r_{ts}<k\).
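A sketch of that reservoir procedure for drawing \(k\) of a vertex's incoming edges; the function name and the list-based edge representation are ours:

```python
import random

def reservoir_sample_edges(edges, k):
    """Sample k edges uniformly from a neighbor list via the reservoir
    algorithm: keep the first k, then edge i replaces a random
    reservoir slot with probability k/(i+1)."""
    reservoir = list(edges[:k])
    for i in range(k, len(edges)):
        r = random.randint(0, i)  # uniform integer 0 <= r <= i
        if r < k:
            reservoir[r] = edges[i]
    return reservoir
```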
#### 2.2.2 LABOR Sampling
Given a fanout parameter \(k\) and a batch of seed vertices \(S^{0}\), LABOR-0 (Balin & Catalyurek, 2023) samples the neighborhoods of vertices as follows. First, each vertex \(t\) rolls a uniform random number \(0\leq r_{t}\leq 1\). Then, for a vertex \(s\in S^{0}\) with degree \(d_{s}=|N(s)|\), the edge \((t\to s)\) is sampled if \(r_{t}\leq\frac{k}{d_{s}}\). Since different seed vertices in \(S^{0}\) end up using the same random variate \(r_{t}\) for the same source vertex \(t\), LABOR-0 samples fewer vertices than NS in expectation.
The LABOR-* algorithm is the importance sampling variant of LABOR-0 and samples an edge \((t\to s)\) if \(r_{t}\leq c_{s}\pi_{t}\), where \(\pi\) denotes importance sampling probabilities optimized to minimize the expected number of sampled vertices, and \(c_{s}\) is a normalization factor. LABOR-* samples fewer vertices than LABOR-0 in expectation.
Note that, choosing \(k\geq\max_{s\in V}d_{s}\) corresponds to training with full neighborhoods for both NS and LABOR methods.
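The defining property of LABOR-0, a single random variate \(r_{t}\) shared by all seeds that draw on source vertex \(t\), can be sketched as follows (the dictionary caching of \(r_{t}\) is an illustrative implementation choice, not the paper's code):

```python
import random

def labor0_sample(seeds, in_neighbors, k, r=None):
    """LABOR-0 sketch: edge (t -> s) is kept iff r_t <= k/d_s, where the
    uniform variate r_t is shared by every seed s that has t as a source.
    When d_s <= k, k/d_s >= 1, so the full neighborhood is kept."""
    if r is None:
        r = {}  # one random variate per source vertex, shared batch-wide
    sampled = {}
    for s in seeds:
        nbrs = in_neighbors.get(s, [])
        d_s = len(nbrs)
        sampled[s] = [t for t in nbrs
                      if r.setdefault(t, random.random()) <= k / d_s]
    return sampled
```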
#### 2.2.3 RandomWalk Sampling
Given a walk length \(o\), a restart probability \(p\), a number of random walks \(a\), a fanout \(k\), and a batch of vertices \(S^{0}\), for each vertex \(s\in S^{0}\) a _Random Walk_ (Ying et al., 2018) starts from \(s\), with the first step picking a random neighbor \(s^{\prime}\) from \(N(s)\). For each of the remaining \(o-1\) steps, the next neighbor is picked from \(N(s^{\prime})\) with probability \(1-p\); otherwise, it is picked from \(N(s)\). This process is repeated \(a\) times for each seed vertex, and lastly, the top \(k\) most visited vertices become the _neighbors_ of \(s\) for the current layer.
Notice that random walks correspond to weighted neighbor sampling from a graph with adjacency matrix \(\tilde{A}=\sum_{i=1}^{o}A^{i}\), where the weights of \(\tilde{A}\) depend on the parameters \(a\), \(p\) and \(k\). Random walks give us the ability to sample from \(\tilde{A}\) without actually forming \(\tilde{A}\).
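A sketch of this sampler as described above; the names and the guards for empty neighborhoods are our additions:

```python
import random
from collections import Counter

def random_walk_neighbors(s, neighbors, o, p, a, k):
    """Run `a` walks of length `o` from seed s; with probability p a step
    restarts from N(s), otherwise it continues from the current vertex.
    The k most-visited vertices become s's sampled neighbors."""
    visits = Counter()
    for _ in range(a):
        if not neighbors.get(s):
            break
        cur = random.choice(neighbors[s])  # first step always from N(s)
        visits[cur] += 1
        for _ in range(o - 1):
            base = s if random.random() < p else cur
            if not neighbors.get(base):
                break
            cur = random.choice(neighbors[base])
            visits[cur] += 1
    return [v for v, _ in visits.most_common(k)]
```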
### Independent Minibatching
Independent minibatching is commonly used in multi-GPU and distributed GNN training frameworks (Cai et al., 2023; Gandhi and Iyer, 2021; Lin et al., 2020; Zheng et al., 2021; Zhu et al., 2019) to parallelize training and allows scaling to larger problems. Each Processing Element (PE; e.g., a GPU, a CPU, or a core of a multi-core CPU) starts with its own \(S^{0}\) of size \(b\) as the seed vertices and computes \(S^{1},\ldots,S^{L}\) along with the sampled edges to generate minibatches (see Figure 1). Computing \(S^{1},\ldots,S^{L}\) depends on the chosen sampling algorithm, such as the ones explained in Section 2.2.
Independent minibatching has the advantage that doing a forward/backward pass does not involve any communication with other PEs after the initial minibatch preparation stage at the expense of duplicate work (see Figure 1).
## 3 Cooperative Minibatching
In this section, we present two theorems that show the work of an epoch will be monotonically nonincreasing with increasing batch sizes. We provide their proofs in Appendices A.1 and A.2. After that, we propose two algorithms that can take advantage of this monotonicity.
**Theorem 3.1**.: _The work per epoch \(\frac{E[|S^{l}|]}{|S^{0}|}\) required to train a GNN model using minibatch training is monotonically nonincreasing as the batch size \(|S^{0}|\) increases._
**Theorem 3.2**.: _The expected subgraph size \(E[|S^{l}|]\) required to train a GNN model using minibatch training is a concave function of batch size, \(|S^{0}|\)._
Figure 1: Minibatches of two processing elements may share edges in the second layer and vertices starting in the first layer. For independent minibatching, the solid green edges shared by both processing elements represent duplicate work, and input nodes B through G are duplicated along with the directed endpoints of green edges. However for cooperative minibatching, the vertices and edges are partitioned across the PEs with no duplication, and the edges crossing the line between the two PEs necessitate communication.
### Cooperative Minibatching
As explained in Section 2, Independent Minibatching (I.M.) cannot take advantage of the reduction in work with increasing global batch sizes and numbers of PEs, because it uses separate small batches of size \(b\) on each PE for each step of training. On the other hand, one can also keep the global batch size constant, \(bP=|S^{0}|\), and vary the number of processors \(P\). As \(P\) increases, I.M. will perform more and more duplicate work because the local batch size \(b=\frac{|S^{0}|}{P}\) is a decreasing function of \(P\).
Here, we propose the _Cooperative Minibatching_ method that will take advantage of the work reduction with increasing batch sizes in multi-PE settings. In Cooperative Minibatching, a single batch of size \(bP\) will be processed by all the \(P\) PEs in parallel, eliminating any redundant work across all PEs.
We achieve this as follows: we first partition the graph in a 1D fashion by logically assigning each vertex and its incoming edges to PEs, as \(V_{p}\) and \(E_{p}\) for each PE \(p\). Next, PE \(p\) samples its batch of seed vertices \(S^{0}_{p}\) of size \(b\) from the training vertices in \(V_{p}\). Then, using any sampling algorithm, PE \(p\) samples the incoming edges \(E^{l}_{p}\) from \(E_{p}\) for its seed vertices. Each PE then computes the set of sampled vertices \(\tilde{S}^{l+1}_{p}=\{t\mid(t\to s)\in E^{l}_{p}\}\). Note that \(\tilde{S}^{l+1}_{p}\) has elements residing on different PEs. The PEs exchange the vertex ids \(\tilde{S}^{l+1}_{p}\) so that each PE receives the set \(S^{l+1}_{p}\subset V_{p}\). This process is repeated recursively for GNN models with multiple layers by using \(S^{l+1}_{p}\) as the seed vertices for the next layer. The exchanged information is cached to be used during the forward/backward passes.
For the forward/backward passes, the same communication pattern used during cooperative sampling is used to send and receive input and intermediate layer embeddings before each GNN layer invocation. Algorithm 1 details cooperative sampling and cooperative forward/backward passes for a single GNN training iteration. Independent minibatching works the same except that it lacks the all-to-all operations and has \(\tilde{A}^{l+1}_{p}=A^{l+1}_{p}\) for any given variable \(A\) instead. The redistribution of vertices during sampling happens according to the initial graph partitioning and the rest of the redistribution operations follow the same communication pattern, always converting a variable \(\tilde{A}^{l+1}_{p}\) into \(A^{l+1}_{p}\) during the forward pass and \(A^{l+1}_{p}\) into \(\tilde{A}^{l+1}_{p}\) during sampling and the backward passes for any variable \(A\). We refer the reader to Appendix A.3 for the complexity analysis of Cooperative and Independent Minibatching approaches, and to Appendix A.7 to see the relation between the approach proposed here and the work of Jia et al. (2020) on redundancy-free GCN aggregation.
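Since Algorithm 1 itself is in the appendix, the following single-process sketch only illustrates the communication pattern of one cooperative sampling layer; the modulo ownership rule and the `sampler` callback are simplifying assumptions of ours:

```python
def cooperative_sample_layer(seeds_per_pe, in_neighbors, sampler):
    """Sketch of one cooperative layer expansion: each PE samples edges
    only for the seed vertices it owns, then sampled vertex ids are routed
    ('all-to-all') to their owner PEs, so no vertex is expanded on more
    than one PE. Ownership is owner(v) = v % P here, for simplicity."""
    P = len(seeds_per_pe)

    def owner(v):
        return v % P

    tilde_S = [set() for _ in range(P)]  # \tilde{S}^{l+1}_p before exchange
    for p, seeds in enumerate(seeds_per_pe):
        for s in seeds:
            tilde_S[p].update(sampler(s, in_neighbors))
    next_seeds = [set() for _ in range(P)]  # S^{l+1}_p after exchange
    for p in range(P):
        for t in tilde_S[p]:
            next_seeds[owner(t)].add(t)
    return next_seeds
```

Iterating this for \(l=0,\ldots,L-1\) yields the per-PE expansion sets with each vertex expanded on exactly one PE; in a real multi-PE run, the two routing loops become all-to-all exchanges.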
### Cooperative Dependent Minibatching
Just as any parallel algorithm can be executed sequentially, we can reduce the number of distinct data accesses by having a single PE process \(b\)-sized parts of a single \(\kappa b\)-sized minibatch for \(\kappa\) iterations. In light of Theorems 3.1 and 3.2, consider doing the following: choose \(\kappa\in\mathbb{Z}^{+}\), then sample a batch \(S^{0}\) of size \(\kappa b\), i.e., \(\kappa b=|S^{0}|\) to get \(S^{0},\ldots,S^{L}\). Then sample \(\kappa\) minibatches \(S^{0}_{i}\), of size \(b=|S^{0}_{i}|\) from this batch of size \(\kappa b\) to get \(S^{0}_{i},\ldots,S^{L}_{i},\forall i\in\{0,\ldots,\kappa-1\}\). In the end, all of the input features required for these minibatches will be a subset of the input features of the large batch, i.e. \(S^{j}_{i}\subset S^{j},\forall i,j\). This means that the collective input feature requirement of these \(\kappa\) batches will be \(|S^{L}|\), the same as our batch of size \(\kappa b\). Hence, we can now take advantage of the concave growth of the work in Theorem 3.2 and Figure 2.
Note that, if one does not use any sampling algorithms and proceeds to use the full neighborhoods, this technique will not give any benefits, as by definition, the \(l\)-hop neighborhood of a batch of size \(\kappa b\) will always be equal to the union of the \(l\)-hop neighborhoods of batches of sizes \(b\). However for sampling algorithms, any overlapping vertex sampled by any two batches of sizes \(b\) might end up with different random neighborhoods resulting in a larger number of sampled vertices. Thus, having a single large batch ensures that only a single random set of neighbors is used for any vertex processed over a period of \(\kappa\) batches.
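A sketch of the nested scheme just described; it only shows how the seed sets are derived, with the reuse of the large batch's sampled neighborhoods elided:

```python
import random

def dependent_minibatch_seeds(train_vertices, b, kappa, seed=0):
    """Yield groups of kappa dependent minibatches: each group is one
    shuffled kappa*b-sized batch split into kappa size-b seed sets.
    Because each group is drawn from one large batch (whose sampled
    neighborhoods are reused), the group's combined input-feature
    requirement stays within |S^L| of the large batch."""
    rng = random.Random(seed)
    verts = list(train_vertices)
    rng.shuffle(verts)
    for g in range(0, len(verts), kappa * b):
        big_batch = verts[g:g + kappa * b]
        yield [big_batch[i:i + b] for i in range(0, len(big_batch), b)]
```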
The approach described above has a nested iteration structure, and the minibatches in one \(\kappa\) group will differ significantly from those in another group, which might slightly affect convergence. In Appendix A.4, we propose an alternative smoothing approach that does not require two-level nesting and still takes advantage of the same phenomenon for the NS and LABOR sampling algorithms.
The main idea of our smoothing approach is as follows: each time one samples the neighborhood of a vertex, normally it is done independently of any previous sampling attempts. If one were to do it fully dependently, then one would end up with an identical sampled neighborhood at each sampling attempt. What we propose is to do something in between, so that the sampled neighborhood of a vertex changes slowly over time. The speed of change in the sampled neighborhoods is \(\frac{1}{\kappa}\), and after every \(\kappa\) iterations, one gets fully independent new random neighborhoods for all vertices. We experimentally evaluate the locality benefits and the overall effect of this algorithm on convergence in Sections 4.2 and 4.3.1; more details on our smoothing approach are discussed in Appendix A.4.
## 4 Experiments
**Setup:** In our experiments, we use the following datasets: reddit (Hamilton et al., 2017), papers100M (Hu et al., 2020), mag240M (Hu et al., 2021), and yelp and flickr (Zeng et al., 2020); their details are given in Table 1. We use Neighbor Sampling (NS) (Hamilton et al., 2017), LABOR Sampling (Balin & Catalyurek, 2023), and Random Walks (RW) (Ying et al., 2018) to form minibatches. We used a fanout of \(k=10\) for the samplers. In addition, Random Walks used a walk length of \(o=3\), a restart probability of \(p=0.5\), and \(a=100\) random walks from each seed. All our experiments involve a GCN model with \(L=3\) layers (Hamilton et al., 2017), with a hidden dimension of 1024 for mag240M and papers100M and 256 for the rest. Additionally, the papers100M and mag240M datasets were made undirected for all experiments, and this is reflected in the reported edge counts in Table 1. Input features of mag240M are stored in the 16-bit floating-point type. We use the Adam optimizer (Kingma & Ba, 2014) with a \(10^{-3}\) learning rate in all experiments.
\begin{table}
\begin{tabular}{r|r|r|r|r|r|r|r} \hline \hline
**Dataset** & \(|V|\) & \(|E|\) & \(\frac{|E|}{|V|}\) & **\# feats.** & **cache size** & **train - val - test (\%)** & **\# minibatches** \\ \hline flickr & 89.2K & 900K & 10.09 & 500 & 70k & 50.00 - 25.00 - 25.00 & 65 \\ yelp & 717K & 14.0M & 19.52 & 300 & 200k & 75.00 - 10.00 - 15.00 & 595 \\ reddit & 233K & 115M & 493.56 & 602 & 60k & 66.00 - 10.00 - 24.00 & 172 \\ papers100M & 111M & 3.2B & 29.10 & 128 & 2M & 1.09 - 0.11 - 0.19 & 1300 \\ mag240M & 244M & 3.44B & 14.16 & 768 & 2M & 0.45 - 0.06 - 0.04 & 1215 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Traits of the datasets used in experiments: number of vertices, number of edges, average degree, number of features, number of cached vertex embeddings, and the training, validation, and test vertex split. The last column shows the number of minibatches in an epoch during model training with a batch size of 1024, including validation.
**Implementation:** We implemented5 our experimental code in C++ and Python in the DGL framework (Wang et al., 2019) with the PyTorch backend (Paszke et al., 2019). All our experiments were repeated 50 times, and averages are presented. Early stopping was used during model training runs; consequently, the variance of our convergence plots increases toward the right along the x-axis, as the number of still-ongoing runs decreases.
Footnote 5: Source code is available at [https://github.com/GT-TDAlab/dgl-coop/tree/dist_graph_squashed_wip_cache](https://github.com/GT-TDAlab/dgl-coop/tree/dist_graph_squashed_wip_cache)
We first compare how the work to process an epoch changes w.r.t. the batch size to empirically validate Theorems 3.1 and 3.2 for different graph sampling algorithms. Next, we show how the dependent batches introduced in Section 3.2 benefit GNN training. We also show the runtime benefits of cooperative minibatching compared to independent minibatching in the multi-GPU setting. Finally, we show that these two techniques are orthogonal and can be combined for multiplicative savings.
### Demonstrating monotonicity of work
We use three sampling approaches, NS, LABOR, and RW, to demonstrate that the work to process an epoch decreases as the batch size increases for the \(L=3\) layer case across these three different classes of sampling algorithms. We carried out our evaluation in two problem settings: node and edge prediction. For node prediction, a batch of training vertices is sampled with a given batch size. Then, the graph sampling algorithms described in Section 2.2 are applied to sample the neighborhood of this batch. The top row of Figure 2 shows how many input vertices are required on average to process an epoch, specifically \(\frac{E[|S^{3}|]}{|S^{0}|}\). For edge prediction, we add reverse edges to the graph, making it undirected, and sample a batch of edges. For each of these edges, a random negative edge (an edge that is not part of \(E\)) with one endpoint coinciding with the positive edge is sampled. Then, all of the endpoints of these positive and negative edges are used as seed vertices to sample their neighborhoods. The bottom row of Figure 2 shows \(E[|S^{3}|]\).
We can see that across all use cases, datasets, and sampling algorithms, the work to process an epoch is monotonically non-increasing with increasing batch size (see Appendix A.1 for the proof). We also see that the plot of the expected number of sampled vertices, \(E[|S^{3}|]\), is concave with respect to batch size (proof in Appendix A.2).
Another observation is that the concavity characteristic of \(E[|S^{3}|]\) seems to differ across sampling algorithms. In increasing order of concavity, we have RW, NS, LABOR-0, and LABOR-*. The more concave a sampling algorithm's \(E[|S^{L}|]\) curve is, the less it is affected by the neighborhood explosion phenomenon (NEP) and
Figure 2: Monotonicity of the work. x-axis shows the batch size, y-axis shows \(\frac{E[|S^{3}|]}{|S^{0}|}\) (see Theorem 3.1) for node prediction (top row) and \(E[|S^{3}|]\) (see Theorem 3.2) for edge prediction (bottom row), where \(E[|S^{3}|]\) denotes the expected number of sampled vertices in the 3rd layer and \(|S^{0}|\) denotes the batch size. RW stands for Random Walks, NS for Neighbor Sampling, and LABOR-0/* for the two different variants of the LABOR sampling algorithm described in Section 2.2.
the more savings are available through the proposed methods in Sections 3.1 and 3.2. Note that the differences would grow with a larger choice of the layer count \(L\).
### Dependent Minibatches
We vary the batch dependency parameter \(\kappa\) introduced in Section 3.2 for the LABOR-0 sampler with a batch size of \(1024\). Our expectation is that as consecutive batches become more dependent on each other, the subgraphs used during consecutive steps of training start to overlap, in which case the vertex embedding accesses become more localized. We attempted to capture this increase in temporal locality of vertex embedding accesses by implementing an LRU cache to fetch them; the cache sizes used for different datasets are given in Table 1. Note that the cache miss rate is proportional to the amount of data that needs to be copied from the vertex embedding storage. Figure 3(a) shows that as \(\kappa\) increases, the cache miss rate drops across all datasets. On reddit, this is a drop from 64% to 16%, a \(4\times\) improvement. We also observe that the improvement is monotonically increasing as a function of \(\frac{|E|}{|V|}\) given in Table 1. Figure 3 shows that training is not negatively affected across all datasets up to \(\kappa=256\), with less than \(0.1\%\) F1-score difference, after which point the validation F1-score evaluated with full neighborhoods starts to diverge from the \(\kappa=1\) case. Runtime benefits of this approach can be observed by comparing the **Cache** and **Cache, \(\kappa\)** columns in Table 2. Appendix A.5 has additional discussion about the effect of varying \(\kappa\), and the last column of Table 1 shows the number of minibatches in an epoch during training.
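The measurement just described can be reproduced with a standard LRU cache over vertex ids; this OrderedDict-based sketch (capacity and access trace are placeholders, not the experimental setup) is one way to estimate the embedding-fetch miss rate:

```python
from collections import OrderedDict

class LRUCache:
    """LRU cache over vertex ids, tracking the miss rate of embedding fetches."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def access(self, vid):
        """Fetch one vertex embedding; returns True on a cache hit."""
        if vid in self.store:
            self.store.move_to_end(vid)  # mark as most recently used
            self.hits += 1
            return True
        self.misses += 1
        self.store[vid] = True
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return False

    def miss_rate(self):
        total = self.hits + self.misses
        return self.misses / total if total else 0.0
```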
### Cooperative Minibatching
We use our largest datasets, mag240M and papers100M, as distributed training is motivated by large-scale graph datasets. We present our runtime results on systems equipped with NVIDIA GPUs: 4 and 8 A100 80 GB (NVIDIA, 2021) and 16 V100 32 GB (NVIDIA, 2020b), all with NVLink interconnects between the GPUs (600 GB/s for A100 and 300 GB/s for V100). The GPUs perform all stages of GNN training, and the CPUs are only used to launch kernels for the GPUs. Feature copies are performed by the GPUs as well, accessing pinned feature tensors over PCI-e using zero-copy access. In cooperative minibatching, both the data size and the computational cost shrink
Figure 4: LRU-cache miss rates for LABOR-0 sampling algorithm with \(1024\) batch size per PE and varying \(\kappa\) dependent minibatches, \(\kappa=\infty\) denotes infinite dependency.
Figure 3: The validation F1-score with the full neighborhoods for LABOR-0 sampling algorithm with \(1024\) batch size and varying \(\kappa\) dependent minibatches, \(\kappa=\infty\) denotes infinite dependency, meaning the neighborhood sampled for a vertex stays static during training. See Figure 3(a) for cache miss rates. See Figure 3(c) for the validation F1-score with the dependent sampler and the training loss curve.
with increasing numbers of PEs, relative to independent minibatching. We use the GCN model for papers100M and the R-GCN model (Schlichtkrull et al., 2017) for mag240M. As seen in Table 2, cooperative minibatching reduces all the runtimes for different stages of GNN training, except for the F/B (forward/backward) times on papers100M where the computational cost is not high enough to hide the overhead of communication.
If we take the numbers in the **Total** columns from Table 2 and divide the independent runtimes by the corresponding cooperative ones, we get Table 3. We can see that the theoretical decrease in work results in increasing speedups with an increasing number of PEs, due to Theorem A.1. We would like to point out that the \(\frac{E[|S^{3}|]}{|S^{0}|}\) curves in Figure 2 are responsible for these results. With \(P\)
\begin{table}
\begin{tabular}{c|c|c|c|r|r|r|r|r|r} \hline \hline
\multirow{2}{*}{**\# PEs**} & \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Sampler**} & \multirow{2}{*}{**I/C**} & \multirow{2}{*}{**Samp.**} & \multicolumn{3}{c|}{**Feature Copy**} & \multirow{2}{*}{**F/B**} & \multirow{2}{*}{**Total**} \\
 & & & & & & **Cache** & **Cache, \(\kappa\)** & & \\ \hline
\multirow{3}{*}{\(4\) A100} & \multirow{3}{*}{papers100M GCN} & \multirow{2}{*}{LABOR-0} & Indep & 21.7 & 18.4 & 16.8 & **11.2** & 8.9 & 41.8 \\
 & & & Coop & 17.7 & 14.0 & 10.1 & 5.8 & 13.0 & 36.5 \\ \cline{3-10}
 & & NS & Indep & 16.1 & 26.5 & **22.1** & - & 10.1 & 48.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Per-stage runtimes (in ms) of Independent (Indep) vs. Cooperative (Coop) Minibatching: sampling (Samp.), feature copy (without a cache, with an LRU cache, and with a cache plus \(\kappa\) dependent batches), and forward/backward (F/B). Only these rows survive in the extracted source; the remaining rows (other samplers and the mag240M, 8-GPU, and 16-GPU configurations referenced in the text) are not recoverable.
PEs and a global batch size of \(|S^{0}|\), the work performed by independent vs. cooperative minibatching can be compared by looking at \(x=\frac{1}{P}|S^{0}|\) vs. \(x=|S^{0}|\), respectively.
We also ran experiments showing that graph partitioning using METIS (Karypis and Kumar, 1998) prior to the start of training can help in scenarios where the communication overhead is significant. With such partitioning, the forward-backward time goes from 13.0 ms to 12.0 ms on papers100M with LABOR-0 on 4 NVIDIA A100 GPUs, due to the reduced communication overhead, using the same setup as Table 2.
Increasing the number of GPUs increases the advantage of cooperative minibatching compared to independent minibatching. The forward-backward time on mag240M with LABOR-0 is 200 ms (same as the independent baseline), 194, 187, and 183 ms with 1, 2, 3, and 4 cooperating PEs, respectively, measured on an NVIDIA DGX Station A100 machine. The decrease in runtime with an increasing number of cooperating PEs is due to the decrease in the redundant work they have to perform. Even though the batch size per PE is constant, the runtime goes down similarly to the plots in the top row of Figure 2, except that it follows \(\frac{kE[|S^{2}|]}{|S^{0}|}\), which gives the average number of edges in the 3rd layer when a sampler with fanout \(k\) is used.
Additionally, we demonstrate that there is no significant model convergence difference between independent vs cooperative minibatching in Appendix A.6.
#### 4.3.1 Cooperative-Dependent Minibatching
We use the same experimental setup as Section 4.3 but vary the \(\kappa\) parameter to show that cooperative minibatching can be used with dependent batches (Figure 3(b)). We use a cache size of 1M per PE. Cooperative feature loading effectively increases the global cache size, since each PE caches only the vertices assigned to it, whereas independent feature loading can have duplicate entries across caches. For our largest dataset, mag240M, on top of the \(1.4\times\) reduced work due to cooperative minibatching alone, the cache miss rates were reduced by more than \(2\times\), making the total improvement \(3\times\). Runtime results for \(\kappa\in\{1,256\}\) are presented in Table 2, in the Feature Copy **Cache** and **Cache, \(\kappa\)** columns. Table 4 summarizes these results by dividing the runtimes in **Cache** by those in **Cache, \(\kappa\)** and reporting percentage improvements.
## 5 Conclusions
In this paper, we investigated the difference between DNN and GNN minibatch training. We observed that the cost of processing a minibatch is a concave function of batch size in GNNs, unlike DNNs, where the cost scales linearly. We then presented theorems showing that this is indeed the case for every graph and proceeded to propose two approaches that take advantage of this cost concavity. The first approach, which we call cooperative minibatching, partitions a minibatch between multiple PEs and processes it cooperatively. This is in contrast to the existing practice of having independent minibatches per PE, and it avoids the duplicate work that results from vertex and edge repetition across PEs. The second approach uses consecutive dependent minibatches, through which the temporal locality of vertex and edge accesses is manipulated: as batches become more dependent, locality increases. We demonstrate this increase in locality by employing an LRU cache for vertex embeddings on GPUs. Finally, we show that these approaches can be combined without affecting convergence, speeding up multi-GPU GNN training by up to \(64\%\) for free.
\begin{table}
\begin{tabular}{c|c|r|r|r} \hline
**Dataset \& Model** & **I/C** & **4 GPUs** & **8 GPUs** & **16 GPUs** \\ \hline papers100M & Indep & \(50\%\) & \(57\%\) & \(37\%\) \\ GCN & Coop & \(74\%\) & \(78\%\) & \(112\%\) \\ \hline \hline mag240M & Indep & \(37\%\) & \(41\%\) & \(26\%\) \\ R-GCN & Coop & \(58\%\) & \(50\%\) & \(82\%\) \\ \hline \end{tabular}
\end{table}
Table 4: Runtime improvements of Dependent Minibatching for Independent and Cooperative Minibatching methods compiled from the **Cache, \(\mathbf{\kappa}\)** and **Cache** columns of Table 2 with LABOR-0. Making consecutive minibatches dependent increases temporal locality, hence reducing cache misses. This speeds up feature loading runtime. |
2303.14162 | IMA-GNN: In-Memory Acceleration of Centralized and Decentralized Graph
Neural Networks at the Edge | In this paper, we propose IMA-GNN as an In-Memory Accelerator for centralized
and decentralized Graph Neural Network inference, explore its potential in both
settings and provide a guideline for the community targeting flexible and
efficient edge computation. Leveraging IMA-GNN, we first model the computation
and communication latencies of edge devices. We then present practical case
studies on GNN-based taxi demand and supply prediction and also adopt four
large graph datasets to quantitatively compare and analyze centralized and
decentralized settings. Our cross-layer simulation results demonstrate that on
average, IMA-GNN in the centralized setting can obtain ~790x communication
speed-up compared to the decentralized GNN setting. However, the decentralized
setting performs computation ~1400x faster while reducing the power consumption
per device. This further underlines the need for a hybrid semi-decentralized
GNN approach. | Mehrdad Morsali, Mahmoud Nazzal, Abdallah Khreishah, Shaahin Angizi | 2023-03-24T17:15:43Z | http://arxiv.org/abs/2303.14162v1 | # IMA-GNN: In-Memory Acceleration of Centralized and Decentralized Graph Neural Networks at the Edge
###### Abstract.
In this paper, we propose IMA-GNN as an In-Memory Accelerator for centralized and decentralized Graph Neural Network inference, explore its potential in both settings, and provide a guideline for the community targeting flexible and efficient edge computation. Leveraging IMA-GNN, we first model the computation and communication latencies of edge devices. We then present practical case studies on GNN-based taxi demand and supply prediction and also adopt four large graph datasets to quantitatively compare and analyze centralized and decentralized settings. Our cross-layer simulation results demonstrate that on average, IMA-GNN in the centralized setting can obtain \(\sim\)790\(\times\) communication speed-up compared to the decentralized GNN setting. However, the decentralized setting performs computation \(\sim\)1400\(\times\) faster while reducing the power consumption per device. This further underlines the need for a hybrid semi-decentralized GNN approach.
Despite the ability to mitigate the computation overhead by load sharing, decentralized GNN operation faces a major bottleneck: the excessive communication overhead between nodes in different distributed devices (Beng et al., 2017). To the best of our knowledge, this work is among the first to explore and compare in-memory acceleration in both centralized and decentralized GNN settings and to offer a design guideline to the community. The main contributions of this paper are as follows: (1) we develop a PIM architecture with RRAM arrays based on a set of innovative micro-architectural designs that can be optimized and used for centralized and decentralized GNN inference for efficiency and speed-up; (2) we model the latency and power consumption of GNN accelerators implemented in centralized and decentralized settings, considering the computation and communication between edge devices; and (3) we present a solid bottom-up evaluation framework to analyze the performance of the whole system in real scenarios by adopting large graph datasets.
## 2. Proposed Ima-Gnn
### Architecture Overview
The IMA-GNN is a high-performance and energy-efficient RRAM crossbar-based accelerator developed to execute GNN's pivotal operations in both centralized and decentralized settings inspired by (Han et al., 2017). As shown in Fig. 2(a), IMA-GNN comprises three computation cores, i.e., traversal, aggregation, and feature extraction as well as peripherals such as a buffer array and a controller. The traversal core consists of resistive Content Addressable Memory (CAM) crossbars capable of search and comparison operations (Fig. 2(c)). All resistive CAM crossbars on the bottom side are connected to a shared vector generator & scheduler unit and then to a high-bandwidth bus to communicate with other cores at the top. The aggregation core includes resistive crossbars (Fig. 2(b)) to perform in-situ Matrix-Vector-Multiplication (MVM) operations for the feature aggregation. The feature extraction core is designed with a similar resistive crossbar but a different size to take care of transformation in GNN inference. The resistive MVM crossbars are connected to a shared activation unit.
### MVM & CAM Crossbars
The RRAM crossbar memory arrays are widely explored as a potential parallel engine to execute MVM operations as well as scan and search operations (Han et al., 2017). As shown in Fig. 2(b), in the MVM crossbars, the weight parameters are first stored as resistance states in each RRAM device in a 1-Transistor-1-RRAM (1T1R) structure; then, the input binary bit-strings, as the inputs to the crossbar array, are converted by the Digital-to-Analog Converter (DAC) into voltages \(V_{i}\) and applied to the Bit-Lines (\(BL\)s) in parallel. The weighted currents generated by the RRAM cells sharing a Source-Line (\(SL\)) are accumulated, resulting in an intrinsic dot-product operation. The accumulated values are then sampled by the Sample & Hold unit and converted to binary data using Analog-to-Digital Converters (ADCs). The partial-product results from each \(SL\) are further processed by the Shift & Add unit to generate the final result. As shown in Fig. 2(c), in the CAM crossbars, each Ternary CAM (TCAM) cell consists of 2-Transistor-2-RRAM (2T2R) to accomplish the XNOR search operation on each pair of cells. For this operation, \(BL\) and \(\overline{BL}\)s are driven with the search data by the Search Data Driver. Accordingly, the Sense Amplifier connected to the Match-Lines (\(ML\)SA) senses whether the row is a match or mismatch with respect to the reference connected to Vdd. In the compare operation, \(BL\)s are grounded and \(\overline{BL}\)s are connected to increasing calibrated voltages from the Least Significant Bit (LSB) to the Most Significant Bit (MSB).
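Functionally, the analog operation of the MVM crossbar reduces to a conductance-weighted dot product between the DAC input voltages and the programmed cells; a toy ideal model (our simplification, ignoring device nonidealities, ADC quantization, and the shift-and-add step) might look like:

```python
import numpy as np

def crossbar_mvm(G, v_in):
    """Idealized RRAM crossbar MVM: the current accumulated on each
    source line is the conductance-weighted sum of the bit-line voltages
    (Kirchhoff's current law), i.e. I = G^T @ V."""
    return G.T @ v_in  # accumulated current per source line

G = np.array([[1e-6, 5e-6],   # conductance states programmed per 1T1R cell
              [2e-6, 1e-6]])  # (siemens); values are arbitrary examples
v = np.array([0.2, 0.1])      # DAC output voltages applied to the BLs
print(crossbar_mvm(G, v))     # accumulated SL currents
```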
### Accelerator Dataflow
Once the edge buffers shown in Fig. 2(a) on the left have been loaded with graph data in either centralized or decentralized GNN settings, the traversal core starts processing edges. The traversal
Figure 2. (a) The proposed IMA-GNN architecture with resistive CAM traversal core, resistive MVM aggregation, and feature extraction cores, (b) Resistive MVM crossbar, (c) Resistive CAM crossbar.
core performs two essential CAM-based operations, i.e., search and compare. To maximize the data reuse of feature data in IMA-GNN, the traversal core implements an efficient node-stationary dataflow by buffering a set of node features in the buffer array and reusing it for the aggregation core. IMA-GNN leverages a Compressed Sparse Row (CSR) format (Krishnam et al., 2015) to form the Edge weight array (E), Column Index array (CI), and Row Pointer array (RP) and loads the graph data to search and scan CAMs (Fig. 2(a)). A sample graph adjacency matrix and the corresponding CSR format are shown in Fig. 3(a)-(b). Any destination node then operates as an input to the search CAM as shown in the data mapping in Fig. 3(c) and rows that match the search data are activated. Matching rows are reference inputs for comparison in the scan CAM, which determines the source nodes with edges to the destination node by comparing the input row with RP (Fig. 3(d)). Next, the vector generator & scheduler unit receives the result of scan CAM and edge data to render input control vectors for the aggregation core (Fig. 2(a)). This will activate particular rows of resistive aggregation core corresponding to incoming edges. Next, the aggregation core input buffers receive input vectors for each destination node along with the destination node. The aggregation core starts with source node features or feature dimensions across its own cluster. IMA-GNN is equipped with double buffering for feature data and graph data. This feature enables overlapping writing/programming phases and the traversal stage. Next, the updated destination node features are fed to the feature extraction core's crossbars programmed with weights. Besides, similar to (Beng et al., 2017), to maximize the crossbar utilization in the aggregation core, both the aggregation and feature extraction cores could work in parallel.
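For reference, a minimal sketch of building the CSR arrays (RP, CI, E) that the traversal core consumes; the toy edge list and helper name are illustrative:

```python
def to_csr(num_vertices, edges):
    """Build CSR arrays from (src, dst, weight) triples:
    RP[v]..RP[v+1] indexes vertex v's entries in CI (column index)
    and E (edge weight)."""
    rp = [0] * (num_vertices + 1)
    for src, _, _ in edges:
        rp[src + 1] += 1
    for v in range(num_vertices):  # prefix sum gives row pointers
        rp[v + 1] += rp[v]
    ci, e, cursor = [0] * len(edges), [0.0] * len(edges), rp[:]
    for src, dst, w in sorted(edges):
        ci[cursor[src]], e[cursor[src]] = dst, w
        cursor[src] += 1
    return rp, ci, e

rp, ci, e = to_csr(4, [(0, 1, 1.0), (0, 2, 2.0), (2, 3, 1.0)])
print(rp, ci, e)  # [0, 2, 2, 3, 3] [1, 2, 3] [1.0, 2.0, 1.0]
```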
## 3. Network Modeling
We explore both centralized and decentralized GNN landscapes to fairly model the performance metrics in both settings. Figure 4 shows a sample graph with \(N\) (\(N\in\mathbb{Z}^{+}\)) nodes (edge devices). In the centralized GNN setting, a single powerful node as the accelerator is designed with embedded traversal, aggregation, and feature extraction cores that communicate through fast inter-network links (\(L_{n}\)) (Krishnam et al., 2015) to aggregate all edge devices' information and handle the computation burdens of transformation. These cores have \(M_{1}\), \(M_{2}\), and \(M_{3}\) times larger allocated computing hardware for the traversal, aggregation, and feature extraction operations, respectively, than a single node in the decentralized mode. Thus, we assume the processing capability of the edge device in the centralized setting is \(M_{1}\), \(M_{2}\), and \(M_{3}\) times larger than the processing capability of a single node in the decentralized mode in the traversal, aggregation, and feature extraction operations, respectively. In the decentralized GNN setting, each edge device acts as an accelerator with reduced traversal and aggregation cores and, in addition to a copy of our network, has an embedded feature extraction core processing \(L\) layers. The output of the feature extraction core at each edge device is only communicated to the adjacent edge devices in a defined cluster, as shown in Fig. 4(b). Therefore, the communication between neighbors through inter-cluster links (\(L_{c}\)) (Krishnam et al., 2015) generates a communication volume as well. The bidirectional communication volume between node-\(i\) and node-\(j\) is represented as \(\epsilon_{i,j}\). Therefore, minimizing the accelerator's computation latency/power and communication latency/power is a pivotal need in the research community. We estimate the centralized and decentralized GNN accelerators' latency as:
\[T_{Net}(N)=T_{compute}(N)+T_{communicate}(N). \tag{1}\]
In an \(N\)-edge device graph shown in Fig. 4, denoting by \(t_{1}\), \(t_{2}\), and \(t_{3}\) the traversal, aggregation, and feature extraction cores' latency, respectively, the computation latency of a single node in the decentralized GNNs can be estimated by:
\[T_{compute-decentralized}=t_{1}+t_{2}+t_{3}, \tag{2}\]
where in the centralized setting, considering the processing capability of a single powerful edge device the computation latency is given by:
\[T_{compute-centralized}=(t_{1}/M_{1}+t_{2}/M_{2}+t_{3}/M_{3})\times(N-1). \tag{3}\]
In the decentralized setting, the communication latency, \(T_{communicate}\), can be given by:
\[T_{communicate-decentralized}=(t_{k}+(c_{s}\times t(L_{c})))\times 2, \tag{4}\]
where \(t_{k}\) is the required time for establishing a connection between two adjacent nodes, \(c_{s}\) denotes the number of adjacent nodes inside a cluster, \(t(L_{c})\) denotes the latency of the inter-cluster link, and number \(2\) is to model a two-way link. We assume that data communication inside each cluster is done in a sequential way, thus the number of adjacent nodes is multiplied by the latency of the inter-cluster link. For the centralized setting, we assume data transfer between the central edge device and nodes is done in a concurrent way. Therefore, the communication latency, \(T_{communicate}\)
Figure 4. Intra- and inter-edge links in a sample (a) centralized versus (b) decentralized GNN.
Figure 3. IMA-GNN’s hardware mapping and acceleration in traversal core: (a) Sample graph adjacency matrix, (b) CSR representation, (c) Search CAM operation, (d) Scan CAM operation.
for centralized inference can be given by:
\[T_{communicate-centralized}=t(L_{n}), \tag{5}\]
where \(t(L_{n})\) is the latency of the inter-network link. Suppose each edge device runs a GNN with \(X\) layers; the number of input and output neuron activations for a layer \(x\), \(1\leqslant x\leqslant X\), can be given by \(\alpha(x)\) and \(\alpha(x+1)\), respectively. The total power consumption of GNNs implemented in the proposed accelerator can be developed as:
\[P_{Net}(N)=P_{compute}(N)+P_{communicate}(N). \tag{6}\]
The first part accounts for the computation power, and the second part for the communication power of the inter-network and inter-cluster links. In the centralized setting, \(P_{compute-centralized}\) is given by \(\frac{E_{compute-centralized}}{T_{compute-centralized}}\), and \(P_{communicate-centralized}\) can be given by \(p(L_{n})\times 2\). Here, \(p(L_{n})\) denotes the power consumption of the inter-network link, and the factor \(2\) models a two-way transfer. \(E_{compute-centralized}\) is readily calculated from the energy consumption values of the traversal, aggregation, and feature extraction cores. As for the decentralized setting, \(P_{compute-decentralized}\) can be computed with respect to the corresponding energy and latency parameters. We consider \(\sum_{n=1}^{c_{s}}p_{n}(c_{s}(n)(c_{s}(n)-1))\) transactions between all accelerators inside the cluster. Considering the \(X\)-layer GNN, \(P_{communicate}\) can be expressed as follows:
\[P_{communicate-decentralized}=\frac{1}{t(L_{c})}\times\sum_{x=1}^{X-1}\alpha(x+1)\times E_{perBit}. \tag{7}\]
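Putting the latency model together, a minimal sketch of Eqs. (1)-(5); all constants in the example calls are arbitrary placeholders rather than measured values:

```python
def t_net_centralized(t1, t2, t3, M1, M2, M3, N, t_Ln):
    """Eqs. (1), (3), (5): N-1 nodes served by one accelerator whose cores
    are M1/M2/M3-times larger; communication over the inter-network link
    is assumed concurrent, so it is counted once."""
    t_compute = (t1 / M1 + t2 / M2 + t3 / M3) * (N - 1)
    return t_compute + t_Ln

def t_net_decentralized(t1, t2, t3, t_k, c_s, t_Lc):
    """Eqs. (1), (2), (4): each node computes locally and exchanges data
    sequentially with its c_s cluster neighbors over two-way links."""
    t_compute = t1 + t2 + t3
    t_communicate = (t_k + c_s * t_Lc) * 2
    return t_compute + t_communicate

# arbitrary illustrative constants (in seconds), not measured values
print(t_net_centralized(4e-8, 1.4e-4, 1.5e-5, 5, 10, 39, 10000, 3.3e-3))
print(t_net_decentralized(8e-9, 1.4e-5, 4e-7, 1e-4, 10, 1e-3))
```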
## 4. Experiments
### Evaluation Framework
To evaluate the performance of the proposed architecture, a comprehensive bottom-up evaluation framework is developed as depicted in Fig. 5. At the circuit-level, we use the SPICE model for memristors with the Ag-Si memristor device parameters from (K
The enormous sizes of transportation graphs make it challenging to apply the model in (Zhou et al., 2017) in centralized GNN inference. This limitation is further aggravated by the hetGNNs and the huge volumes of local node information. As a remedy to resolve this limitation, the authors in (Zhou et al., 2017) propose a decentralized GNN inference approach. In this approach, each taxi node has a copy of the model (hetGNN-LSTM), exchanges messages with its \(k\)-hop neighbors, and then uses the hetGNN-LSTM model to predict the demands and supplies in its surrounding region. A natural advantage of this approach is handling dynamically varying graph structures. Nevertheless, despite the promising advantages of decentralization, there is still a demanding need for reducing the overall computation and communication latency in the operation of the model.
In this experiment, for the centralized GNN setting, the overall latency (in terms of transmission delay) for sending and receiving a packet of 300 Bytes is considered 1.1 ms where the range of the network is 300 meters (Kumar et al., 2017). This latency is the average overall latency to correctly receive a packet of 300 bytes. Thus, for a packet size of 864 bytes, which is the size of our data, the overall transmission delay can be \(\sim\)3.3 ms.
As for the decentralized GNN setting, we assume that nodes in the graphs of (Zhou et al., 2017) communicate with each other using an ad-hoc wireless network that uses channel 9 (2.452 GHz) of IEEE 802.11n, where the transmission power is fixed to \(\sim\)31 dBm, and bandwidth is 20 MHz. In this configuration, source nodes feed their messages to nearby proxy (relay) nodes which forward the messages to the next nodes, and so on. Since source nodes have more computation compared to proxy (relay) nodes, they incur more delay. We leverage the proposed bottom-up evaluation framework to estimate IMA-GNN architecture performance in both centralized and decentralized settings. In our evaluation, the number of nodes (taxis) of the graph and the cluster size (\(c_{s}\)) are set as 10000 and 10, respectively. The evaluation results of the latency and power consumption are tabulated in Table 1.
In view of the results, the decentralized setting in (Zhou et al., 2017) improves the total computation latency by a factor of \(\sim\)10\(\times\). This is achieved by reducing the traversal, aggregation, and feature extraction latencies by factors of 5\(\times\), 10\(\times\), and \(\sim\)39\(\times\), respectively. Thus, a huge improvement can be observed in terms of computation latency. In terms of communication, however, the centralized setting performs much better than the decentralized setting, incurring \(\sim\)120\(\times\) lower latency. As for computation power consumption, we observe that the decentralized setting reduces the power budget per node by a factor of 18\(\times\). The aggregation core of IMA-GNN consumes the most power in both centralized and decentralized settings and also incurs the highest latency. Overall, each of the settings has its own advantage from a different point of view. In the next subsection, more graph datasets with different characteristics are studied in order to further elucidate the pros and cons of centralized and decentralized settings.
### Graph Datasets
In the second case study, four graph datasets, LiveJournal, Collab, Cora, and Citeseer (Citeseer, 2015; Zhou et al., 2017), are used to evaluate the inference latency using the proposed IMA-GNN in centralized and decentralized settings. Key graph statistics of these datasets are provided in Table 2. A given vertex is mapped deterministically to a fixed-size, uniform sample of its neighbors.
Figure 8 shows the latency for the four aforementioned graph datasets. Each bar consists of two parts: computation latency and communication latency. For each dataset, we have two bars representing the latency in the centralized (left) and decentralized (right) settings. By close observation, it can be seen that in all under-test datasets, the computation latency of the decentralized setting is less than that of the centralized setting. The difference is especially large in the LiveJournal and Collab datasets, where the graph size is much larger than in the other two datasets. In view of Fig. 8, amongst the four datasets, LiveJournal has the largest computation latency in the centralized setting because it has the largest number of nodes. In the decentralized mode, as each node is responsible for its own computation, the computation latency is independent of the total number of nodes and does not increase when the number of nodes grows. Conversely, in the centralized setting, growing the number of nodes increases the computation burden of the edge device, causing an increase in the computation latency. On average for these four datasets, the decentralized setting performs computations \(\sim\)1400\(\times\) faster. However, the communication latency, which is the dominant part of the total latency, is much higher in the decentralized setting as each node is required to establish a peer-to-peer connection and transfer data with all adjacent nodes sequentially. According to Fig. 8, Collab has the largest communication latency amongst the four
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Settings & \multicolumn{2}{c|}{Centralized} & \multicolumn{2}{c|}{Decentralized} \\ \hline Figure of merits & Latency & Power & Latency & Power \\ \hline Traversal & 38.43 ns & 10.8 mW & 7.68 ns & 0.21 mW \\ \hline Aggregation & 142.77 \(\mu\)s & 780.1 mW & 14.27 \(\mu\)s & 41.6 mW \\ \hline Feature extraction & 14.53 \(\mu\)s & 32.21 mW & 0.37 \(\mu\)s & 3.68 mW \\ \hline Computation (Net) & 157.34 \(\mu\)s & 823.11 mW & 14.6 \(\mu\)s & 45.49 mW \\ \hline Communication & 3.30 ms & - & 406 ms & - \\ \hline \end{tabular}
\end{table}
Table 1. Computation and communication latency/power of IMA-GNN accelerator.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Datasets & LiveJournal & Collab & Cora & Citeseer \\ \hline Number of Nodes & 4,847,571 & 372,475 & 2708 & 3,327 \\ \hline Number of Edges & 68,993,773 & 24,574,995 & 5429 & 4,732 \\ \hline Feature Length & 1 & 496 & 1433 & 3,703 \\ \hline Average \(C_{s}\) & 9 & 263 & 4 & 2 \\ \hline \end{tabular}
\end{table}
Table 2. Key statistics of the graph datasets used.
Figure 7. The architecture of the hetGNN-LSTM based prediction in (Zhou et al., 2017).
datasets in the decentralized setting, due to its large Average \(C_{s}\), where each node is required to communicate with a large number of adjacent nodes sequentially. However, in the centralized mode, all nodes are connected to the edge device using a fast and mature connection and can transfer data in a parallel way. In terms of communication latency, for the four under-test datasets, the centralized setting is \(\sim\)790\(\times\) faster than the decentralized setting.
In addition, we observe that the performance of the IMA-GNN architecture increases linearly with the number of resistive CAM and MVM crossbars in the decentralized setting for various datasets and saturates once the entire node feature data fits onto the crossbars. However, this comes at the cost of higher power consumption for each node.
## 5. Conclusion
While the respective benefits of centralized and decentralized GNNs are known in software implementation, there is a lack of hardware implementation analysis showing the communication and computation loads in each setting. This work undertakes this task by modeling and analyzing practical case studies on GNN-based taxi demand and supply prediction and adopting large-scale graph datasets. Our cross-layer simulation results demonstrate that our proposed platform, called IMA-GNN, in the centralized GNN setting can obtain \(\sim\)790\(\times\) communication speed-up compared to the decentralized GNN setting. However, the decentralized GNN setting performs computation \(\sim\)1400\(\times\) faster while reducing the power consumption per device. This study is conducted based on certain assumptions, as discussed. Nevertheless, the results indicate that the decentralized GNN setting achieves gains in reducing the computation latency. However, this comes at the expense of increased communication overhead and latency, which is more strongly pronounced for larger graphs. This finding confirms the necessity and the potential of balancing this communication-computation trade-off through a semi-decentralized setting (Zhou et al., 2020). In that setting, multiple edge devices are employed to decentralize the operation on a graph level while each edge device region works in a centralized fashion. Our future work will consider the hardware acceleration of the semi-decentralized GNN setting.
|
2308.13219 | Physics-informed neural networks for unsteady incompressible flows with
time-dependent moving boundaries | Physics-informed neural networks (PINNs) employed in fluid mechanics deal
primarily with stationary boundaries. This hinders the capability to address a
wide range of flow problems involving moving bodies. To this end, we propose a
novel extension, which enables PINNs to solve incompressible flows with
time-dependent moving boundaries. More specifically, we impose Dirichlet
constraints of velocity at the moving interfaces and define new loss functions
for the corresponding training points. Moreover, we refine training points for
flows around the moving boundaries for accuracy. This effectively enforces the
no-slip condition of the moving boundaries. With an initial condition, the
extended PINNs solve unsteady flow problems with time-dependent moving
boundaries and still have the flexibility to leverage partial data to
reconstruct the entire flow field. Therefore, the extended version inherits the
amalgamation of both physics and data from the original PINNs. With a series of
typical flow problems, we demonstrate the effectiveness and accuracy of the
extended PINNs. The proposed concept allows for solving inverse problems as
well, which calls for further investigations. | Yongzheng Zhu, Weizhen Kong, Jian Deng, Xin Bian | 2023-08-25T07:33:47Z | http://arxiv.org/abs/2308.13219v1 | Physics-informed neural networks for unsteady incompressible flows with time-dependent moving boundaries
###### Abstract
Physics-informed neural networks (PINNs) employed in fluid mechanics deal primarily with stationary boundaries. This hinders the capability to address a wide range of flow problems involving moving bodies. To this end, we propose a novel extension, which enables PINNs to solve incompressible flows with time-dependent moving boundaries. More specifically, we impose Dirichlet constraints of velocity at the moving interfaces and define new loss functions for the corresponding training points. Moreover, we refine training points for flows around the moving boundaries for accuracy. This effectively enforces the no-slip condition of the moving boundaries. With an initial condition, the extended PINNs solve unsteady flow problems with time-dependent moving boundaries and still have the flexibility to leverage partial data to reconstruct the entire flow field. Therefore, the extended version inherits the amalgamation of both physics and data from the original PINNs. With a series of typical flow problems, we demonstrate the effectiveness and accuracy of the extended PINNs. The proposed concept allows for solving inverse problems as well, which calls for further investigations.
keywords: physics-informed neural networks; unsteady incompressible flows; time-dependent moving boundaries.
## 1 Introduction
In recent years, machine learning has achieved enormous accomplishments, in part due to technological innovation in graphics processing units and the healthy development of software ecosystems. It finds applications in a variety of disciplines, including physics, linguistics, biology, and many others, as well as in scenarios such as image recognition, natural language processing, and autonomous driving [1]. Inspired by these developments, applied mathematicians have swiftly proposed various ingenious frameworks based on machine learning, aiming to shift the traditional paradigm of computational science and engineering [2; 3]. The novel frameworks may be data-driven, physics-informed, partial differential equations (PDEs)-regularized, in a variational form, based on optimization of a discrete loss, and so on [4; 5; 6; 7; 8]. In particular, the physics-informed neural networks (PINNs) have stimulated a wide range of research efforts [3]. The key idea behind PINNs is to incorporate the residual of PDEs into the loss function of the neural network and leverage its powerful nonlinear fitting abilities to approximate the solutions [7]. Given the impressive
performance, researchers have extended PINNs to solve various types of differential equations [9, 10, 11]. Many algorithmic aspects have also been considered to facilitate the training of PINNs and to improve the accuracy and/or convergence. These include but are not limited to adaptive activation functions [12], adaptive weights [13], gradient-enhanced approaches [14], artificial-viscosity regularized loss [15], and residual-based and/or solution-gradient-based adaptive sampling [16, 17], among others. Furthermore, PINNs are also extended to deal with multi-fidelity data and multiscale physics [18, 19]. To tackle various differential equations robustly, an open-source code known as DeepXDE has been provided as a platform for streamlined training of PINNs [20].
Most known physics laws are described by PDEs and therefore, PINNs provide a new paradigm for solving problems in various fields. Moreover, the solution strategy is universal for both forward and inverse problems, whether with partial data or without data. In fluid mechanics, Navier-Stokes (NS) equations are regarded as the fundamental PDEs. Raissi et al. [7] applied PINNs to infer an unknown coefficient of the NS equations from flow data around a stationary cylinder. Furthermore, they adjusted PINNs for flow visualization, where concentration fields were observed from artwork or medical images to reconstruct the complete flow fields [21]. Rao et al. [22] utilized PINNs as a direct numerical simulation (DNS) tool to study flow around a stationary cylinder at low Reynolds number. Jin et al. [23] evaluated the accuracy, convergence, and computational cost of PINNs on laminar flows such as flow around a cylinder and turbulent flows in pipeline using DNS dataset as benchmark. Wang et al. [24] proposed an algorithm to enable PINNs to deal with 1D two-phase Stefan problems with free boundaries. Cai et al. [25] explored PINNs for heat transfer problems, in which they utilized sparse measuring points of temperature to infer both temperature and velocity fields of the entire domain. Despite so many successful applications of PINNs in fluid mechanics, flows with moving boundaries, which are frequently encountered in nature and industry, such as flow over a moving blunt body, bird/insect hovering and fish swimming [26, 27, 28, 29, 30, 31], are rarely considered. One exception is the work of Raissi et al. [32], where they adopted a transformed coordinate attached to the cylinder to resolve the flow for vortex-induced vibration. The coordinate transformation is elegant, but it is not suitable for general moving boundaries such as multiple blunt bodies in flows. In contrast, computational fluid dynamics (CFD) methods have been employed routinely to solve flow problems with moving boundaries [33, 34, 35], although they may require complex meshing techniques such as morphing mesh [36] and overset mesh [37], or interpolation techniques such as the immersed boundary method (IBM) [38]. To bridge this gap and allow PINNs to handle flow problems with moving boundaries, we introduce new loss functions proper for training points at moving interfaces and around moving boundaries in the entire temporal domain. Therefore, we are still able to leverage the power and flexibility of the classical PINNs to infer all flow fields, whether with partial data or without any exogenous data. Interactions between fluid and a moving body may be categorized into two types: one-way coupling, where the motion of the body is prescribed and its influence on the fluid dynamics needs to be computed; and two-way coupling, where the interface between the fluid and the body is part of the solution itself and the dynamics of both parts must be computed as well. In this work, we focus on the first type, as in reality it is rather straightforward to extract the trajectories of blunt bodies, while reconstruction of the flow fields around moving bodies often remains obscure.
The rest of the paper is organized as follows. In Section 2, we review PINNs briefly and present the no-slip velocity condition for time-dependent moving bodies in detail in Section 3. Direct numerical simulation results of PINNs are validated by three examples in Section 4. In Section 5, we further reconstruct the whole flow fields with partial data in three representative cases. Discussions and conclusions are made in Section 6. Auxiliary data from CFD methods are presented in the Appendix.
## 2 Physics-informed neural networks
Consider the parametric nonlinear PDEs in a general form expressed as follows:
\[f\left(\mathbf{x};\frac{\partial s}{\partial x_{1}},\ldots,\frac{\partial s}{ \partial x_{d}};\frac{\partial^{2}s}{\partial x_{1}\partial x_{1}},\ldots, \frac{\partial^{2}s}{\partial x_{1}\partial x_{d}};\ldots;\lambda\right)=0, \quad\mathbf{x}\in\Omega, \tag{1}\]
with the boundary condition and initial condition:
\[\mathcal{B}(s,\mathbf{x})=0,\quad\mathbf{x}\in\partial\Omega, \tag{2}\] \[\mathcal{I}(s,\mathbf{x})=0,\quad\mathbf{x}\in\Gamma_{i}. \tag{3}\]
Here \(\mathbf{x}=[x_{1},x_{2},\ldots,x_{d}]\) are the independent variables, \(f\) denotes the linear or nonlinear differential operators, \(s\) is the solution, and \(\lambda=[\lambda_{1},\lambda_{2},\ldots]\) are the parameters of the PDEs. \(\Omega\) and \(\partial\Omega\) represent the computational domain and the boundaries, respectively. \(\Gamma_{i}\) stands for the space of \(\mathbf{x}\) at the initial snapshot.
Within the framework of PINNs [7], a fully connected neural network (FCNN) is exploited as a highly nonlinear function \(\hat{s}(\mathbf{x})\) to approximate the solution \(s\) of the PDEs. It is composed of an input layer, multiple hidden layers, and an output layer, where each layer contains several neurons with weights, biases, and a nonlinear activation function \(\sigma(\cdot)\). Let \(\mathbf{x}\) be the input and the implicit variable of the \(i\)th hidden layer be \(\mathbf{y}^{i}\); then a FCNN with \(L\) layers can be expressed as:
\[\begin{cases}\mathbf{y}^{0}=\mathbf{x},\\ \mathbf{y}^{i}=\sigma\left(\mathbf{W}^{i}\mathbf{y}^{i-1}+\mathbf{b}^{i}\right),\quad 1\leq i \leq L-1,\\ \mathbf{y}^{i}=\mathbf{W}^{i}\mathbf{y}^{i-1}+\mathbf{b}^{i},\quad i=L,\end{cases} \tag{4}\]
where \(\mathbf{W}^{i}\) and \(\mathbf{b}^{i}\) denote the weight matrix and the bias vector to be trained at the \(i\)th layer, respectively. \(\sigma(\cdot)\) operates element-wise.
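As an illustration of Eq. (4), a minimal NumPy sketch of the forward pass is given below; the layer sizes and the Glorot-style initialization are chosen only for the example and do not reproduce the trained networks discussed later.

```python
import numpy as np

# Minimal sketch of the FCNN in Eq. (4): L-1 hidden layers with tanh
# activation, followed by a linear output layer.

def fcnn_forward(x, weights, biases):
    """weights/biases are lists of length L; tanh is applied on all but the last layer."""
    y = x
    for W, b in zip(weights[:-1], biases[:-1]):
        y = np.tanh(W @ y + b)               # hidden layers: sigma(W y + b)
    return weights[-1] @ y + biases[-1]      # output layer: linear

# Example: inputs (x, y, t) -> outputs (u, v, p) with two hidden layers of 40 neurons.
rng = np.random.default_rng(0)
sizes = [3, 40, 40, 3]
Ws = [rng.normal(0, np.sqrt(2.0 / (m + n)), (n, m))   # Glorot-style initialization
      for m, n in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]
print(fcnn_forward(np.array([0.5, -0.2, 1.0]), Ws, bs))
```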
Since \(\hat{s}\) is the approximate solution, each component in the PDEs can be obtained by taking derivatives of \(\hat{s}\) with respect to \(x_{i}\) one or more times via automatic differentiation (AD) combined with the chain rule [39]. Subsequently, we define a composite loss function as the sum of the residuals of the equation, boundary condition, initial condition, and labeled data:
\[\mathcal{L}(\mathcal{T})=w_{f}\mathcal{L}_{f}\left(\mathcal{T}_{f}\right)+w_{ b}\mathcal{L}_{b}\left(\mathcal{T}_{b}\right)+w_{i}\mathcal{L}_{i}\left( \mathcal{T}_{i}\right)+w_{d}\mathcal{L}_{data}\left(\mathcal{T}_{data}\right), \tag{5}\]
where \(w_{f}\), \(w_{b}\), \(w_{i}\) and \(w_{d}\) are the corresponding weights and \(\mathcal{T}_{f}\), \(\mathcal{T}_{b}\), \(\mathcal{T}_{i}\) and \(\mathcal{T}_{data}\) are the corresponding sets of training points, respectively.
With the target of minimizing the loss function, the neural network is trained by optimizing the weights and biases through the back-propagation algorithm with an optimizer such as Adam [40]. The flexibility of PINNs lies in the capability of switching between supervised, weakly supervised, and unsupervised learning approaches. For a supervised or data-driven approach, the loss function is guided only by a labeled dataset, that is, \(\mathcal{T}_{data}\), and PINNs degenerate into FCNNs. For unsupervised learning, the loss function is defined by the equation, the boundary condition, and the initial condition. The training dataset \(\mathcal{T}_{f}\) corresponds to collocation points, where solutions need to be inferred, and they are constrained by the points within \(\mathcal{T}_{b}\) and \(\mathcal{T}_{i}\). More frequently, PINNs are employed with all four types of loss functions of training points.
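A schematic sketch of the composite loss in Eq. (5) is given below. The residual arrays are random stand-ins for the PDE, boundary, initial-condition, and data misfits; in practice each would be computed from the network outputs on the corresponding training-point set.

```python
import numpy as np

def composite_loss(residuals, weights):
    """Weighted sum of mean-squared residuals, one term per training-point set (Eq. (5))."""
    return sum(weights[key] * np.mean(res ** 2) for key, res in residuals.items())

# Dummy residual arrays standing in for the f, b, i, and data misfit terms.
rng = np.random.default_rng(0)
residuals = {k: rng.normal(size=100) for k in ("f", "b", "i", "data")}
weights = {"f": 1.0, "b": 1.0, "i": 1.0, "data": 1.0}
print(composite_loss(residuals, weights))
```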
## 3 PINNs for incompressible flows with time-dependent moving boundaries
### Governing equations
The NS equations are expressed as continuity and momentum equations in two dimensional coordinates \(\mathbf{x}=(x,y)\) as follows:
\[\nabla\cdot\mathbf{u} =0,\] \[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u} =-\nabla p+\frac{1}{Re}\nabla^{2}\mathbf{u}, \tag{6}\]
where \(t\) is the time, \(\mathbf{u}=(u,v)\) is the velocity vector, \(p\) is the pressure, and \(Re=UD/\nu\) is the Reynolds number defined by the reference velocity \(U\), the characteristic length \(D\), and the kinematic viscosity \(\nu\).
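As a sketch of how the residuals of Eq. (6) can be obtained by automatic differentiation, the TensorFlow snippet below uses nested gradient tapes to form the continuity and momentum residuals. The small `model` is a stand-in, not the trained network used in this paper.

```python
import tensorflow as tf

# A stand-in network mapping (x, y, t) -> (u, v, p).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(40, activation="tanh", input_shape=(3,)),
    tf.keras.layers.Dense(3),
])

def ns_residuals(model, xyt, Re):
    """Continuity and momentum residuals of Eq. (6) at collocation points xyt."""
    x, y, t = [xyt[:, i:i + 1] for i in range(3)]
    with tf.GradientTape(persistent=True) as g2:
        g2.watch([x, y])
        with tf.GradientTape(persistent=True) as g1:
            g1.watch([x, y, t])
            uvp = model(tf.concat([x, y, t], axis=1))
            u, v, p = uvp[:, 0:1], uvp[:, 1:2], uvp[:, 2:3]
        # First derivatives, recorded on the outer tape for second derivatives.
        u_x, u_y, u_t = g1.gradient(u, x), g1.gradient(u, y), g1.gradient(u, t)
        v_x, v_y, v_t = g1.gradient(v, x), g1.gradient(v, y), g1.gradient(v, t)
        p_x, p_y = g1.gradient(p, x), g1.gradient(p, y)
    u_xx, u_yy = g2.gradient(u_x, x), g2.gradient(u_y, y)
    v_xx, v_yy = g2.gradient(v_x, x), g2.gradient(v_y, y)
    continuity = u_x + v_y
    momentum_x = u_t + u * u_x + v * u_y + p_x - (u_xx + u_yy) / Re
    momentum_y = v_t + u * v_x + v * v_y + p_y - (v_xx + v_yy) / Re
    return continuity, momentum_x, momentum_y

c, mx, my = ns_residuals(model, tf.random.uniform((8, 3)), Re=125.0)
```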
If there are stationary solid or truncated boundaries, one or more constraints for velocity and/or pressure may be specified as follows:
\[\mathbf{u}=\mathbf{u}_{\Gamma}(\mathbf{x}),\quad\mathbf{x}\in\Gamma_{D}, \tag{7}\] \[p=p_{\Gamma}(\mathbf{x}),\quad\mathbf{x}\in\Gamma_{D},\] (8) \[\frac{\partial\mathbf{u}}{\partial\mathbf{n}}=0,\quad\mathbf{x}\in\Gamma_{N},\] (9) \[\frac{\partial p}{\partial\mathbf{n}}=0,\quad\mathbf{x}\in\Gamma_{N}, \tag{10}\]
where \(\mathbf{n}\) is the normal vector at the boundary; \(\Gamma_{D}\) and \(\Gamma_{N}\) denote the Dirichlet and Neumann boundaries, respectively.
### Boundary conditions of moving bodies
Firstly, we take a cylinder that travels through the fluid continuously in time as an example. Other moving solid bodies can be handled similarly. As the interface between the fluid and the solid is time-dependent, we may "excavate a tunnel" occupied by the cylinder in the spatial-temporal domain, as illustrated by Fig. 1(a). We denote the fluid domain outside of the tunnel as \(\Omega\). Accordingly, an interface \(\partial\Omega\) is
Figure 1: Schematic of a moving cylinder in spatial-temporal domain and its corresponding training points. (a) Training points are randomly distributed in \(\Omega\), which excludes the ”tunnel” occupied by the travelling cylinder with \(\partial\Omega\) as the interface. (b) Distribution of training points in \(x-y\) plane at time \(t=t_{1}\), with finer points sampled at the interface and around the cylinder. (c) Aerial view in the y-direction of all training points sampled at the moving interface.
formed as the cylinder travels through the fluid and it is given as a function of time:
\[\partial\Omega:=f(\mathbf{\delta}(t),\alpha(t),t), \tag{11}\]
where \(\mathbf{\delta}=(\zeta,\eta)\) is the spatial coordinate for the center of the cylinder. Moreover, \(\alpha\) is its rotation angle vector, which shall be useful for a general-shaped moving body. The Dirichlet velocity constraint according to the no-slip condition is as follows:
\[\mathbf{u}=\frac{\partial\mathbf{\delta}}{\partial t}+\mathbf{r}\times\frac{\partial \alpha}{\partial t},\quad(\mathbf{x},t)\in\partial\Omega, \tag{12}\]
where \((\mathbf{x},t)=(x,y,t)\) is a spatial-temporal coordinate at the interface and \(\mathbf{r}\) is the corresponding radial vector to the center of the cylinder.
Traditional CFD methods may address this relatively complex issue, for example, by regenerating the computational mesh frequently. In contrast, PINNs do not rely on time-stepping, but instead optimize the flow prediction in a continuous spatial-temporal domain via minimizing the composite loss of training points. It is simple to see that _the time-dependent moving boundary problem in the two-dimensional coordinate of space corresponds to a stationary problem in the three-dimensional coordinate of space and time._ In this way, one or more moving boundaries can be dealt with steadily. Therefore, we firstly generate sufficient training points randomly in the entire \(\Omega\), as illustrated in Fig. 1(a). Furthermore, to take into account the stiff flow variation in the vicinity of the moving boundary, we distribute finer training points around the moving boundary, as shown in Fig. 1(b). In Fig. 1(c), an aerial view of the training points sampled at the moving interface is shown along the \(y\) direction. Meanwhile, additional training points at the interface \(\partial\Omega\) are imposed with the Dirichlet velocity constraint.
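A minimal sketch of this sampling strategy is shown below: random \((x,y,t)\) points are drawn in the spatial-temporal box and those falling inside the tunnel swept by the cylinder are rejected. The trajectory `center(t)` and all numerical values are illustrative assumptions.

```python
import numpy as np

def center(t):
    """Assumed prescribed trajectory of the cylinder center, e.g. an in-line oscillation."""
    A, f = 1.0, 0.2
    return -A * np.sin(2 * np.pi * f * t), 0.0 * t

def sample_fluid_points(n, bounds, t_span, D, rng):
    """Rejection sampling of collocation points outside the moving cylinder of diameter D."""
    pts = []
    while len(pts) < n:
        x = rng.uniform(bounds[0], bounds[1], n)
        y = rng.uniform(bounds[2], bounds[3], n)
        t = rng.uniform(t_span[0], t_span[1], n)
        cx, cy = center(t)
        keep = (x - cx) ** 2 + (y - cy) ** 2 > (D / 2) ** 2   # outside the tunnel
        pts.extend(zip(x[keep], y[keep], t[keep]))
    return np.array(pts[:n])

rng = np.random.default_rng(0)
points = sample_fluid_points(100_000, bounds=(-5, 5, -5, 5), t_span=(0, 5), D=1.0, rng=rng)
```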
Theoretically, this strategy can be extended to three spatial dimensions as well, which just needs to be elevated into a four-dimensional coordinate of space and time. Next we shall introduce the technical details of PINNs' architecture and elaborate each component of the composite loss function.
### Each loss function and technical details for PINNs
Based on the principles of PINNs in Section 2, we take spatial and temporal coordinates \((x,y,t)\) as inputs and \((u,v,p)\) as outputs of the neural networks to predict the unsteady velocity and pressure fields. Fig. 2 illustrates a schematic of PINNs for solving unsteady flow problems with moving boundaries. The derivatives of \(u\), \(v\), and \(p\) with respect to the inputs are calculated using the AD. The composite loss of PINNs is repeated as follows:
\[\mathcal{L}(\theta)=w_{f}\mathcal{L}_{f}(\theta)+w_{b}^{s}\mathcal{L}_{b}^{s} (\theta)+w_{b}^{m}\mathcal{L}_{b}^{m}(\theta)+w_{i}\mathcal{L}_{i}(\theta)+w_ {data}\mathcal{L}_{data}(\theta). \tag{13}\]
The difference from Eq. (5) is that we replace \(w_{b}\mathcal{L}_{b}\) with \(w_{b}^{s}\mathcal{L}_{b}^{s}+w_{b}^{m}\mathcal{L}_{b}^{m}\) to differentiate the sub-losses for stationary and moving boundaries, respectively. Moreover, we adopt the Glorot scheme to randomly initialize all weights and biases denoted by \(\theta\) [41], which are to be optimized. The definitions for all components in
Eq. (13) are expressed in mean squared errors (MSE) as:
\[\mathcal{L}_{f} =\frac{1}{N_{f}}\sum_{(\mathbf{x},t)\in\Omega}\|\nabla\cdot\mathbf{u}\|_{2}^{2}+\frac{1}{N_{f}}\sum_{(\mathbf{x},t)\in\Omega}\bigg\|\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u}+\nabla p-\frac{1}{Re}\nabla^{2}\mathbf{u}\bigg\|_{2}^{2}, \tag{14}\] \[\mathcal{L}_{b}^{s} =\frac{1}{N_{b}^{s}}\sum_{\mathbf{x}\in\Gamma_{D}}\Big(\|\mathbf{u}-\mathbf{u}_{\Gamma}(\mathbf{x})\|_{2}^{2}+\|p-p_{\Gamma}(\mathbf{x})\|_{2}^{2}\Big)+\frac{1}{N_{b}^{s}}\sum_{\mathbf{x}\in\Gamma_{N}}\bigg(\bigg\|\frac{\partial\mathbf{u}}{\partial\mathbf{n}}\bigg\|_{2}^{2}+\bigg\|\frac{\partial p}{\partial\mathbf{n}}\bigg\|_{2}^{2}\bigg), \tag{15}\] \[\mathcal{L}_{b}^{m} =\frac{1}{N_{b}^{m}}\sum_{(\mathbf{x},t)\in\partial\Omega}\bigg\|\mathbf{u}-\frac{\partial\mathbf{\delta}}{\partial t}-\mathbf{r}\times\frac{\partial\alpha}{\partial t}\bigg\|_{2}^{2}, \tag{16}\] \[\mathcal{L}_{i} =\frac{1}{N_{i}}\sum_{(\mathbf{x},t)\in\Gamma_{i}}\Big(\|\mathbf{u}-\mathbf{u}_{i}(\mathbf{x})\|_{2}^{2}+\|p-p_{i}(\mathbf{x})\|_{2}^{2}\Big), \tag{17}\] \[\mathcal{L}_{data} =\frac{1}{N_{data}}\sum_{n=1}^{N_{data}}\Big(\big\|\mathbf{u}(\mathbf{x}_{u}^{n},t_{u}^{n})-\mathbf{u}_{data}^{n}\big\|_{2}^{2}+\big\|p(\mathbf{x}_{p}^{n},t_{p}^{n})-p_{data}^{n}\big\|_{2}^{2}\Big), \tag{18}\]
where \(N_{f}\), \(N_{b}^{s}\), \(N_{b}^{m}\), \(N_{i}\) and \(N_{data}\) denote the number of training points corresponding to each loss term. \(\Omega\), \(\Gamma_{D}/\Gamma_{N}\), \(\partial\Omega\) and \(\Gamma_{i}\) indicate the training domain, the stationary boundaries, the moving boundaries and the spatial-temporal coordinates for initial condition, respectively. Each component of the loss function is also illustrated in Fig. 2 for a clear overview.
The numbers of hidden layers and neurons in each layer of the neural network are usually selected to meet the specific needs of the problem. After trial-and-error tests on the sensitivity of the results, we employ 8 hidden layers with 40 neurons in each layer for all cases. We choose the continuous and differentiable hyperbolic tangent function _tanh_ as the activation function \(\sigma(\cdot)\) [1]. Typically we minimize the total loss
Figure 2: Schematic of PINNs for solving unsteady flow problems with moving boundaries: four types of loss functions and the one for boundary condition consists of stationary and moving parts.
function using the Adam optimizer [40] for an adaptive stochastic objective, followed by the L-BFGS optimizer [42] based on the quasi-Newton method. The Adam optimizer works with a decreasing learning-rate schedule. More specifically, we train the model with a learning rate of \(10^{-3}\) for the first 100,000 epochs, \(5\times 10^{-4}\) for the subsequent 30,000 epochs, and \(10^{-4}\) for the last 30,000 epochs. The L-BFGS optimizer follows for about 50,000 epochs to further diminish the residuals. We obtain approximate solutions when the loss function reaches a sufficiently low level. To evaluate the accuracy of the prediction, we select the relative \(L_{2}\) error as a metric:
\[\varepsilon_{V}=\frac{\|V-V^{*}\|_{2}}{\|V^{*}\|_{2}}, \tag{19}\]
where \(V\) represents the predicted solutions \((u,v,p)\), and \(V^{*}\) denotes the corresponding reference solutions.
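Eq. (19) translates directly into code; a small sketch with dummy fields is given below.

```python
import numpy as np

def relative_l2_error(V_pred, V_ref):
    """Relative L2 error of Eq. (19): ||V - V*||_2 / ||V*||_2."""
    return np.linalg.norm(V_pred - V_ref) / np.linalg.norm(V_ref)

# Example usage with dummy predicted and reference fields.
rng = np.random.default_rng(0)
u_ref = rng.normal(size=1000)
u_pred = u_ref + 0.01 * rng.normal(size=1000)
print(relative_l2_error(u_pred, u_ref))   # ~0.01
```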
All the implementations are based on the framework of DeepXDE [20], which further employs TensorFlow [43] in the backend. High-fidelity DNS data computed by OpenFOAM (FVM-based) [44] are taken as reference in Section 4 and also leveraged as partial data for flow reconstruction in Section 5. The mesh designed for each case is presented in Appendix A.
## 4 PINNs as a direct numerical simulation solver
In this section, we employ PINNs only with initial data and validate the proposed extension as a direct numerical simulation tool on three challenging flow problems involving moving boundaries.
### In-line oscillating cylinder
The interaction of an oscillating cylinder with a fluid at rest is of particular interest in fields such as marine engineering. It is a classical case of fluid-structure interaction problem and is well-documented in literature [45; 46]. The flow manifests itself by a complex vortex-structure interaction phenomena. Two key dimensionless parameters are Reynolds number \(Re\) and Keulegan-Carpenter number \(KC\), defined as:
\[Re=\frac{U_{max}D}{\nu},\quad KC=\frac{U_{max}}{fD}, \tag{20}\]
where \(U_{max}\) is the maximum velocity of the moving cylinder, \(D\) is the cylinder diameter, \(\nu\) is the kinematic viscosity of the fluid, and \(f\) is the characteristic frequency of the oscillation. The geometry of this flow is shown in Fig. 3(a). The translational motion of the cylinder is prescribed by a simple harmonic oscillation:
\[x(t)=-A\cdot\sin(2\pi f\cdot t), \tag{21}\]
where \(x\) denotes the location of the cylinder's center, and \(A\) is the amplitude of the oscillation. Thus, the Keulegan-Carpenter number can also be written as:
\[KC=\frac{2\pi A}{D}. \tag{22}\]
Referring to the experiments of Dutsch et al. [45] and the numerical simulations of Liu et al. [46], the Reynolds and Keulegan-Carpenter numbers are set to \(Re=125\) and \(KC=5\), respectively. Accordingly, \(\rho=1\), \(\nu=0.008\), \(D=1\), \(U_{max}=1\) and the oscillation period \(T=1/f=5\). The far-field velocities are subjected to Neumann velocity conditions, as illustrated in Fig. 3(a). Furthermore, a Dirichlet pressure condition is applied on the left and right sides, while a Neumann pressure condition is imposed on the top and bottom sides. To capture the cylinder oscillation accurately, we enforce the Dirichlet velocity condition
on the cylinder boundary, wherein \(u(t)=-2\pi f\cdot A\cdot\cos(2\pi f\cdot t)\) and \(v(t)=0\). As for the initial conditions, we adopt the high-fidelity velocity and pressure data obtained from the FVM at \(t^{\prime}=15\) (phase \(0^{\circ}\)) as the starting point \(t=0\) of PINNs.
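The prescribed kinematics of Eqs. (21)-(22) and the resulting Dirichlet boundary velocity can be sketched as follows, using the parameter values of this case (\(KC=5\), \(T=5\), \(D=1\)); the helper names are our own.

```python
import numpy as np

# Sketch of the prescribed in-line oscillation (Eq. (21)) and the induced
# no-slip velocity on the cylinder, u(t) = dx/dt, v(t) = 0. The amplitude A
# follows from Eq. (22): A = KC * D / (2 * pi).

D, T, KC = 1.0, 5.0, 5.0
f = 1.0 / T
A = KC * D / (2 * np.pi)

def cylinder_motion(t):
    x = -A * np.sin(2 * np.pi * f * t)                    # Eq. (21)
    u = -2 * np.pi * f * A * np.cos(2 * np.pi * f * t)    # dx/dt
    return x, u

t = np.linspace(0.0, T, 101)
x, u = cylinder_motion(t)
print(np.max(np.abs(u)))   # ~1, i.e. U_max = 2*pi*f*A = KC*f*D
```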
We first define the spatial computational domain as a rectangular region \([x_{min},x_{max}]\times[y_{min},y_{max}]=[-5,5]\times[-5,5]\). For the time domain, a single oscillation period from \(0.0\) to \(5.0\) is partitioned into two separate half-periods, \([0.0,2.5]\) and \([2.5,5.0]\), to be solved individually. Next, we excavate a tunnel formed by the oscillation of the cylinder in the spatial-temporal domain. A visual representation of the tunnel corresponding to one oscillation period is shown in Fig. 3(b). Note that the computational domain excludes the interior of the moving cylinder. Therefore, only training points obtained from locations in the domain other than the interior of the tunnel are considered. A number of \(20,120\) data points from the FVM containing the velocity and pressure data of the flow field are used as the initial conditions. We randomly sample \(100,000\) training points for the fluid, \(20,000\) training points at the cylinder, and \(10,000\) training points at the truncated rectangular boundary. We further enhance the sampling density by adding an additional \(50,000\) points of finer resolution in close proximity to the surface of the cylinder, as well as \(20,000\) finer points within a rectangular region \([-2,-1]\times[2,1]\). Here, a snapshot of the distribution of the points sampled within the temporal neighborhood \(\epsilon=0.1\) at \(t\)=1.25 is shown in Fig. 3(c). In Fig. 3(d), an aerial view of the training points sampled at the moving interface in the two half-periods is shown along the \(y\) direction. The convergence of the total loss, PDEs loss, moving boundary loss, stationary boundary loss, and initial condition
Figure 3: Problem setup and training strategy for an in-line oscillating cylinder in fluid at rest. (a) The geometry of the computational domain and boundary conditions. (b) Training points at the moving interface, around the oscillating cylinder boundary and within the computational domain in space and time. (c) Snapshots of spatial training points sampled within the temporal neighborhood \(\epsilon=0.1\) at \(t\)=1.25. The points are sampled with finer resolution in the region close to the cylinder and within the in-line oscillation range. (d) Aerial view of the training points sampled at the moving interface in the two half-period time domains from the \(y\)-direction, respectively. (e) The training losses versus the number of optimization epochs in the first (left) and second (right) half periods.
loss is depicted in Fig. 3(e), where we observe an effective reduction of the residuals by the Adam optimizer followed by the L-BFGS optimizer.
In Fig. 4, the predicted vorticity and pressure contours and isolines of the flow at four typical phases (\(0^{\circ}\), \(96^{\circ}\), \(192^{\circ}\), and \(288^{\circ}\)) by PINNs are presented, exhibiting excellent consistency with the results of FVM. This indicates that the method is capable of capturing the symmetrical vortex shedding characteristics and pressure distribution during the oscillation of the cylinder. The first row of Fig. 5 shows the velocity \(u\), \(v\), and pressure \(p\) at phase \(180^{\circ}\) predicted by PINNs, where the cylinder reaches its maximum velocity. These results are compared to those of FVM in the second row, and the point-wise absolute errors are presented in the third row of Fig. 5, where the errors are well below 5% overall. For selected time instants, we present relative \(L_{2}\) errors for the whole field in Fig. 6. We note that the grid nodes in the FVM
Figure 4: Results of PINNs and FVM for vorticity and pressure contours, and isolines in several typical phases. Four distinct phases and the corresponding time: (a) \(0^{\circ}\) (\(t\)=0); (b) \(94^{\circ}\) (\(t\)=1.3); (c) \(194^{\circ}\) (\(t\)=2.7); (d) \(288^{\circ}\) (\(t\)=4). Solid and dotted lines denote positive and negative contours, respectively.
are employed as the coordinate points for PINNs' predictions and validations.
The velocity distributions in the \(y\)-directional cross-section are further shown in Fig. 7 for \(u\) and \(v\) in three phases (180\({}^{\circ}\), 210\({}^{\circ}\), and 330\({}^{\circ}\)) of the oscillating cylinder, and compared with FVM results. The comparison demonstrates that the proposed model captures the finer flow distribution with great accuracy.
### Multiple cylinders translating along a circle
We further consider the translational motion of multiple bodies in fluid to evaluate the capability and universality of the proposed extension of PINNs. The key objective is to examine the relation between
Figure 5: The comparison between PINNs and FVM for the velocity \(u\), \(v\) and pressure \(p\) at phase 180\({}^{\circ}\), where the velocity of the cylinder is maximum. The point-wise absolute errors are depicted in the bottom panels.
Figure 6: Relative \(L_{2}\) errors of PINNs for the in-line oscillating cylinder in fluid at rest: the predicted velocity \(u\), \(v\) and pressure \(p\).
the number of moving boundaries and the accuracy of model training in the simulation of multiple moving bodies. Inspired by the numerical simulations of Xu et al. [47], we consider 2, 3 and 4 cylinders moving clockwise in fluid along a circle of radius \(R\) as an orbit. These cylinders start from rest and accelerate sinusoidally to a steady maximum velocity. The geometries of the setups, featuring 2, 3, and 4 cylinders at the maximum velocity, are presented in Fig. 8(a).
The key dimensionless parameter in this flow is the Reynolds number \(Re\), defined as:
\[Re=\frac{U_{max}D}{\nu}, \tag{23}\]
where \(U_{max}\) is the maximum velocity of the moving cylinders, \(D\) is the diameter of the cylinders, and \(\nu\) is the kinematic viscosity of the fluid. The motion of the cylinders is given by:
\[\begin{cases}x=R\cdot\cos(\theta(t)),\\ y=R\cdot\sin(\theta(t)),\end{cases} \tag{24}\]
where \(\theta(t)\) represents the angle of the center of the cylinder in a polar coordinate, which can be expressed
Figure 7: PINNs results for velocity contours and selected profiles for \(u\) and \(v\) along the \(y\)-direction are compared with those of FVM at three typical phases of the in-line moving cylinder. Solid and dotted lines denote PINNs and FVM results along four profiles indicated with colors: red: \(x=-0.6D\); green: \(x=0D\); blue: \(x=0.6D\); orange: \(x=1.2D\).
as:
\[\theta(t)=\begin{cases}\varphi+\dfrac{A}{R}\cdot\cos(2\pi f\cdot t), \quad 0\leq t\leq 2.5,\\ \varphi-\dfrac{U_{\max}}{R}\cdot(t-2.5),\quad t\geq 2.5,\end{cases} \tag{25}\]
where \(A\) and \(f\) represent the amplitude and frequency of the cosine acceleration, respectively, and \(R\) is the radius of the circle along which the cylinders are translating. The variable \(\varphi\) represents the phase angle, given in polar coordinates, at which the cylinders reach their maximum linear velocity. For the two-cylinder system, the phase angles of cylinders A and B are \(\varphi_{A}=\pi/2\) and \(\varphi_{B}=-\pi/2\), respectively; for the three-cylinder system, the phase angles of cylinders A, B and C are \(\varphi_{A}=\pi/2\), \(\varphi_{B}=-\pi/6\) and \(\varphi_{C}=-5\pi/6\), respectively; for the four-cylinder system, the phase angles of cylinders A, B, C and D are \(\varphi_{A}=\pi/2\), \(\varphi_{B}=0\), \(\varphi_{C}=-\pi/2\) and \(\varphi_{D}=-\pi\), respectively.
In all three systems, the Reynolds number is set to \(Re=100\). The parameters are set as follows: \(\rho=1\), \(\nu=0.01\), \(D=1\), \(R=2\), \(U_{max}=1\), and the acceleration parameters \(A=1.592\), \(f=0.1\). The far-field boundaries are subjected to Neumann velocity conditions and a constant pressure condition \(p=1\) on all four sides, as illustrated in Fig. 8(a). To accurately describe the motion of the cylinders, Dirichlet velocity conditions are imposed on each boundary of the cylinders, where \(u(t)=-R\cdot\sin(\theta(t))\cdot\frac{d\theta(t)}{dt}\) and \(v(t)=R\cdot\cos(\theta(t))\cdot\frac{d\theta(t)}{dt}\). The initial condition at \(t=0\) for these systems is that both the fluid and the cylinders are stationary, i.e., \(u=v=p=0\).
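As a sketch, the piecewise kinematics of Eq. (25) and the resulting Dirichlet boundary velocities (obtained by differentiating Eq. (24)) can be coded as below with the parameters of this case; the function names are our own. The printed value also verifies that \(d\theta/dt\) is continuous at the stage switch, approaching \(-U_{max}/R=-0.5\).

```python
import numpy as np

# Piecewise angle theta(t) of Eq. (25) and its rate, for a scalar time t,
# followed by the boundary velocities u = -R sin(theta) dtheta/dt,
# v = R cos(theta) dtheta/dt used as Dirichlet constraints.

R, U_max, A, f = 2.0, 1.0, 1.592, 0.1

def theta_and_rate(t, phi):
    if t <= 2.5:   # cosine acceleration stage
        theta = phi + (A / R) * np.cos(2 * np.pi * f * t)
        dtheta = -(A / R) * 2 * np.pi * f * np.sin(2 * np.pi * f * t)
    else:          # steady translation stage
        theta = phi - (U_max / R) * (t - 2.5)
        dtheta = -U_max / R
    return theta, dtheta

def boundary_velocity(t, phi):
    theta, dtheta = theta_and_rate(t, phi)
    return -R * np.sin(theta) * dtheta, R * np.cos(theta) * dtheta

# Continuity check at the stage switch: dtheta/dt approaches -U_max/R = -0.5.
print(theta_and_rate(2.5, np.pi / 2)[1])   # ~ -0.5
```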
Figure 8: Problem setup and training points for multiple cylinders translating along a circle. (a) The geometry of the computational domain and boundary conditions. The left, middle, and right panels depict the geometric configurations of 2, 3, and 4 cylinders, respectively. (b) Scattered training points at four moving interfaces, around four moving boundaries, and in the fluid domain in space and time. (c) Snapshots of spatial training points sampled within a temporal neighborhood of \(\epsilon=0.2\) at \(t=0\). (d) Aerial view of the training points sampled at the interfaces of four moving cylinders from the \(y\) direction.
The spatial computational domain is defined as a rectangular region \([-4,4]\times[-4,4]\). The time interval [0, 5] is partitioned into two stages: [0, 2.5] for acceleration of the cylinders, and [2.5, 5] for maintaining maximum velocity through cylinder translation. The two stages are treated as separate intervals and solved independently. Similarly, we excavate the tunnels that are formed as a result of the movements of multiple cylinders throughout the space-time domain. A visual representation of these moving boundary tunnels for the four-cylinder system is displayed in Fig. 8(b). We randomly sample 70,000 points in the fluid domain, 10,000 initial points, 10,000 rectangular boundary points, and 15,000 points at each cylinder interface. We enhance the sampling density by adding an additional 40,000 points of higher resolution in close proximity to the boundary surface of each cylinder. In addition, 50,000 points are sampled within the orbit region \((R-D/2<r<R+D/2)\) of the cylinder motion. We illustrate the spatial distribution of the training points within the temporal neighborhood \(\epsilon=0.2\) at \(t\)=0 for a system composed of four cylinders, as shown in Fig. 8(c). Additionally, an aerial view of the training points sampled at the interfaces of four moving cylinders from the \(y\) direction is depicted in Fig. 8(d).
Fig. 9 illustrates the convergence of the total loss during training for the three multi-cylinder systems, as well as the individual components of the loss, namely the PDEs loss, moving boundary loss, stationary boundary loss, and initial condition loss. The predicted vorticity and pressure contours of the flow for the three multi-cylinder systems are presented in Fig. 10. The results demonstrate remarkable consistency with the FVM simulations, thereby confirming PINNs' effectiveness in solving multi-body flow problems. Fig. 11 provides the point-wise relative \(L_{2}\) errors of the predicted velocity and pressure solutions in the three multi-cylinder systems. It should be noted that the grid nodes in the FVM have been selected as the coordinate points for PINNs' validations. The results from the relative errors show that the model is able to solve the flow
Figure 9: The training losses versus the number of optimization epochs for (a) two-cylinder system, (b) three-cylinder system, and (c) four-cylinder system. The training losses for the acceleration and steady velocity stages are depicted in the top and bottom panels, separately.
problem containing multiple rigid bodies quite accurately. As the number of moving bodies increases, there is a slight increase of the prediction error on average. Nevertheless, the magnitude of the error remains within an acceptable range. We postulate that this error increases due to the increased complexity of the flow as a result of hydrodynamic interactions from multiple moving objects. To address this, it may become imperative to supplement PINNs with additional training points and/or explore a new network architecture to capture finer details. The primary objective of this study is to examine the feasibility of the new extension of PINNs to solve multi-body flow problems. Further research endeavors may potentially verify these hypotheses.
### Flow around a flapping wing
We study a flapping wing that undergoes both translation and rotation. This motion pattern of the wing is similar to that of insect or bird wings during hovering. Here, we provide only the initial condition data of the flow field and predict the entire flow field by PINNs. We consider a rigid wing with an elliptical shape, characterized by a chord length \(C\) and an aspect ratio \(E\). Fig. 12(a) depicts a sinusoidal motion of the wing cross-section in the chord direction, which is prescribed as follows:
\[\left\{\begin{aligned} x(t)&=\frac{A}{2}\left[\cos \left(\frac{2\pi t}{T}\right)+1\right]\cos\beta,\\ y(t)&=\frac{A}{2}\left[\cos\left(\frac{2\pi t}{T} \right)+1\right]\sin\beta,\\ \alpha(t)&=\alpha_{0}\left[1-\sin\left(\frac{2\pi t }{T}+\varphi\right)\right].\end{aligned}\right. \tag{26}\]
Figure 11: Relative \(L_{2}\) errors of velocity \(u\), \(v\) and pressure \(p\) by PINNs for multiple cylinders translating along a circle.
Figure 10: PINNs-predicted and FVM results for vorticity and pressure contours in three multi-cylinder flow problems.
Here \(x\) and \(y\) denote the coordinates of the wing's center, \(\alpha\) represents the attack angle, \(T\) is the flapping period, \(A\) is the flapping displacement, \(\beta\) is the angle of inclination of the stroke plane, and \(\varphi\) is the phase angle. The velocity amplitude of the flapping wing motion is, therefore, \(U_{max}=\pi\cdot A/T\). The key dimensionless parameter in this flow is the Reynolds number \(Re\) defined as
\[Re=\frac{U_{max}C}{\nu}=\frac{\pi AC}{\nu T}, \tag{27}\]
where \(\nu\) is the kinematic viscosity of the fluid.
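The wing kinematics of Eq. (26) and the no-slip surface velocity from Eq. (12) can be sketched as follows; the parameter values follow the setup of this section (\(A=2.5C\) with \(C=1\), \(T=5\), \(\alpha_{0}=\pi/4\), \(\beta=\pi/3\), \(\varphi=0\)), while the function names and the sample surface point are illustrative assumptions.

```python
import numpy as np

# Eq. (26): center position (x, y) and attack angle alpha of the wing, plus
# their time derivatives. Eq. (12) then gives the no-slip surface velocity
# u_surface = d(delta)/dt + omega x r, with omega = d(alpha)/dt.

A_f, T, beta, alpha0, phi = 2.5, 5.0, np.pi / 3, np.pi / 4, 0.0

def wing_state(t):
    s = 0.5 * A_f * (np.cos(2 * np.pi * t / T) + 1)           # stroke displacement
    ds = -0.5 * A_f * (2 * np.pi / T) * np.sin(2 * np.pi * t / T)
    x, y = s * np.cos(beta), s * np.sin(beta)                  # Eq. (26), translation
    dx, dy = ds * np.cos(beta), ds * np.sin(beta)
    alpha = alpha0 * (1 - np.sin(2 * np.pi * t / T + phi))     # Eq. (26), rotation
    omega = -alpha0 * (2 * np.pi / T) * np.cos(2 * np.pi * t / T + phi)
    return (x, y), (dx, dy), alpha, omega

def surface_velocity(t, r):
    """Velocity of a surface point at radial vector r = (rx, ry) from the wing center."""
    (_, _), (dx, dy), _, omega = wing_state(t)
    rx, ry = r
    return dx - omega * ry, dy + omega * rx    # 2D cross product: omega x r

print(surface_velocity(20.2, (0.5, 0.0)))      # sample point on the chord line
```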
The Reynolds number is selected as \(Re\)=157, following the simulation undertaken by Wang et al. [26]. The various simulation parameters are set as: \(C=1\), \(E=4\), \(A=2.5C\), \(\alpha_{0}=\pi/4\), \(\beta=\pi/3\), \(\varphi=0\), \(T=5\), \(\rho=1\), and \(\nu=0.01\). The spatial computational domain is defined as a rectangular region \([-3,4]\times[-4,4]\). We employ Neumann velocity conditions and a constant pressure condition \(p=1\) at all far-field boundaries, as illustrated in Fig. 12(a). To precisely describe the wing's motion, the boundary of the wing is subject to Dirichlet velocity conditions, which can be computed using Eq. (12). As for the initial conditions, we adopt the high-fidelity velocity and pressure data obtained from the FVM at \(t^{\prime}=20\) as the starting point \(t=20\) of PINNs, and we solve the flow within the time domain T = [20.0, 20.6]. We use the time sequence scheme with a subdomain size of \(\Delta T=0.2\). Fig. 12(b) displays a visual representation of the tunnel of a wing in space and time. 100,000 training points are randomly sampled in the domain composed of a rectangular region of width \(7C\) and height \(8C\) and the time subdomain. 30,000 and 20,000 points are
Figure 12: Problem setup and training strategy of flow around a flapping wing. (a) The geometry of the computational domain and boundary conditions. The solid ellipses indicate the down-stroke phase while the dashed ellipses represent the upstroke phase. (b) Visual representation of scattered training points at interface, around boundary and in the fluid domain. (c) Snapshots of spatial training points sampled within the temporal neighborhood \(\epsilon=0.04\) at \(t\)=20. The training points are refined with higher resolution in the region close to the wing and within the flapping range. (d) Aerial view of all the points sampled at the wing surface from the \(x\) (left) and \(y\) (right) directions. (e) The training losses versus the number of optimization epochs.
sampled on the wing surface and rectangular boundaries, respectively. 30,000 finer sampled points are added to a rectangular region \([-1.3,1.3]\times[2.5,3.5]\). In addition, 80,000 finer sampled points are added to a circular region with a radius of \(1.5C\) centered on the wing. A total of 8,967 data points from the FVM are used as initial conditions. Fig. 12(c) presents a snapshot of the distribution of sampled points, except for initial data points, within the temporal neighborhood \(\epsilon=0.04\) at \(t\)=20.0. We additionally present, in Fig. 12(d), the aerial views of the training points sampled at the wing surface, viewed from the \(x\) and \(y\) directions. The convergence of various training losses is depicted in Fig. 12(e).
Fig. 13 illustrates PINNs' results of velocity \(u\), \(v\) and pressure \(p\) contours in three time frames \(t=20.2\), 20.4, 20.6 within a flapping period. Fig. 14 presents the snapshots of the vorticity contours from \(t=20.0\) to 20.6. The model captures the typical and subtle variations in the vortex structure around the wing as it rotates and translates. Fig. 15 provides the point-wise relative \(L_{2}\) errors at each time from \(t=20.0\) to 20.6. The comparison between the predictions by PINNs and results obtained by the FVM demonstrates the capability of the former to accurately predict changes in the flow field based on the boundary conditions of the flapping wing motion.
Fig. 13: PINNs’ results for flow around a flapping wing: velocity \(u\), \(v\) and pressure \(p\) contours at several time frames: (a) \(t\)=20.2; (b) \(t\)=20.4; (c) \(t\)=20.6.
## 5 Reconstructing the flow fields via partial data
In this section, we evaluate the capability of the proposed extension in reconstructing the entire flow field when only partial data is available. This is one of the most appealing properties of PINNs, which is unavailable in other computational frameworks.
### Transversely oscillating cylinder in steady flow
We consider a transversely oscillating cylinder immersed in a steady flow. Here, the asymmetric vortex shedding makes the flow more complex than in the preceding case without a free stream, presenting an additional challenge for PINNs. The two key dimensionless parameters are the Reynolds number \(Re\) and the Strouhal number \(St\), defined as:
\[Re=\frac{U_{\infty}D}{\nu},\quad St=\frac{fD}{U_{\infty}}, \tag{28}\]
where \(U_{\infty}\) is the velocity of uniform stream, \(D\) is the diameter of the cylinder, \(\nu\) is the kinematic viscosity of the fluid, and \(f\) is the vortex shedding frequency. The geometry is illustrated in Fig. 16(a), where setups
Figure 14: PINNs’ results for flow around a flapping wing: vorticity contours at several time frames: (a) \(t\)=20.1; (b) \(t\)=20.2; (c) \(t\)=20.3; (d) \(t\)=20.4; (e) \(t\)=20.5; (f) \(t\)=20.6.
Figure 15: Relative \(L_{2}\) error of velocity \(u\), \(v\) and pressure \(p\) by PINNs for flow around a flapping wing.
for inlet, outlet, and far-field boundaries are also given. A prescribed motion of the cylinder is expressed as:
\[y(t)=-A\cdot\cos(2\pi f_{c}\cdot t), \tag{29}\]
where \(y\) denotes the cross stream location of the cylinder's center, \(A\) and \(f_{c}\) are the amplitude and characteristic frequency of the oscillation, respectively.
The Reynolds number is set to \(Re=185\), consistent with the numerical simulations reported by Guilmineau et al. [48]. Accordingly, the simulation parameters are set as follows: \(D=1\), the fluid density \(\rho=1\), \(\nu=0.01\), \(U_{\infty}=1.85\), the oscillation amplitude \(A/D=0.2\), and the frequency ratio \(f_{c}/f_{0}=0.8\), where \(f_{0}=St\cdot U_{\infty}/D\) is the vortex shedding frequency of the stationary cylinder at \(St=0.195\). Neumann velocity and pressure boundary conditions are used at the far-field boundaries on the top and bottom sides. At the inlet boundary, a steady horizontal free stream is imposed with the Dirichlet velocity boundary condition \(u=U_{\infty}\) and \(v=0\). A zero pressure condition and Neumann velocity conditions are applied at the outlet. Dirichlet velocity conditions are imposed on the cylinder boundary to capture the no-slip condition due to oscillation. Specifically, we set \(u(t)=0\) and \(v(t)=2\pi f_{c}\cdot A\cdot\sin(2\pi f_{c}\cdot t)\). We take the velocity and pressure data on 21,535 points obtained from the FVM at \(t^{\prime}=2.0\) as the initial conditions to train the model from \(t=2.0\).
We first define the computational spatial domain, i.e., a rectangular region \([-5,10]\times[-5,5]\). As vortex shedding takes time to develop, we adapt a time sequence training scheme in which the time domain [2, 10]
Figure 16: Problem setup and training strategy of a transversely oscillating cylinder in steady flow. (a) The geometry of the computational domain and boundary conditions. (b) Visual representation of scattered training points around the oscillating cylinder boundary and in the fluid domain in space and time. (c) Snapshots of spatial training points sampled within the temporal neighborhood \(\epsilon=0.07\) at \(t\)=0.86. The training points are sampled with higher resolution in the vicinity of the cylinder and in the wake region where vortex shedding is prominent. (d) Aerial view of the training points sampled at the moving cylinder surface in the time domain [2, 10] from the \(x\)-direction. (e) The training losses versus the number of optimization epochs in two randomly selected time domains, [2, 3] and [5, 6], respectively.
is discretized into \(n\) domains as: [\(T_{0}=2\), \(T_{1}\)], [\(T_{1}\), \(T_{2}\)], \(\cdots\), [\(T_{n-1}\), \(T_{n}=10\)], to be solved individually. We set the range of each time sub-domain to \(\Delta T=2\). Next, we excavate the tunnel formed by the oscillation of the cylinder in the whole spatial-temporal domain. A visual representation of the tunnel corresponding to about two oscillation periods (one period is 3.465) is shown in Fig. 16(b). For each sub-domain [\(T_{n-1}\), \(T_{n}\)], a total of 452,235 FVM data points, corresponding to 21 time snapshots, scatter in space and time, with a time interval of \(\Delta t\) = 0.1 between two consecutive snapshots. Among these FVM data points, a total of 150,000 data points are randomly selected to reconstruct the entire field. In each sub-domain [\(T_{n-1}\), \(T_{n}\)], we randomly sample 50,000 domain points, 10,000 cylinder boundary points, and 10,000 rectangular boundary points. Moreover, we additionally sample 30,000 finer points in the vicinity of the cylinder and 30,000 points in the wake region where vortex shedding is prominent. Fig. 16(c) displays a snapshot of the distribution of sampled points from the above-mentioned processes within the temporal neighborhood \(\epsilon\) = 0.07 at \(t\)=4.32. Furthermore, we provide an aerial perspective of the training points sampled on the moving cylinder surface in \(t\)=[2, 10], viewed from the \(x\) direction, as shown in Fig. 16(d).
Fig. 16(e) illustrates the convergence of the total loss and its individual components during training, including the PDEs loss, moving boundary loss, stationary boundary loss, and initial boundary loss. Fig. 17 shows the predicted velocity and pressure at time instances \(t\)=1.0, 4.0, and 7.0. Additionally, Fig. 18 illustrates the vorticity predictions of the trained model for the time period of t=[0, 7] and compared with
Figure 17: PINNs-predicted and FVM results for velocity \(u\), \(v\) and pressure \(p\) contours for several time frames. (a) \(t\)=1.0; (b) \(t\)=4.0; (c) \(t\)=7.0.
the FVM results. The comparison of vorticity indicates that the model successfully reproduces the vortex shedding phenomenon in alignment with the FVM calculation. Despite the slightly larger dissipation in the wake region, the predicted solution accurately and comprehensively captures the process of vortex shedding from the cylinder and its development in the wake region. Fig. 19 provides the relative \(L_{2}\) errors of the inferred velocity and pressure solutions at each time instant. The relative \(L_{2}\) errors show that errors accumulate rapidly over time in the sequential predictions. Offering initial conditions with high precision may mitigate errors to a certain extent in the early stages of the simulation, but these errors surge rapidly in the subsequent stages. The velocity distributions in the \(x\)-directional cross-section are further shown in Fig. 20 for \(u\) and \(v\) in three time frames (\(t\) = 3.0, 5.0, and 7.0) of the oscillating cylinder, and compared with FVM results. The comparison shows that the proposed model accurately captures finer flow distributions. Overall, our strategy for PINNs is capable of reproducing the phenomenon of vortex shedding with an acceptable margin of error.
Figure 19: Relative \(L_{2}\) errors of PINNs for the transversely oscillating cylinder in steady flow: the inferred velocity \(u\), \(v\) and pressure \(p\).
Figure 18: Results of PINNs and FVM for vorticity contours in several time frames. (a) \(t\)=0.1; (b) \(t\)=1.0; (c) \(t\)=2.0; (d) \(t\)=3.0; (e) \(t\)=4.0; (f) \(t\)=5.0; (g) \(t\)=6.0; (h) \(t\)=7.0. A significant vortex shedding phenomenon is observed.
### Flow around a flapping wing
In this case, we still consider a flapping wing that undergoes both translational and rotational motions. We solve this problem to demonstrate the flexibility of our proposed framework in the moving boundary scenario, that is, to reconstruct the entire flow field (velocity \(v\) and pressure \(p\)) based on partial data (velocity \(u\)) over the spatial-temporal domain of the flow field.
Here, we employ the velocity \(u\) data at the grid points of the FVM as a Dirichlet constraint to train the neural network. Meanwhile, we take the velocity and pressure data on \(8,967\) points obtained from the FVM at \(t^{\prime}=20.0\) as the initial conditions to train the model from \(t=20.0\). A single neural network is used to reconstruct the flow field in the time domain [20.0, 20.4]. A total of 367,738 FVM data points, corresponding to 41 time snapshots, scatter in space and time, with a time interval of \(\Delta t\) = 0.01 between two consecutive snapshots. Among these FVM data points, a total of 120,000 data points are randomly selected to reconstruct the entire field. In addition, 50,000 collocation points are randomly sampled in the domain to complement the resolution of the computational domain. 25,000 and 10,000 points are sampled on the wing surface and rectangular boundaries, respectively. 40,000 finer points are sampled within a circular region with a radius of \(1.5C\) centered on the wing. 10,000 finer points are collected within a rectangular region \([-1.3,1.3]\times[2.5,3.5]\). Fig. 21(a) depicts a snapshot at \(t\)=20.0 demonstrating
Figure 20: The PINNs-predicted velocity contours and the velocity \(u\), \(v\) distribution in the \(y\)-direction for several frames of the cylinder, and are compared with the FVM results. Solid and dotted lines denote PINNs and FVM results in four profiles(indicated with colors: red: \(y=-0.6D\); green: \(y=0D\); blue: \(y=0.6D\); orange: \(y=1.2D\)).
the distribution of data points and sampled collocation points.
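The space-time sampling strategy above can be sketched compactly by treating time as an extra coordinate; in the following minimal sketch, the wing trajectory `center_of`, the domain bounds, and the (reduced) point counts are illustrative assumptions, not the exact values used here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_box(n, xlim, ylim, tlim):
    """Uniformly sample n collocation points in a space-time box."""
    return np.stack([rng.uniform(*xlim, n),
                     rng.uniform(*ylim, n),
                     rng.uniform(*tlim, n)], axis=1)

def sample_disc(n, center_of, radius, tlim):
    """Sample n finer points inside a disc that follows the wing center,
    whose trajectory is prescribed and hence known at any time t."""
    t = rng.uniform(*tlim, n)
    r = radius * np.sqrt(rng.uniform(0.0, 1.0, n))  # uniform over the disc
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    cx, cy = center_of(t)
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi), t], axis=1)

center_of = lambda t: (np.cos(t), np.sin(t))        # placeholder trajectory
bulk = sample_box(5_000, (-4.0, 4.0), (-4.0, 4.0), (20.0, 20.4))
near_wing = sample_disc(4_000, center_of, radius=1.5, tlim=(20.0, 20.4))
points = np.concatenate([bulk, near_wing])          # (x, y, t) training points
```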
The convergence of the various training losses is shown in Fig. 21(b). Fig. 22 shows snapshots of the inferred velocity \(v\) and pressure \(p\) contours at two time frames, \(t=20.2\) and \(20.4\). Fig. 23 provides the point-wise relative \(L_{2}\) errors at each time from \(t=20.0\) to \(20.4\). Evidently, providing partial data over the entire flow field significantly reduces the relative error in velocity, and especially in pressure, compared to providing data only at the initial conditions. This demonstrates the flexibility of our proposed strategy in moving-boundary scenarios, which can rely on different types or amounts of data for the prediction and reconstruction of the entire flow field.
Figure 21: Training strategy for the reconstruction of the flow field around a flapping wing. (a) Snapshots of data points randomly selected and training points sampled within the temporal neighborhood \(\epsilon=0.02\) at \(t\)=20. (b) The training losses versus the number of optimization epochs.
Figure 22: Flow around a flapping wing: inferred velocity \(v\) and pressure \(p\) contours at two time frames: (a) \(t\)=20.2; (b) \(t\)=20.4.
### A cylinder settling under gravity
In the last case, we simulate the free settling of a cylinder under gravity. This instance can be classified as a two-way fluid-structure interaction problem. Here, the settling trajectory of the cylinder is extracted from the FVM results, so the problem degenerates into a flow problem with prescribed motion that can be solved using our proposed strategy.
The objective is to employ partial flow field information for the reconstruction of the complete flow
Figure 23: Relative \(L_{2}\) errors of PINNs for flow around a flapping wing: the inferred velocity \(u\), \(v\) and pressure \(p\).
Figure 24: Problem setup and training strategy of a cylinder settling under gravity. (a) The geometry of the computational domain and boundary conditions. (b) Visual representation of scattered training points around the moving boundary and grid points in space and time. (c) Snapshots of the distribution of data points at \(t=4.4\). The spatial resolution of the points is gradually enhanced in the proximity of the cylinder region. (d) Aerial view of all the points sampled at the moving cylinder surface from the \(x\) (left) and \(y\) (right) directions. (e) The training losses versus the number of optimization epochs.
field, with the pressure \(p\) inferred from the known velocities \(u\) and \(v\) obtained via the FVM. In the present study, the computational domain is defined by extracting a portion of the whole region in which the cylinder settles. The geometric description and boundary conditions of the domain are provided in Fig. 24(a). The simulation parameters are set as follows: the gravity \(g=1\), the cylinder diameter \(D=0.2\), the fluid density \(\rho=1\), and the kinematic viscosity \(\nu=0.001\). In this computational region, the maximum velocity of the settling cylinder is approximately \(U_{max}=1.79\), resulting in a Reynolds number of \(Re=U_{max}D/\nu=358\) for this problem. The Neumann velocity conditions and the zero pressure condition are imposed on the left and right sides of the far-field boundaries, as depicted in Fig. 24(a). We impose Dirichlet velocity conditions on the boundary of the cylinder, where the velocities \(u\) and \(v\) are determined based on the positions \(x\) and \(y\) of the cylinder. Notably, we employ the velocities \(u\) and \(v\) at the grid points of the FVM as labeled data to train the neural network.
The cylinder settles from rest at \(y=0\). The spatial domain selected for the simulation is the rectangular region \([-1,1]\times[-6,-3]\), and the time domain is [4.0, 4.4]. The location and velocity of the settling cylinder are obtained by fitting a Fourier series to the cylinder trajectory extracted from the FVM results (a sketch of such a fit is given below). We excavate the tunnel formed by the settling cylinder in the whole spatial-temporal domain. A visual representation of this tunnel corresponding to the settling trajectory is shown in Fig. 24(b). A total of 229,433 data points, corresponding to 21 time snapshots and scattered in space and time, are utilized to infer the pressure field. The time interval between two consecutive snapshots is \(\Delta t=0.02\). Fig. 24(c) depicts a snapshot at \(t\)=4.4 demonstrating the distribution of these data points. In addition to the 1,575 sparse data points at the cylinder boundary across the 21 time snapshots, we randomly sample an additional 10,000 data points on the cylinder surface as a supplement. Fig. 24(d) provides an aerial perspective of all the points located at the moving cylinder surface, viewed from both the \(x\) and
Figure 25: A cylinder settling under gravity: PINNs-inferred velocity \(u\), \(v\) fields and pressure \(p\) field contours at two time frames are in the first columns for (a) \(t\)=4.1; (b) \(t\)=4.4. The corresponding results of FVM are in the second columns. The point-wise absolute errors for each frame are in the third columns.
\(y\) directions. The convergence of the total loss and its individual components during training, including the PDEs loss, moving boundary loss, stationary boundary loss, and velocity \((u,v)\) data loss, is presented in Fig. 24(e).
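As a concrete illustration of the trajectory handling described above, the following is a minimal sketch that fits a truncated Fourier series to sampled positions and differentiates it analytically to obtain the Dirichlet boundary velocity \(v=\mathrm{d}y/\mathrm{d}t\); the sampled trajectory `y_fvm`, the period, and the number of modes are hypothetical placeholders.

```python
import numpy as np

def fourier_design(t, T, n_modes, deriv=False):
    """Design matrix of a truncated Fourier series (or its time derivative)."""
    cols = [np.zeros_like(t) if deriv else np.ones_like(t)]
    for k in range(1, n_modes + 1):
        w = 2.0 * np.pi * k / T
        if deriv:
            cols += [-w * np.sin(w * t), w * np.cos(w * t)]
        else:
            cols += [np.cos(w * t), np.sin(w * t)]
    return np.stack(cols, axis=1)

# Hypothetical FVM samples of the cylinder position on t in [4.0, 4.4].
t_fvm = np.linspace(4.0, 4.4, 21)
y_fvm = -3.9 - 1.7 * (t_fvm - 4.0) - 0.1 * (t_fvm - 4.0) ** 2
T, n_modes = 0.4, 4
coeffs = np.linalg.lstsq(fourier_design(t_fvm, T, n_modes), y_fvm, rcond=None)[0]

# Position and velocity of the cylinder center at arbitrary query times; the
# fitted velocity is imposed as the Dirichlet condition on the cylinder surface.
t_query = np.linspace(4.0, 4.4, 200)
y_c = fourier_design(t_query, T, n_modes) @ coeffs
v_c = fourier_design(t_query, T, n_modes, deriv=True) @ coeffs
```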
Fig. 25 shows snapshots of inferred velocity \(u\), \(v\) and pressure \(p\) contours, as well as a visual comparison with the FVM results at two time frames. The proposed framework is capable of accurately reconstructing the pressure. Fig. 26 provides the point-wise relative \(L_{2}\) errors from \(t\)= 4.0 to 4.4. The network demonstrates the capability to accurately reconstruct the complete flow field.
## 6 Discussions and conclusions
We presented a novel extension that incorporates moving boundary conditions in fluid mechanics into the loss functions of physics-informed neural networks (PINNs). By interpreting the time-dependent moving boundaries in the spatial domain as stationary boundaries in the higher-dimensional spatial-temporal domain, we can effortlessly distribute fine training points at and around the interfaces for accuracy. Therefore, we are able to solve a class of unsteady flow problems with moving bodies, where the time-dependent interfaces are available via other means. We have validated the extended PINNs on a variety of classical flow problems involving moving bodies. For instance, we solved the velocity, vorticity, and pressure fields for a complete cycle of an in-line oscillating cylinder in fluid at rest using only data for the initial condition. Additionally, we predicted the flow field surrounding two, three, or four cylinders translating along a circle and demonstrated the effectiveness of this approach for multiple moving bodies. Furthermore, for flows around a flapping wing, we illustrated the flexibility of the proposed framework, first by relying on data from the initial conditions to predict the later flow field in Section 4, and subsequently by using data on the velocity \(u\) to infer the velocity \(v\) and pressure for a reconstruction of the entire flow field in Section 5. In the case of a transversely oscillating cylinder in flow, we also reproduced the phenomenon of rapid vortex shedding due to oscillation with pressure data. Lastly, for a cylinder settling under gravity, we relied on both the trajectory of the cylinder and the velocity data to infer the pressure field. Comparisons with reference solutions for these flow problems demonstrated the effectiveness and accuracy of the extended PINNs.
The extension proposed here may be insightful for flow problems solved by other machine learning frameworks and should also be applicable to physics problems with time-dependent moving boundaries beyond fluid mechanics. As PINNs exploit a fully connected neural network to approximate the solution of PDEs, we currently cannot achieve the same order of accuracy as CFD methods within the same wall-time window. For example, the in-line oscillating cylinder case took about 3 hours to solve for a full oscillating period on an Nvidia RTX 4090 graphics card. Improving the accuracy further would require an even larger number of training points and significantly more optimization epochs, which both
Figure 26: Relative \(L_{2}\) errors of PINNs for a cylinder settling under gravity: the inferred velocity \(u\), \(v\) and pressure \(p\).
result in a higher computational cost. For this very reason, it does not seem necessary to further extend PINNs to handle two-way coupling problems, which currently remain more feasible for CFD methods. For flow problems with access to the trajectories of moving interfaces, yet with limited measurements of the flow fields, the proposed method can be readily applied. A potential revolution in neural network architectures and training strategies may fundamentally change this status quo and allow PINNs to address more computational challenges in fluid mechanics.
## Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgments
Support from the grant of Innovative Research Foundation of Ship General Performance under contract number 31422121 is gratefully acknowledged. X. Bian also received the starting grant from 100 talents program of Zhejiang University.
## Appendix A Technical details of FVM simulations
The FVM simulations in this study have been executed within OpenFOAM. Morphing mesh and overset mesh techniques are employed for handling the moving boundaries. We used the pimpleFoam solver coupled with the morphing mesh technique to solve the flow problems in Sections 4.1 and 5.1, and the overPimpleDyMFoam solver coupled with the overset mesh technique to solve the flow problems in Sections 4.2, 4.3, 5.2, and 5.3. The time step is automatically adjusted to keep the Courant number below 0.4. Other technical details are given in the captions of the figures as follows.
Figure 28: Multiple cylinders translating along a circle: a uniform background mesh (\(80\times 80\)) with a finer-resolution overset mesh (each containing a total of 2,250 cells and 4,650 nodes) around each cylinder is adopted at time t=2.5. The left, middle, and right panels depict the configurations of 2, 3, and 4 cylinders, respectively.
Figure 29: Flow around a flapping wing: a uniform background mesh (\(70\times 80\)) with a finer-resolution overset mesh (containing a total of 13,564 cells and 17,720 nodes) around the wing is adopted at time t=20. Adaptive mesh refinement is adopted to improve the resolution of the mesh near the wing.
Figure 31: A cylinder settling under gravity: a uniform background mesh (\(80\times 120\)) with a finer-resolution overset mesh (containing a total of 2,250 cells and 4,650 nodes) around the cylinder is adopted.
Figure 30: A transversely oscillating cylinder in steady flow: a morphing mesh with a finer-resolution zone (diameter = 3D) around the cylinder is adopted at time t=7. It contains a total of 21,200 cells and 43,070 nodes.
2310.08315 | Fusion framework and multimodality for the Laplacian approximation of
Bayesian neural networks | This paper considers the problem of sequential fusion of predictions from
neural networks (NN) and fusion of predictions from multiple NN. This fusion
strategy increases the robustness, i.e., reduces the impact of one incorrect
classification and detection of outliers the NN has not seen during training.
This paper uses Laplacian approximation of Bayesian NNs (BNNs) to quantify the
uncertainty necessary for fusion. Here, an extension is proposed such that the
prediction of the NN can be represented by multimodal distributions. Regarding
calibration of the estimated uncertainty in the prediction, the performance is
significantly improved by having the flexibility to represent a multimodal
distribution. Two classical image classification tasks, i.e., MNIST and
CIFAR10, and image sequences from camera traps of carnivores in Swedish forests
have been used to demonstrate the fusion strategies and proposed extension to
the Laplacian approximation. | Magnus Malmström, Isaac Skog, Daniel Axehill, Fredrik Gustafsson | 2023-10-12T13:26:04Z | http://arxiv.org/abs/2310.08315v1 | # Fusion framework and multimodality for the Laplacian approximation of Bayesian neural networks
###### Abstract
This paper considers the problem of sequential fusion of predictions from neural networks (nn) and fusion of predictions from multiple nns. This fusion strategy increases the robustness, i.e., it reduces the impact of a single incorrect classification and enables detection of outliers the nn has not seen during training. This paper uses the Laplacian approximation of Bayesian nns (bnns) to quantify the uncertainty necessary for fusion. Here, an extension is proposed such that the prediction of the nn can be represented by multimodal distributions. Regarding calibration of the estimated uncertainty in the prediction, the performance is significantly improved by having the flexibility to represent a multimodal distribution. Two classical image classification tasks, i.e., mnist and cifar10, and image sequences from camera traps of carnivores in Swedish forests have been used to demonstrate the fusion strategies and the proposed extension to the Laplacian approximation.
## I Introduction
This paper studies how to fuse the predictions from multiple neural network (nn) classifiers. Two problem scenarios are considered: firstly, the problem of fusing the predictions from multiple nn classifiers that attempt to classify the same object; secondly, the problem of fusing the predictions from a single nn classifier given a sequence of inputs known to belong to the same class.
One often uses multiple algorithms and methods that work in parallel to have redundancy in a decision process. The predictions are then combined to make the decision more robust. However, without knowledge of the uncertainty in the prediction, it is unclear how the different predictions should be weighted. Hence, it is necessary to have good knowledge regarding the uncertainty in the prediction from the model.
In recent years, nns have had great success generating images from text [1], mastering board games such as GO [2], and in various control tasks [3, 4]. Despite their tremendous success, there is still limited use of nns in safety-critical applications, e.g., medical imaging and autonomous driving [5, 6, 7]. One of the main reasons nns have not yet revolutionized autonomous vehicles is the lack of knowledge of the uncertainty in their predictions. Take, for example, the infamous accident in 2018 involving one of Uber's autonomous vehicles [8]. Here, the lack of a reliable classification uncertainty for the surrounding objects from the nn was a contributing factor to the fatal outcome of the accident. To create a more robust decision, the fusion of predictions from sensors such as cameras and lidars could have been helpful in this scenario.
The problem of quantifying the uncertainty in the prediction of nn has lately been gaining interest with many suggested methods [9, 10, 11, 12, 13, 14]. See [15] for a survey over different methods. Broadly, one can separate these methods to quantify uncertainty in the prediction into two categories. The first category of methods is based on designing the structure of the nn such that it learns its own uncertainty [16, 17, 18, 19]. The second category is based on creating an ensemble of predictions, from which the uncertainty in the prediction could be computed [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30].
This paper will consider methods from the second category. The naive approach would be to independently train multiple nns on the same task [21]. This method is frequently referred to as _deep ensemble_. The major disadvantage of this approach is that it is computationally costly, since it requires training multiple nns when training even a single one takes substantial computational resources. The so-called test-time augmentation is another method to create the ensemble. Here, a single nn creates an ensemble of predictions by predicting the output for slightly modified versions of the input [26]. Test-time augmentation is typical in medical image classification, where little data is available. Here, the modification could, for example, be to rotate the image. However, this method does not consider the correlation arising from using the same nn to predict the output of all the augmented inputs.
Instead of assuming the parameters of the nn to be fixed, one can assume that they follow some distribution from which samples can be drawn to create the ensemble, i.e., a Bayesian nn (bnn) [31]. However, training a bnn is not straightforward and can be computationally expensive; hence, one often has to resort to some approximation method. For example, values of the parameters can be sampled during the later part of the training [24, 25], or the distribution can be assumed to be given by some already used regularization technique from which samples can be drawn during inference [22, 23]. Another well-used approximation is the so-called Laplacian approximation of a bnn, where the curvature of the likelihood function gives the distribution of the parameters [30, 31, 32, 33, 34]. In particular, this paper will focus on the linearized Laplacian approximation (lla) [29, 35], where the so-called _delta method_ is used to model the distribution for the prediction rather than
the distribution for the parameters [36, 37, 38].
A shortcoming of the Laplacian approximation is that it is a local approach that only quantifies the uncertainty in a neighborhood of some given parameters. Hence, it lacks the expressiveness to represent a multimodal distribution. To address this shortcoming, we propose the ensemble lla (ella). The proposed method combines the deep ensemble and the Laplacian approximation: multiple nns are trained, and a Laplacian approximation is made for each of them.
The contribution of this paper is three-fold. Firstly, the paper presents a method to fuse predictions from multiple classifications, which yields a more accurate classifier, both when using multiple classifiers and when the classifications come from a sequence of inputs known to belong to the same class. Here, fusing the information is shown to lead to more robust decisions. Secondly, the delta method for the Laplacian approximation of bnns presented in [29] is extended such that the estimated probability can represent multimodal distributions. Thirdly, the presented methods are demonstrated to be efficient in detecting out-of-distribution examples.
## II Fusion of Classifiers
We consider the problem of fusing several classifiers' predicted probability mass functions (pmfs) from a Bayesian perspective. Let \(y\in\{1,\ldots,M\}\) and \(x\in\mathbb{R}^{n_{x}}\) denote the class label and input, respectively. Further, assume that the underlying data-generating process can be described by the joint probability distribution \(p(X,Y)\). There are \(C\) training data sets generated as
\[\mathcal{T}^{(c)}=\{x_{i},y_{i}\}_{i=1}^{N_{c}},\quad(x_{i},y_{i})\stackrel{\text{iid}}{\sim}p(X,Y),\quad c=1,2,\ldots,C, \tag{1}\]
available to train the classifiers. During the training phase, the classifiers try to learn (identify) a function \(f(x|\mathcal{T}^{(c)})\) that approximates the conditional distribution \(p(Y=y|X=x)\) from the training data \(\mathcal{T}^{(c)}\). That is, after the classifiers have been trained, we have the pmf approximations
\[p(Y=y|X=x)\approx f(x|\mathcal{T}^{(c)}),\quad c=1,2,\ldots,C. \tag{2}\]
In the classification phase, \(L\) inputs (assumed to be known to belong to the same class) are generated as \(x_{l}^{\star}\sim p(X,Y=y^{\star})\) for \(l=1,2,\ldots,L\). Inserting the new inputs into the classifiers yields the pmf estimates
\[\hat{p}_{lc}(y^{\star})=f(x_{l}^{\star}|\mathcal{T}^{(c)}),\quad l =1,\ldots L,\ c=1,\ldots,C. \tag{3}\]
The fundamental question is how to fuse the estimates \(\hat{p}_{lc}(y)\), given that we know that all \(x_{l}^{\star}\) are input samples corresponding to the same, but unknown, class \(y^{\star}\). That is, we wish to estimate
\[p(y^{\star}|x_{1:L}^{\star})\triangleq p(y^{\star}|x_{1:L}^{\star},\mathcal{T }^{(1)},\ldots,\mathcal{T}^{(C)}). \tag{4}\]
As a remark, most practical use cases have either \(L=1\) (apply several classifiers to one input) or \(C=1\) (apply the same classifier to several images, e.g., from a video stream), but both cases and their generalization will be treated in parallel in the sequel.
### _Fusion using non-parametric models_
If all inputs and training data sets are independent, then the Bayes rule gives that
\[p(y|x_{1:L}^{\star})\propto\prod_{l=1}^{L}\prod_{c=1}^{C}p_{lc}(y). \tag{5}\]
Thus, one reasonable way to fuse the pmf estimates is
\[\hat{p}(y|x_{1:L}^{\star})=\frac{\prod_{l=1}^{L}\prod_{c=1}^{C} \hat{p}_{lc}(y)}{\sum_{m=1}^{M}\prod_{l=1}^{L}\prod_{c=1}^{C}\hat{p}_{lc}(m)}. \tag{6}\]
If the relative quality (uncertainty) of the estimates \(\hat{p}_{lc}(y)\) are known and represented by the scalar weights \(w_{lc}\), \(\sum_{lc}w_{lc}=1\), they may also be fused as
\[\hat{p}(y|x_{1:L}^{\star})\propto\prod_{l=1}^{L}\prod_{c=1}^{C} \hat{p}_{lc}^{w_{lc}}(y). \tag{7}\]
Note that this fusion rule is equivalent to log-linear pooling [39]. Compared to our Bayesian approach, this method is rather _ad-hoc_, and the weights \(w_{lc}\) reflect reliability in each classifier rather than a stochastic measure of the expected performance. Furthermore, the weights do not depend on the input, so the relative strength of each classifier in different regions is not exploited.
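A minimal sketch of the fusion rules (6) and (7), implemented in log-space for numerical stability, is given below; the example probabilities and weights are purely illustrative.

```python
import numpy as np

def fuse_pmfs(p, w=None):
    """Fuse a stack of PMF estimates by (weighted) log-linear pooling.

    p: array (n_classifiers, M) with rows summing to one.
    w: optional weights (n_classifiers,); w = None recovers the plain
       product rule of Eq. (6), otherwise Eq. (7) is used.
    """
    logp = np.log(np.clip(p, 1e-12, None))           # avoid log(0)
    fused = logp.sum(axis=0) if w is None else (w[:, None] * logp).sum(axis=0)
    fused = np.exp(fused - fused.max())              # stable normalization
    return fused / fused.sum()

p = np.array([[0.7, 0.2, 0.1],
              [0.5, 0.4, 0.1]])
print(fuse_pmfs(p))                         # product rule, Eq. (6)
print(fuse_pmfs(p, np.array([0.8, 0.2])))   # reliability-weighted, Eq. (7)
```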
### _Fusion using parametric models_
So far, we have assumed a black-box structure of the classifier, which only depends on the training data set \(\hat{p}_{lc}(y)=f(x_{l}^{\star}|\mathcal{T}^{(c)})\). This assumption covers simple classifiers such as the nearest neighbor. In the sequel, we will consider parametric functions, such as nns, where the classifier explicitly depends on a parameter \(\theta\), which is estimated as \(\hat{\theta}_{N}^{c}\) from the training data \(\mathcal{T}^{(c)}\). This dependence will be made explicit in the sequel, so we denote the conditional probabilities
\[\hat{p}_{lc}(y^{\star})\triangleq f(x_{l}^{\star}|\hat{\theta}_{N}^{c}), \quad\forall l,c. \tag{8}\]
The uncertainty in \(\hat{\theta}_{N}^{c}\) implies that the conditional probability \(f(x_{l}^{\star}|\hat{\theta}_{N}^{c})\) is itself a distribution. This fact is the key point of our approach: it allows the fusion of classifiers where the reliability of each one may depend on both the input and the quality of the training data, as reflected in the uncertainty of the parameter estimate. A requirement for this is to specify the posterior distribution of the parameters \(p(\theta|\mathcal{T}^{(c)})\).
Now, a distribution of distributions is quite a complex object to handle, where no analytical expressions can be expected. One remedy is to represent the distribution of the estimated parameters \(\hat{\theta}_{N}^{c}\) with Monte Carlo (mc) samples \(\theta^{c(k)}\). That is, we use the approximation
\[\theta^{c(k)} \sim p(\theta|\mathcal{T}^{(c)}),\quad k=1,2\ldots,K, \tag{9a}\] \[\hat{p}_{lc}(y^{\star}) \approx\frac{1}{K}\sum_{k=1}^{K}f(x_{l}^{\star}|\theta^{c(k)}), \quad\forall l,c,\] (9b) \[\hat{p}(y^{\star}|x_{1:L}^{\star}) \propto\prod_{l=1}^{L}\prod_{c=1}^{C}\hat{p}_{lc}(y^{\star}). \tag{9c}\]
Here, \(K\) denotes the number of samples used in the mc sampling. One major shortcoming of this formulation is that every classifier is represented by only a point estimate, which does not utilize the fact that we are dealing with a distribution of distributions. This is because the fusion is done after the estimation, using the mc samples. A remedy for this problem is to perform the fusion before the sampling stage. However, quantifying \(p(\theta|\mathcal{T}^{(c)})\) is a difficult problem, and hence so is fusing the predictions from all the classifiers, both across classifiers and across the sequence of inputs.
Another shortcoming of approximating \(p(y^{\star}|x_{1:L}^{\star})\) by mc sampling from (9) is that it comes with a high computational cost. This cost results from the approximation requiring samples from a high-dimensional Gaussian distribution and multiple evaluations of the whole nn. The high computational complexity is particularly pronounced for nn classifiers, which have a huge parameter space. In [29], a remedy is presented that reduces the sampling to an \(M\)-dimensional space. In this paper, this is done using the so-called delta method [36, 38]. The predicted values before the normalization are then sampled instead of the values of the parameters \(\theta\) of the nn. Hence, no forward passes of the nn are required, and the dimension of the distribution from which the samples are drawn is significantly smaller. Apart from decreasing the computational complexity, this also enables a more straightforward fusion strategy compared to (9). This method to propagate the uncertainty is described in the following sections.
## III Neural Network Classifiers
This section will outline some basic concepts that are required for the two presented methods in this paper to approximate the conditional pmf given a set of classifiers, i.e., to estimate \(\hat{p}(y^{\star}|x_{1:L}^{\star})\). In this way, uncertainty in the parameters from the estimation step can be preserved and transformed to, first, the output from the last layers of the nns and, second, to the class probabilities. The goal is to estimate the probability distribution of the class probabilities, including cross-correlations.
### _The Softmax Operator_
It will be assumed that the nn classifier has a softmax function in the output layer. This ensures that the model \(f(x|\theta^{(c)})\) fulfills the properties associated with a pmf, i.e., \(f_{m}(x|\theta^{(c)})\geq 0\)\(\forall m\) and \(\sum_{m}f_{m}(x|\theta^{(c)})=1\). Thus, it is assumed that the nn is structured as
\[f(x|\theta^{(c)})=\text{softmax}\left(g(x|\theta^{(c)})\right)\] (10a) where \[\text{softmax}(z)\triangleq\frac{1}{\sum_{m=1}^{M}e^{z_{m}}}\begin{bmatrix}e ^{z_{1}}\\ \vdots\\ e^{z_{M}}\end{bmatrix}. \tag{10b}\]
Here the family of functions \(g(x|\theta)\) is unconstrained, while the softmax function maps these functions onto the interval \([0,1]\).
It should be noted here that the softmax function is invariant to translations, i.e., \(\text{softmax}(z)=\text{softmax}(z+\alpha\mathbf{1})\) for any scalar \(\alpha\). Hence, a transformation is needed to make different classifiers comparable. The chosen translation is arbitrary, but a natural choice is
\[\bar{g}(x|\theta^{(c)})\triangleq g(x|\theta^{(c)})-\max_{k}g_{k}(x|\theta^{( c)})\leq 0. \tag{11}\]
This is a sound choice from a numerical point of view since the exponential functions will operate on numbers smaller than zero, and there is no risk of numerical overflow. However, our main reason for transformation is to enable fusion.
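The translation (11) also yields a numerically robust implementation of the softmax in (10b); a minimal sketch:

```python
import numpy as np

def softmax(z):
    """Softmax of Eq. (10b), evaluated after the translation of Eq. (11),
    so that the exponentials act on non-positive numbers only."""
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([1000.0, 1001.0, 999.0])  # naive exp(z) would overflow here
print(softmax(z))
```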
### _Monte Carlo Sampling of Class Probabilities_
Define \(z\in\mathbb{R}^{M}\) as the value of the classifiers before the normalization performed by the softmax function. Assume that the distribution \(p(z^{\star}|x_{1:L}^{\star})\) has been quantified, i.e., a distribution for the fusion of the classifiers before the normalization. Combining this with the mc sampling technique in (9), we get an mc approximation of the class probabilities in the following way:
\[z^{(k)} \sim p(z^{\star}|x_{1:L}^{\star}),\quad k=1,2,\ldots,K, \tag{12a}\] \[\hat{p}^{(k)} =\text{softmax}\big{(}z^{(k)}\big{)},\] (12b) \[\hat{p}(y^{\star}|x_{1:L}^{\star}) =\frac{1}{K}\sum_{k=1}^{K}\hat{p}^{(k)}. \tag{12c}\]
The uncertainty of the estimated class labels \(\hat{p}(y^{\star}|x_{1:L}^{\star})\) is here explicitly represented by the point cloud \(\hat{p}^{(k)}\). From this, various probabilities can be computed, e.g., the probability that a given class \(c\) is the most likely one (count the fraction of sample vectors \(\hat{p}^{(k)}\) whose \(c\)'th element is the largest one), or the risk that we decide \(c_{1}\) while \(c_{2}\neq c_{1}\) is true.
Hence, we have laid the ground for the proposed Bayesian approach to estimate the class probability distribution. The remaining task is to estimate \(p(z^{\star}|x_{1:L}^{\star})\), preferably without adding a substantial amount of computations in the training and classification stages.
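A minimal sketch of the sampling scheme (12) is given below, assuming a hypothetical fused Gaussian over \(z^{\star}\); it also shows how the probability that a given class is the arg max can be read off the point cloud.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fused Gaussian over the pre-softmax outputs z* (M = 3 classes).
mean = np.array([2.0, 1.5, -1.0])
cov = np.array([[0.5, 0.2, 0.0],
                [0.2, 0.8, 0.0],
                [0.0, 0.0, 0.3]])

K = 10_000
z = rng.multivariate_normal(mean, cov, size=K)           # Eq. (12a)
z = z - z.max(axis=1, keepdims=True)                     # stable softmax
p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)     # Eq. (12b)
p_hat = p.mean(axis=0)                                   # Eq. (12c)

# Fraction of sample vectors whose m'th element is the largest one.
p_argmax = np.bincount(p.argmax(axis=1), minlength=3) / K
print(p_hat, p_argmax)
```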
## IV Uncertainty quantification
Next, a method to quantify \(p_{lc}(z^{\star})\) for nns is described. After that, an extension is presented in which the correlation between classifiers is included. This extension is required to estimate \(p(z^{\star}|x_{1:L}^{\star})\).
### _Laplacian approximation and the delta method_
Representing \(p(\theta|\mathcal{T}^{(c)})\) is challenging. Hence, one often has to rely on approximations. One such approximation is the Laplacian approximation of bnns [31, 34]. It is a local approach that approximates the uncertainty in the parameters using the curvature of the likelihood function around the estimated parameters \(\hat{\theta}_{N}^{c}\). Assume some prior on the parameters \(p(\theta)=\mathcal{N}(\theta;0,P_{0})\). Then the
Laplacian approximation of the posterior \(p(\theta|\mathcal{T}^{(c)})\) yields [31]
\[p(\theta|\mathcal{T}^{(c)}) =\mathcal{N}(\theta;\hat{\theta}_{N}^{c},P_{N}^{\theta,c}), \tag{13a}\] \[P_{N}^{\theta,c} =\bigg{(}-\frac{\partial^{2}L_{N}(\theta)}{\partial\theta^{2}} \Big{|}_{\theta=\hat{\theta}_{N}^{c}}+P_{0}^{-1}\bigg{)}^{-1}. \tag{13b}\]
Here
\[L_{N}(\theta)=\sum_{n=1}^{N}\ln f_{y_{n}}(x_{n}|\theta) \tag{14}\]
denotes the cross-entropy likelihood function [40] where \(y_{n}\) is used as an index operator for the subscript \(m\) of \(f_{m}(x|\theta)\). The maximum a posteriori estimate of the parameter \(\hat{\theta}_{N}\) is given by
\[\hat{\theta}_{N}^{c}=\operatorname*{arg\,max}_{\theta}p(\theta|\mathcal{T}^{(c)})=\operatorname*{arg\,max}_{\theta}L_{N}(\theta)+\ln p(\theta). \tag{15}\]
The choice of the prior distribution \(p(\theta)=\mathcal{N}(\theta;0,P_{0})\) corresponds to the so-called \(L2\)-regularization.
Under the assumption that the true model belongs to the considered model set, the Bernstein-von Mises theorem [41] gives that the distribution of the maximum a posteriori estimate asymptotically coincides with the Laplacian approximation in (13). Furthermore, the negative second derivative of the likelihood function approximates the Fisher information matrix. In [29], it was shown that for a classification problem, the Fisher information is given by
\[\mathcal{I}^{\theta,c}\simeq\sum_{n=1}^{N}\sum_{m=1}^{M}\eta_{m,n}^{c}\,\frac{\partial g_{m}(x_{n}|\theta)}{\partial\theta}\bigg{|}_{\theta=\hat{\theta}_{N}^{c}}\left(\frac{\partial g_{m}(x_{n}|\theta)}{\partial\theta}\bigg{|}_{\theta=\hat{\theta}_{N}^{c}}\right)^{\!\top}\]
where \(W_{lc}\) is used to select the classification of the \(c\)'th classifier for the \(l\)'th input, i.e.,
\[W_{lc}=\begin{bmatrix}0&\ldots&0&I_{M}&0&\ldots&0\end{bmatrix}\in\mathbb{R}^{M, CLM}. \tag{23c}\]
Then mc samples can be used to compute a point estimate of the pmf as
\[\hat{p}_{lc}(y^{\star})=\frac{1}{K}\sum_{k=1}^{K}\text{softmax}\big{(}z_{lc}^{(k)}\big{)}. \tag{24}\]
Afterward, the fusion formula (9c) can be applied to fuse the predictions. However, this is still a point estimate, and the fusion does not take into consideration that some classifiers might have a larger covariance than others. Hence, two strategies for aggregating the extended state of classifiers \(\zeta\) that can utilize this information are presented in the following sections.
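Before turning to the fusion strategies, the following is a minimal sketch of assembling the Laplace covariance (13b) from the Fisher information sum above, given per-sample Jacobians of \(g\) and weights \(\eta\) (whose exact form follows [29]); the synthetic Jacobians, weights, and dimensions are placeholders.

```python
import numpy as np

def laplace_covariance(jacobians, eta, prior_prec):
    """Laplace covariance of Eq. (13b) from per-sample, per-class Jacobians.

    jacobians:  array (N, M, d), gradients of g_m(x_n | theta) at theta_hat.
    eta:        array (N, M), the weights eta_{m,n} in the Fisher sum.
    prior_prec: scalar, P_0^{-1} = prior_prec * I (L2 regularization).
    """
    d = jacobians.shape[-1]
    info = prior_prec * np.eye(d)
    for J_n, eta_n in zip(jacobians, eta):
        for J_m, e in zip(J_n, eta_n):
            info += e * np.outer(J_m, J_m)   # rank-one Fisher contribution
    return np.linalg.inv(info)

# Tiny synthetic example: N = 50 samples, M = 2 classes, d = 4 parameters.
rng = np.random.default_rng(0)
J = rng.normal(size=(50, 2, 4))
eta = rng.uniform(0.1, 0.3, size=(50, 2))
P_N = laplace_covariance(J, eta, prior_prec=1.0)
```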
## V Fusion of Neural Network Classifiers
It is possible to use the information from one prediction from an nn and fuse it with a prediction from other sources of information (which might also be an nn) having access to the uncertainty in the prediction for an nn. Given the extended description in (22), for a set of classifications coming from \(C\) classifiers and a sequence of \(L\) inputs known to belong to the same class \(y^{\star}\), these predictions can be fused as follows
\[P_{N}^{g} =(H^{\top}R^{-1}H)^{-1}, \tag{25a}\] \[\bar{g}_{N} =P_{N}^{g}H^{\top}R^{-1}\hat{\zeta} \tag{25b}\]
where
\[H=\begin{bmatrix}I_{M}\\ \vdots\\ I_{M}\end{bmatrix}\in\mathbb{R}^{CLM,M}. \tag{25c}\]
Due to the Gaussian approximation in (22), the fused information will also be Gaussian distributed. Hence, the distribution before the normalization can be approximated as
\[p(z^{\star}|x_{1:L}^{\star})\approx\mathcal{N}\big{(}z;\bar{g}_{N},P_{N}^{g}\big{)}, \tag{26}\]
where mc samples as in (12) can be applied to estimate the desired pmf. That is, by changing (12a) to
\[z^{(k)}\sim\mathcal{N}\big{(}z;\bar{g}_{N},P_{N}^{g}\big{)},\quad k=1,2,\ldots,K. \tag{27}\]
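A minimal sketch of the fusion (25) is given below; for readability it assumes a block-diagonal \(R\) (independent classifiers), whereas (25) in general also accounts for cross-correlations between the blocks of \(\hat{\zeta}\).

```python
import numpy as np

def fuse_gaussian_predictions(g_hats, covs):
    """Information-form fusion of per-classifier Gaussians over z, Eq. (25),
    assuming a block-diagonal R for simplicity.

    g_hats: list of CL mean vectors, each of shape (M,).
    covs:   list of CL covariance matrices, each of shape (M, M).
    """
    M = g_hats[0].shape[0]
    info, vec = np.zeros((M, M)), np.zeros(M)
    for g, P in zip(g_hats, covs):
        P_inv = np.linalg.inv(P)
        info += P_inv            # builds H^T R^{-1} H block by block
        vec += P_inv @ g         # builds H^T R^{-1} zeta_hat
    P_fused = np.linalg.inv(info)
    return P_fused @ vec, P_fused

# Two hypothetical classifiers over M = 2 classes; the second is less certain
# and therefore contributes less to the fused mean.
g1, P1 = np.array([1.0, -1.0]), np.diag([0.2, 0.2])
g2, P2 = np.array([0.5, -0.2]), np.diag([1.0, 1.0])
g_bar, P_bar = fuse_gaussian_predictions([g1, g2], [P1, P2])
```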
In this paper, two special cases of (22) are investigated:
1. Multiple nns are independently trained and classify the same input, i.e., \(L=1\).
2. An nn is used to classify a sequence of inputs known to belong to the same class, i.e., \(C=1\).
These are the edge cases; their combination yields the general case. The first case is motivated by safety-critical systems, where often more than one sensor measures the environment. Multiple sensors measuring the same quantity are often used to add redundancy to the system.
For the second case, if one classification is uncertain, the additional information that all the inputs in the sequence belong to the same class results in a more robust classification. In a standard object detection pipeline, this setup can be used on cropped images of the tracked object: after the object detection algorithm finds a bounding box locating an object, it crops the image and sends it to a classifier that classifies the cropped image. The sequential fusion can then be applied to the sequence of crops. The second fusion strategy can also be used for test-time augmentation, where a single nn predicts the output for multiple augmented versions of the same input [26].
## VI Ensemble linearized Laplacian approximation
Fusing the predictions using (25) to estimate \(p(z^{\star}|x_{1:L}^{\star})\) summarizes the information from multiple unimodal distributions into a new distribution with a single mode. The lla provides a local description of the uncertainty in the prediction, from which efficient mc sampling can estimate the pmf, marginalizing the uncertainty in the parameters. However, the Laplacian approximation is a local approach that lacks the flexibility to represent multimodal distributions. Hence, if the loss in (14) is multimodal, the approximation might be inaccurate. On the other hand, ensemble methods that train multiple nns, e.g., the deep ensemble [21], can represent multimodal distributions, but creating a new sample is very costly, since every new sample requires training a new nn. Hence, we suggest combining the two methods: training multiple nns and creating an lla for each trained nn. This proposed extension of the lla is referred to as ella. Instead of drawing samples from the distribution described by (26), one can draw samples from the full description of the distribution in (23). Afterward, (12) can be applied to estimate the desired pmf. That is, by changing (12a) to
\[z^{(k)}=\sum_{l=1}^{L}\sum_{c=1}^{C}w_{lc}z_{lc}^{(k)}. \tag{28}\]
That is, \(p(z^{\star}|x_{1:L}^{\star})\) is approximated by \(CL\) Gaussian distributions, where the parameters \(w_{lc}\) are used to weigh the impact of the different modes. For example, \(w_{lc}\) could be chosen proportional to the covariance \(P_{N}^{g,lc}\) of the classifier. This results in a method that can represent a multimodal distribution and is efficient to sample from. The resulting method is similar to a Gaussian mixture model, but with some key differences [44]. Instead of independently sampling from the different modes, the sampling is done over the extended state \(\zeta\), and the weighting is done after the sampling. This enables us to include cross-correlation between the modes.
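A minimal sketch of (28) is given below: the extended state is sampled jointly, so cross-correlations between the modes are retained, and the weighting is applied after sampling; the means, covariance, and weights are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def ella_pmf(zeta_mean, zeta_cov, weights, M, K=10_000):
    """Estimate the PMF by jointly sampling the extended state zeta and
    forming the weighted combination of Eq. (28) before the softmax."""
    zeta = rng.multivariate_normal(zeta_mean, zeta_cov, size=K)
    zeta = zeta.reshape(K, -1, M)                    # (K, CL, M) mode blocks
    z = (weights[None, :, None] * zeta).sum(axis=1)  # Eq. (28)
    z = z - z.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.mean(axis=0)

# Two equally weighted modes (w_lc = 1/LC) over M = 2 classes.
mean = np.array([2.0, -1.0, -1.5, 1.0])
cov = 0.3 * np.eye(4)
print(ella_pmf(mean, cov, weights=np.array([0.5, 0.5]), M=2))
```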
Similar ideas for combining different methods to quantify the uncertainty have been presented in, e.g., [45], where the deep ensemble and mc dropout are combined. Ideally, each realization should represent a different mode of the distribution. To make this more likely, repulsive training can be used when training the nns [46], where it is enforced during the training process that the different nns converge to different parameters.
The complete strategy proposed in this paper to estimate \(\hat{p}(y^{\star}|x_{1:L}^{\star})\) is summarized in Fig. 1. Here, for the compression of \(\zeta\), either the fusion strategy in Section V or the ella proposed in this section could be used.
## VII Validation
Suppose we have access to some validation data set \(\mathcal{V}=\{y_{n}^{\circ},x_{n}^{\circ}\}_{n=1}^{N_{\circ}}\). The question is how to validate that the estimated pmf resembles the true one. The inherent difficulty is that the validation data, like the training data, consists of inputs with corresponding class labels, not measurements of a pmf. This is one of the reasons why a unified qualitative evaluation metric for the uncertainty in the prediction is lacking [15]. However, some of the most common metrics used are classification accuracy, log-likelihood (ll), prediction entropy, Brier score, expected calibration error (ece), area under the receiver operating characteristic curve (auroc), and the area under the precision-recall curve (aupr) when detecting out-of-distribution samples.
Negative ll, prediction entropy, and Brier score are all proper scoring rules, i.e., they emphasize a careful and honest assessment of the uncertainty [47]. However, none of them measure the calibration, i.e., the reliability of the estimated pmf. Hence, the most important metric when comparing methods on in-distribution data is the ece. For out-of-distribution data, the auroc and aupr are useful metrics, as well as the difference in predicted entropy between in- and out-of-distribution data.
### _Accuracy and calibration_
Calculate the \(J\)-bin histogram defined as
\[B_{j}=\left\{n:\frac{j-1}{J}\leq\max_{m}\hat{p}(m|x_{n}^{\circ})<\frac{j}{J}\right\} \tag{29}\]
from the validation data. For a perfect classifier \(B_{j}=\emptyset\) for \(j<J\). For a classifier that is just guessing, all sets are of equal size, i.e., \(|B_{j}|=|B_{i}|\ \forall i,j\). Note that \(\max_{m}\hat{p}(m|x_{n}^{\circ})\geq 1/M\), so the first bins will be empty if \(J>M\).
The accuracy of the classifier is calculated by comparing the size of each set with the actual classification performance within the set. That is,
\[\text{acc}(B_{j})=\frac{1}{|B_{j}|}\sum_{n\in B_{j}}1\big{(}\hat{y}_{n}^{\circ }=y_{n}^{\circ}\big{)}\] (30a) where \[\hat{y}_{n}^{\circ}=\operatorname*{arg\,max}_{m}\hat{p}(m|x_{n}^{\circ}) \tag{30b}\]
Instead of certainty, from hereon, the standard and the equivalent notion of confidence will be used [43, 48]. The mean confidence in a set is denoted \(\text{conf}(B_{j})\) and is defined as
\[\text{conf}(B_{j})=\frac{1}{|B_{j}|}\sum_{n\in B_{j}}\max_{m}\hat{p}(m|x_{n}^{\circ}). \tag{31}\]
This measures how much the classifier trusts its estimated class labels. In contrast to the accuracy, it does not depend on the annotated class labels \(y_{n}\). Comparing accuracy to confidence gives the ce, defined as
\[\text{ece}=\sum_{j=1}^{J}\frac{|B_{j}|}{N_{\circ}}\,|\text{acc}(B_{j})-\text{conf}(B_{j})|. \tag{32}\]
A small value indicates that the confidence is a good measure of the actual performance.
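A minimal sketch of (29)-(32) is given below, with the bins weighted by their relative size as in [43].

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Expected calibration error: bin by confidence as in Eq. (29), then
    average |acc - conf| over bins weighted by their relative size.

    probs: (N, M) estimated PMFs; labels: (N,) integer class labels.
    """
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for j in range(n_bins):
        mask = bins == j
        if mask.any():
            err += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return err
```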
### _Log likelihood, predicted entropy, and Brier score_
The total negative ll is given by (14), while the predicted entropy for one input is given by
\[E=-\sum_{m=1}^{M}\hat{p}(m|x_{n}^{\circ})\ln\bigg{(}\hat{p}(m|x_{n}^{\circ}) \bigg{)}. \tag{33}\]
Here \(\hat{p}(m|x_{n}^{\circ})\) denotes some generic estimate of the pmf. The difference between ll and entropy is thus that calculating the predicted entropy does not require the class label \(y^{\circ}\). That means the predicted entropy can also be computed for out-of-distribution examples. For out-of-distribution detection, the difference in entropy between the in-distribution data and out-of-distribution data is of interest, i.e., \(\Delta_{E}=E_{in}-E_{out}\).
The Brier score [47, 48] corresponds to the least squares fit
\[\frac{1}{N_{\circ}}\sum_{n=1}^{N_{\circ}}\sum_{m=1}^{M}\big{(}\delta_{m,y_{n} ^{\circ}}-\hat{p}(m|x_{n}^{\circ})\big{)}^{2}, \tag{34}\]
where \(\delta_{i,j}\) denotes the Kronecker delta function.
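The three proper scoring rules can be computed in a few lines; a minimal sketch:

```python
import numpy as np

def scores(probs, labels):
    """Negative log-likelihood (14), predicted entropy (33), and the mean
    Brier score (34) for PMF estimates probs (N, M) and labels (N,)."""
    N, M = probs.shape
    p = np.clip(probs, 1e-12, 1.0)
    nll = -np.log(p[np.arange(N), labels]).sum()
    entropy = -(p * np.log(p)).sum(axis=1)          # one value per input
    brier = ((np.eye(M)[labels] - probs) ** 2).sum(axis=1).mean()
    return nll, entropy, brier
```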
Fig. 1: A block diagram of the proposed strategy to estimate \(\hat{p}(y^{\star}|x_{1:L}^{\star})\). Note that two different methods are presented in this paper to compress the aggregated classification to quantify \(p(z^{\star}|x_{1:L}^{\star})\): the fusion strategy proposed in Section V or the ella proposed in Section VI.
Fig. 2: The images visualize the distribution for the fusion of predictions from multiple llas and the distribution generated by an ella. Here, the distributions are visualized using images from cifar10.
### _Receiver operating characteristic, precision, and recall_
The different methods for quantifying the uncertainty in the prediction can be evaluated on how well out-of-distribution samples can be detected: how well can the prediction for an input of the kind the model has been trained on (in-distribution) be distinguished from the prediction for an entirely different input (out-of-distribution)? Three widely used measures of how well out-of-distribution samples are detected are the probability of detection \(P_{D}\) (also known as sensitivity or recall \(P_{R}\)), the probability of false alarm \(P_{FA}\), and the precision \(P_{P}\) [49, 50]. Assume that under the \(\mathcal{H}_{0}\) hypothesis the samples are out-of-distribution samples, and that under the \(\mathcal{H}_{1}\) hypothesis the samples are in-distribution samples. Then, the three quantities are defined as
\[P_{D}=P_{R} =Pr\{\text{detect }\mathcal{H}_{1}|\mathcal{H}_{1}\}, \tag{35a}\] \[P_{FA} =Pr\{\text{detect }\mathcal{H}_{1}|\mathcal{H}_{0}\},\] (35b) \[P_{P} =Pr\{\mathcal{H}_{1}|\text{detect }\mathcal{H}_{1}\}. \tag{35c}\]
The three measures in (35) change depending on the chosen threshold. The receiver operating characteristic (roc) curve is created by plotting \(P_{D}\) against \(P_{FA}\) for different threshold values. Similarly, the precision-recall curve is created by plotting the precision against the recall for different threshold values. To evaluate how well different methods separate out-of-distribution samples from in-distribution samples, one often refers to the auroc and the aupr. Both are between zero and one, and equal one for a perfect detector.
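Since the auroc equals the Mann-Whitney rank statistic, it can be computed directly from ranks rather than by sweeping thresholds; a minimal sketch (assuming no ties among the scores):

```python
import numpy as np

def auroc(score_h1, score_h0):
    """AUROC for separating H1 (in-distribution) from H0 samples. Since lower
    entropy should indicate in-distribution, pass score = -entropy."""
    scores = np.concatenate([score_h1, score_h0])
    ranks = scores.argsort().argsort() + 1.0     # 1-based ranks, no ties
    n1, n0 = len(score_h1), len(score_h0)
    r1 = ranks[:n1].sum()                        # rank sum of H1 samples
    return (r1 - n1 * (n1 + 1) / 2) / (n1 * n0)
```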
### _Temperature scaling_
The output of the softmax function can be used as a point estimate of the pmf. However, this does not consider the uncertainty in the parameters, and this approach is known not to accurately reflect the uncertainty in the prediction, i.e., the uncertainty needs to be calibrated [43, 48]. It is often overconfident and underestimates the uncertainty, hence assigning too small a probability mass to unlikely classes.
A common approach used to calibrate the estimated pmf is the so-called temperature scaling [43]. In temperature scaling, \(g(x|\theta)\) is scaled by a scalar quantity \(T\) before the normalization by the softmax operator. With a slight abuse of notation, introduce
\[\hat{p}_{lc}^{T}(y^{\star})\triangleq f(x_{l}^{\star}|\hat{\theta}_{N}^{c},T )=\text{softmax}\left(g(x^{\star}|\hat{\theta}_{N}^{c})/T\right). \tag{36}\]
The scaling parameter \(T\) is found after training the model, using validation data. The subscript \(T\) is used for the estimate of the pmf when temperature scaling is used.
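A minimal sketch of fitting \(T\) in (36) by minimizing the validation nll is given below; since the problem is one-dimensional, a simple grid search suffices for illustration.

```python
import numpy as np

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the temperature T of Eq. (36) that minimizes the NLL on the
    validation set (logits: (N, M) values of g; labels: (N,))."""
    best_T, best_nll = 1.0, np.inf
    for T in grid:
        z = logits / T
        z = z - z.max(axis=1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        nll = -logp[np.arange(len(labels)), labels].sum()
        if nll < best_nll:
            best_T, best_nll = T, nll
    return best_T
```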
## VIII Experiment study
The experiments in this section can be divided into two parts. The first part investigates the fusion of multiple classifiers that classify the output for the same input. Here, an ensemble of nns is trained on the two classical data sets mnist [51] and cifar10 [52]. The nn trained on mnist is evaluated on Fashion mnist (fmnist) [53] to investigate the detection of out-of-distribution examples. This part investigates the fusion strategy described in Section V and the ella described in Section VI. In the second part, we simulate the setup where a sequence of images is known to belong to the
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & \multicolumn{3}{c}{mnist (in) / fmnist (out)} \\ \cline{2-4} Method & \(\Delta_{E}\) (\(10^{3}\)) \(\downarrow\) & auroc \(\uparrow\) & aupr \(\uparrow\) \\ \hline Temp. sc. [43] & -6.36 & 0.82 & 0.70 \\ Deep ensemble [21] & -13.12 & 0.94 & 0.91 \\ lla [29] & -11.23 & 0.91 & 0.91 \\ Fusion of lla, Section V & -13.83 & **0.95** & **0.96** \\ ella \(w_{lc}=1/LC\), Section VI & -5.42 & 0.87 & 0.83 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Performance measure for out-of-distribution detection. The arrows indicate whether a high or low value is preferable. See Section VII for descriptions of the measures.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{mnist} \\ \cline{2-5} & acc. \(\uparrow\) & ll (\(10^{3}\)) \(\uparrow\) & Brier score \(\downarrow\) & ece \(\downarrow\) \\ \hline Temp. sc. [43] & 92\(\%\) & 7.83 & 0.123 & 2.85 \\ Deep ensemble [21] & **95\(\%\)** & 7.94 & 0.080 & 2.29 \\ lla [29] & 92\(\%\) & 7.81 & 0.125 & 1.33 \\ Fusion of lla, Section V & 94\(\%\) & 7.89 & 0.103 & 1.82 \\ ella \(w_{lc}=1/LC\), Section VI & 93\(\%\) & **8.37** & **0.071** & **1.06** \\ \hline \hline \end{tabular}
\end{table} TABLE I: The computed performance measures for the two datasets. The arrows indicate whether a high or low value is preferable. For a description of measures, see Section VII.
Fig. 3: Distribution for the predicted entropy (33) for in-distribution (mnist) and out-of-distribution (fmnist) samples. Here, five different methods to create the estimated conditional pmf are investigated.
same class, by giving a classifier a sequence of images from the cifar10 data set and data from a camera trap that captures images of Swedish carnivores.
For the three different classification tasks, i.e., mnist, cifar10, and camera trap images of Swedish carnivores, different nn structures were used: a five-layer fully connected nn for the mnist task, a LeNet5-inspired [54] structure for the cifar10 task, and an AlexNet-inspired [55] structure for the Swedish carnivores. The adam optimizer [56] with the standard settings was used to estimate the parameters \(\theta\) for all the tasks. The training was done for three epochs for the mnist task, ten epochs for cifar10, and six epochs for the Swedish carnivores.
This paper investigates five different methods to quantify the uncertainty, namely
1. Temperature scaling [43], see Section VII-D.
2. Deep ensemble [21].
3. A single lla [29].
4. Fusion of the predictions of an ensemble of llas, see Section V.
5. An ella where every mode is equally weighted, i.e., (28) with \(w_{lc}=1/LC\).
For the deep ensemble, the size of the ensemble was ten for the mnist task and five for the cifar10 task. This was also the number of nns trained when fusing the predictions from multiple llas, and the number of modes in the ella.
### _Fusion of multiple measurements_
Having access to predictions from multiple independently trained nns, each associated with an lla, it is possible to fuse their predictions to gain a better understanding of the predicted output for a given input. Table I shows the results of the experiments on mnist and cifar10. For the cifar10 data set, all the measures improve for the fusion of llas compared to only using temperature scaling, the deep ensemble, or a single lla. In particular, the ece decreases. However, for the mnist data set, the ece does not decrease for the fusion of llas compared to only using a single lla. This may be a consequence of some information about the distribution being lost in the fusion stage. An example of the fusion of the predictions can be seen in the second column of Fig. 2. Here, the mean and standard deviation of the different modes are shown in red, and the fused prediction in blue. The fourth column shows the estimated pmf for the fused prediction.
Fig. 3 shows the distribution of the predicted entropy for images from the distribution the nn is trained on (in-distribution) and images the nn is not trained on (out-of-distribution). The in-distribution data is mnist, i.e., images of digits, and the out-of-distribution data is fmnist, i.e., images of clothing items. Compared to the other methods, fusing the predictions from multiple llas, seen in Fig. 3(d), provides a more evident visual difference between the in- and out-of-distribution images. This claim is also supported by Table II, where the auroc and aupr are better for the fused predictions. The difference in the predicted entropy between in- and out-of-distribution samples is also significant for the fused predictions; see Table II.
### _Sequential fusion_
In Fig. 4, a sequence of six images of aircraft from cifar10 is shown. Here, all the images in the sequence are classified by the same nn; hence, the predictions are correlated according to (25). Assuming that all the images in the sequence belong to the same class, this information can be used to improve the prediction accuracy for images later in the sequence. For example, even though the prediction for the fourth image is uncertain, thanks to the information from the previous images, the fused prediction indicates that the object in the image is likely an aircraft.
Fig. 5 illustrates an experiment fusing the predictions for multiple images from a camera trap. Here, a lynx is passing the camera, and even though the last image in the sequence is classified as a wolf, by using the information from the previous
Fig. 4: The image illustrates the fusion of a sequence of images from the cifar10 dataset. Here, the same nn is used to predict the class for all the images in the sequence, and (25) is used to fuse the predictions. Hence, the cross-correlation between the predictions is taken into consideration.
images, the fused prediction still indicates that it is a lynx in the image.
### _Multimodal model_
It is possible to obtain a multimodal representation of the predictive distribution with access to an ella. To visualize the multimodality, one can draw independent samples from the different modes created by the different nns in the ensemble. Fig. 2 visualizes this for two images from the cifar10 dataset. The third column shows a representation of the distribution before the normalization. Here, independent mc samples are taken from the different modes to visualize the distribution.
In Table I, it can be seen that the inclusion of the multimodality improves the calibration of the predicted uncertainty, i.e., leads to a lower ece, while still maintaining high accuracy and high ll. The use of the ella to detect out-of-distribution samples can be seen in Fig. 3(e) and Table II. Here, it is shown that multimodality improves the detection compared to temperature scaling, but the improvements are larger when fusing the predictions from multiple llas.
## IX Summary and conclusion
This paper suggests two methods to estimate the conditional pmf given a sequence of classifications when the uncertainty in the prediction from an nn can be estimated. The first method is the fusion of the predictions, where both the information from the previous predictions and the cross-correlation between the predictions are considered. The method used to include uncertainty in the prediction from an nn in the fusion strategy is called the lla. The second method extends the lla to represent multimodality in the predicted distribution. This extension is referred to as ella, and it combines a deep ensemble and the lla.
The fusion strategies and the proposed extension are evaluated on classification tasks with images from the classical data sets mnist and cifar10 and images from camera traps collected in the Swedish forests. It is also investigated how the methods can be used to detect out-of-distribution samples. Here, mnist is used as in-distribution and fmnist as out-of-distribution samples. Sequential fusion of the predictions is shown to increase the robustness of the prediction, both when all predictions in the sequence are uncertain and when only one prediction is confident but incorrect. The results in terms of ece show that the multimodal representation using the extended ella method improves the performance compared to previous work. In most cases, the fusion of multiple predictions improved the performance. Furthermore, when detecting out-of-distribution samples, the fusion of predictions from multiple nns leads to improved performance, in terms of increased auroc and aupr and a larger difference in predicted entropy. Here, the performance using ella could still be improved. This indicates that some modes might be more critical for detecting out-of-distribution samples. Hence, methods that give the modes different weights are a possible direction for future research.
|
2301.13060 | Zero-One Laws of Graph Neural Networks | Graph neural networks (GNNs) are the de facto standard deep learning
architectures for machine learning on graphs. This has led to a large body of
work analyzing the capabilities and limitations of these models, particularly
pertaining to their representation and extrapolation capacity. We offer a novel
theoretical perspective on the representation and extrapolation capacity of
GNNs, by answering the question: how do GNNs behave as the number of graph
nodes become very large? Under mild assumptions, we show that when we draw
graphs of increasing size from the Erd\H{o}s-R\'enyi model, the probability
that such graphs are mapped to a particular output by a class of GNN
classifiers tends to either zero or to one. This class includes the popular
graph convolutional network architecture. The result establishes 'zero-one
laws' for these GNNs, and analogously to other convergence laws, entails
theoretical limitations on their capacity. We empirically verify our results,
observing that the theoretical asymptotic limits are evident already on
relatively small graphs. | Sam Adam-Day, Theodor Mihai Iliant, İsmail İlkan Ceylan | 2023-01-30T17:02:23Z | http://arxiv.org/abs/2301.13060v5 | # Zero-One Laws of Graph Neural Networks
###### Abstract
Graph neural networks (GNNs) are the de facto standard deep learning architectures for machine learning on graphs. This has led to a large body of work analyzing the capabilities and limitations of these models, particularly pertaining to their representation and extrapolation capacity. We offer a novel theoretical perspective on the representation and extrapolation capacity of GNNs, by answering the question: how do GNNs behave as the number of graph nodes becomes very large? Under mild assumptions, we show that when we draw graphs of increasing size from the Erdős-Rényi model, the probability that such graphs are mapped to a particular output by a class of GNN classifiers tends to either _zero_ or to _one_. This class includes the popular graph convolutional network architecture. The result establishes 'zero-one laws' for these GNNs, and analogously to other convergence laws, entails theoretical limitations on their capacity. We empirically verify our results, observing that the theoretical asymptotic limits are evident already on relatively small graphs.
## 1 Introduction
Graphs are common structures for representing relational data in a wide range of domains, including physical (Shlomi et al., 2021), chemical (Duvenaud et al., 2015; Kearnes et al., 2016), and biological (Zitnik et al., 2018; Fout et al., 2017) systems, which sparked interest in machine learning over graphs. Graph neural networks (GNNs) (Scarselli et al., 2009; Gori et al., 2005) have become prominent models for graph machine learning for a wide range of tasks, owing to their capacity to explicitly encode desirable relational inductive biases (Battaglia et al., 2018), and to their adaptability to different graphs of different sizes.
One important virtue of these architectures is that every GNN model can be applied to _arbitrarily large_ graphs, since, in principle, the model parameters can be independent of the graph size. This raises the following question: how do GNNs behave as the number of nodes becomes very large? Inspired by the remarkable 'zero-one law' for first-order properties of graphs (Glebskii et al., 1969; Fagin, 1976), we investigate the behavior of GNNs when we draw graphs of increasing size from the Erdos-Renyi distribution on graphs, together with random node features.
The motivation of this work is twofold. First, our study establishes a setting in which to probe the _extrapolation capacity_ of GNNs: to what extent can models trained on smaller graphs capture properties which generalize well to larger graphs? Second, our analysis transfers readily to yield _expressiveness results_ for GNNs with random node features which are known to be very powerful on bounded graph domains (Sato et al., 2021; Abboud et al., 2021). What class of functions over graphs can GNNs with random node features _uniformly_ approximate?
The main result of this paper is given for binary graph classification: under mild assumptions on the model, several GNN architectures including graph convolutional networks (Kipf and Welling, 2017) satisfy a zero-one law over Erdos-Renyi graphs with random node features. This means that as we draw larger and larger graphs, the probability that the GNN maps such graphs to a particular output either tends to _one_ or tends to _zero_. This surprising result establishes important limits on the extrapolation capacity of GNNs as well as the expressivity of GNNs with random node features.
Zero-one laws give a simple and direct argument for proving that properties for which these convergence laws fail cannot be expressed in the target language. A well-known example is the fact that no sentence of first-order logic can distinguish between finite structures of _even_ and _odd_ cardinality. Our study can thus be seen as an upper bound on the expressive power of the class of models under consideration. We also prove an interesting lower bound on the expressive power of GNNs with random node features: any property which satisfies a certain zero-one law can be universally approximated by a GNN using sum aggregation with random node features. This complementary result allows us to better understand the expressive power of such models.
Our main contributions can be summarized as follows.
* We obtain a zero-one law for the prominent model of graph convolutional networks using mild assumptions. We also prove a zero-one law for models with mean and sum aggregation, subject to certain mild conditions.
* Our result implies a corresponding zero-one law for GNNs with random node features, which establishes an upper bound on their expressivity. We complement this result by additionally showing that these models can universally approximate any property which satisfies a certain zero-one law.
* We conduct experiments to validate our theoretical findings. Since zero-one laws are of an asymptotic nature, we may need to consider very large graphs to observe clear empirical evidence for the phenomenon. Surprisingly, however, GNNs already exhibit clear evidence of a zero-one law even on small graphs. Importantly, this is true for networks with very few layers (even single-layer), which is reassuring, as it precludes confounding factors, such as the effect of over-smoothing due to an increased number of layers (Li et al., 2018).
All proof details are deferred to the appendix of this paper.
## 2 Preliminaries
**Random graphs and matrices.** The focus of our study is on classes of random graphs with random features, for which we introduce some notation. We write \(\mathbf{x}\in\mathbb{R}^{d}\) to represent a vector, and \(\mathbf{X}\in\mathbb{R}^{d\times n}\) to represent a matrix. Analogously, we write \(\mathbf{x}\) to denote a _random_ vector, and \(\mathbf{X}\) to denote a _random_ matrix, whose entries are (real) random variables.
We write \(\mathbb{G}(n,r)\) to denote a class of simple, undirected Erdos-Renyi (ER) graphs and \(\mathbb{D}(d)\) to denote a distribution of feature vectors over \(\mathbb{R}^{d}\). We define an Erdos-Renyi graph equipped with random node features as a pair \(\mathcal{G}=(\mathbf{A},\mathbf{X})\), where \(\mathbf{A}\sim\mathbb{G}(n,r)\) is the random graph adjacency matrix of a simple graph \(G=(V,E)\) of order \(n\) and \(\mathbf{X}\in\mathbb{R}^{d\times n}\) is a corresponding random feature matrix which contains, for each node \(v\in V\), an initial random node feature \(\mathbf{x}_{v}\sim\mathbb{D}(d)\) as the corresponding columns of \(\mathbf{X}\).1
Footnote 1: We define a \(d\times|V|\) dimensional (random) feature matrix as opposed to the more common \(|V|\times d\). This is for ease of presentation, since we aim to work on the (random) column vectors of such matrices (without the need of transposing each time).
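As an illustrative aside (not part of the original paper), the sampling procedure above can be sketched in a few lines of Python. The feature distribution \(U[0,1]^{d}\) is an assumption chosen to match the experiments in Section 6; any \(\mathbb{D}(d)\) could be substituted.

```python
import numpy as np

def sample_er_graph_with_features(n, r, d, rng=None):
    """Sample a pair (A, X): an adjacency matrix A ~ G(n, r) of a simple,
    undirected Erdos-Renyi graph, and a d x n feature matrix X whose
    columns x_v are drawn i.i.d. (here from U[0, 1]^d)."""
    rng = np.random.default_rng() if rng is None else rng
    # Draw independent edge indicators for the strict upper triangle,
    # then symmetrize; the diagonal stays zero (no self-loops).
    upper = np.triu(rng.random((n, n)) < r, k=1)
    A = (upper | upper.T).astype(float)
    # One feature column per node, matching the d x n convention above.
    X = rng.random((d, n))
    return A, X
```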
**Message passing neural networks.** Graph neural networks have become prominent in machine learning over graph-structured data. The focus of this work is on _message-passing neural networks (MPNNs)_ (Gilmer et al., 2017) which encapsulate the vast majority of GNNs. The fundamental idea in MPNNs is to update the initial (random) state vector \(\mathbf{x}_{v}^{(0)}=\mathbf{x}_{v}\) of each node \(v\) for \(T\in\mathbb{N}\) iterations, based on its own state and the state of its neighbors \(\mathcal{N}(v)\) as:
\[\mathbf{x}_{v}^{(t+1)}=\phi\Big{(}\mathbf{x}_{v}^{(t)},\psi\big{(}\mathbf{x}_ {v}^{(t)},\{\!\{\!\mathbf{x}_{u}^{(t)}|\;u\in\mathcal{N}(v)\}\!\}\big{)}\Big{)},\]
where \(\{\!\{\!\cdot\!\}\!\}\) denotes a multiset, and \(\phi\) and \(\psi\) are differentiable _combination_, and _aggregation_ functions, respectively. We allow each layer's node representations to have different dimensions, and denote by \(d(t)\) the dimension of each \(\mathbf{x}_{v}^{(t)}\). We also write \(d(0)=d\). The choice for the combine (\(\phi\)) and aggregate (\(\psi\)) functions yields different models.
In other words, when applied on a random graph \(\mathbf{A}\) with random features \(\mathbf{x}_{v}^{(0)}\) for every node \(v\in V\), an MPNN generates a sequence of random node state vectors \(\mathbf{x}_{v}^{(1)},\dots,\mathbf{x}_{v}^{(T)}\). The final node representations can then be used for node-level predictions. For graph-level predictions, the final node embeddings are _pooled_ to form a graph embedding vector to predict properties of entire graphs. The pooling often takes the form of simple averaging, summing or element-wise maximum. For Boolean node (resp., graph) classification, we further assume a classifier \(\mathfrak{C}:\mathbb{R}^{d(T)}\to\mathbb{B}\) which acts on the final node representations (resp., on the final graph representation).
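To make the update rule concrete, a minimal (and deliberately unvectorized) sketch of one message-passing step is given below; `combine` and `aggregate` play the roles of \(\phi\) and \(\psi\). The function names are illustrative, not from the paper.

```python
import numpy as np

def mpnn_layer(A, X, combine, aggregate):
    """One message-passing step. A: n x n adjacency matrix; X: d x n node
    states. Returns the matrix of updated node states."""
    n = A.shape[0]
    out = []
    for v in range(n):
        neighbors = np.nonzero(A[v])[0]
        msgs = [X[:, u] for u in neighbors]       # multiset of neighbor states
        m_v = aggregate(X[:, v], msgs)            # psi
        out.append(combine(X[:, v], m_v))         # phi
    return np.stack(out, axis=1)

# e.g. sum aggregation, ignoring the node's own state inside psi:
# aggregate = lambda xv, ms: np.sum(ms, axis=0) if ms else np.zeros_like(xv)
```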
It is important to note that there are more general message passing paradigms (Battaglia et al., 2018). In particular, _MPNNs with global readout_, which additionally aggregate over all node features at every layer, are known to be more expressive (Barcelo et al., 2020). Some specific model instances considered in this paper include a global readout component.
**GCNs.** The primary GNN architecture we consider is the _graph convolutional network_(Kipf & Welling, 2017), denoted by GCN. These are instances of MPNNs with self-loops, which aggregate over the extended neighborhood of a node \(\mathcal{N}^{+}(v):=\mathcal{N}(v)\cup\{v\}\). GCNs iteratively update the representations as \(\mathbf{x}_{v}^{(t)}=\sigma\left(\mathbf{y}_{v}^{(t)}\right)\), where the preactivations are given by:
\[\mathbf{y}_{v}^{(t)}=\mathbf{V}_{\text{n}}^{(t)}\sum_{u\in\mathcal{N}^{+}(v)} \frac{1}{\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}}\mathbf{x}_{u}^{(t-1)}+\mathbf{b} ^{(t)}\]
We apply the linear transformation \(\mathbf{V}_{\text{n}}^{(t)}\in\mathbb{R}^{d(t)\times d(t-1)}\) to a normalized sum of the activations for the previous layers of the neighbors of the node under consideration, together with its own activation. Adding a bias term \(\mathbf{b}^{(t)}\) yields the preactivation \(\mathbf{y}_{v}^{(t)}\), to which we apply the non-linearity \(\sigma\).
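A sketch of this update in NumPy, assuming the \(d\times n\) feature convention above; the degree guard for isolated nodes is our addition, since the formula leaves \(|\mathcal{N}(v)|=0\) undefined.

```python
import numpy as np

def gcn_layer(A, X, V, b, sigma=np.tanh):
    """GCN update: y_v = V @ sum_{u in N+(v)} x_u / sqrt(|N(u)||N(v)|) + b,
    followed by x_v = sigma(y_v). A: n x n adjacency, X: d x n states."""
    deg = np.maximum(A.sum(axis=1), 1.0)      # |N(v)|, guarded for isolated nodes
    A_hat = A + np.eye(A.shape[0])            # aggregate over N+(v) = N(v) plus self
    norm = 1.0 / np.sqrt(np.outer(deg, deg))  # 1 / sqrt(|N(u)| |N(v)|)
    Y = V @ (X @ (A_hat * norm).T) + b[:, None]
    return sigma(Y)
```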
**MeanGNNs.** We also consider _GNN models with self-loops, mean aggregation and global readout_, denoted MeanGNN\({}^{+}\). This is a modification of GCN architecture in which we normalize the previous preactivations by taking the mean. We also include a global readout term. Formally,
these are instances of MPNNs with global readout, given as \(\mathbf{x}_{v}^{(t)}=\sigma\left(\mathbf{y}_{v}^{(t)}\right)\), where:
\[\mathbf{y}_{v}^{(t)}=\left(\begin{array}{c}\frac{1}{|\mathcal{N}^{+}(v)|}\mathbf{V}_{\mathrm{n}}^{(t)}\sum_{u\in\mathcal{N}^{+}(v)}\mathbf{x}_{u}^{(t-1)}\\ +\frac{1}{n}\mathbf{V}_{\mathrm{r}}^{(t)}\sum_{u\in V}\mathbf{x}_{u}^{(t-1)}\\ +\mathbf{b}^{(t)}\end{array}\right).\]
We sometimes refer to this model _without the global readout_ term, which is simply denoted as MeanGNN.
**SumGNNs.** Finally, we define the _sum GNN models with global readout_, denoted SumGNN\({}^{+}\), which are instances of MPNNs with global readout, given as \(\mathbf{x}_{v}^{(t)}=\sigma\left(\mathbf{y}_{v}^{(t)}\right)\), where:
\[\mathbf{y}_{v}^{(t)}=\left(\begin{array}{c}\boldsymbol{V}_{\mathrm{s}}^{(t) }\mathbf{x}_{\mathrm{v}}^{(t-1)}\\ +\boldsymbol{V}_{\mathrm{n}}^{(t)}\sum_{u\in\mathcal{N}(v)}\mathbf{x}_{u}^{(t- 1)}\\ +\boldsymbol{V}_{\mathrm{r}}^{(t)}\sum_{u\in V}\mathbf{x}_{u}^{(t-1)}\\ +\boldsymbol{b}^{(t)}\end{array}\right).\]
This time, we separate out the contribution from the preactivation of the previous activation for the node itself. This yields three linear transformations \(\boldsymbol{V}_{\mathrm{s}}^{(t)},\boldsymbol{V}_{\mathrm{n}}^{(t)}, \boldsymbol{V}_{\mathrm{r}}^{(t)}\in\mathbb{R}^{d(t)\times d(t-1)}\). The model without the global readout term is denoted SumGNN.
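A sketch of the SumGNN\({}^{+}\) update under the same conventions; MeanGNN\({}^{+}\) is obtained analogously by dividing the neighborhood term by \(|\mathcal{N}^{+}(v)|\) and the readout term by \(n\).

```python
import numpy as np

def sum_gnn_plus_layer(A, X, Vs, Vn, Vr, b, sigma):
    """SumGNN+ update with three weight matrices: Vs for the node itself,
    Vn for the neighborhood sum, Vr for the global readout over all nodes."""
    self_term = Vs @ X
    neigh_term = Vn @ (X @ A.T)                   # column v: sum over u in N(v)
    readout = Vr @ X.sum(axis=1, keepdims=True)   # identical for every node
    Y = self_term + neigh_term + readout + b[:, None]
    return sigma(Y)
```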
## 3 Related Work
Graph neural networks are flexible models which can be applied to graphs of any size following training. This makes an asymptotic analysis in the size of the input graphs very appealing, since such a study could lead to a better understanding of the extrapolation capabilities of MPNNs which is widely studied in the literature (Yehudai et al., 2020; Xu et al., 2021). More broadly, zero-one laws have a rich history in first-order logic and random graph theory (Glebskii et al., 1969; Fagin, 1976; Libkin, 2004; Shelah and Spencer, 1988; Bollobas, 2001). Being the first of its kind in the graph machine learning literature, our study establishes fundamental links between graph representation learning, probability theory and logic, while also presenting an interesting way to analyze the expressive power of GNN models.
It is well-known that the expressive power of MPNNs is upper bounded by the _1-dimensional Weisfeiler Leman graph isomorphism test (1-WL)_ (Xu et al., 2019; Morris et al., 2019). This implies that graphs cannot be distinguished by MPNNs if \(1\)-WL does not distinguish them. It has been shown that some architectures can match the power of 1-WL, including SumGNN\({}^{+}\) models (Morris et al., 2019). Barcelo et al. (2020) further give a logical characterization for a class of MPNNs, showing that SumGNN\({}^{+}\) models can capture any function which can be expressed in the logic C\({}^{2}\), which is an extension of the two-variable fragment of first-order logic with counting quantifiers. Several works study the expressive power of these models under the assumption that there are _unique node identifiers_ (Loukas, 2020), or define _higher-order_ GNN models (Morris et al., 2019; Maron et al., 2019; Keriven and Peyre, 2019) to obtain more expressive architectures.
Our work has direct implications for GNNs using random node features (Sato et al., 2021; Abboud et al., 2021), which are shown to be universal in the bounded graph domain. Specifically, we derive a zero-one law for GNNs using random node features which puts an upper bound on the expressive power of such models in a uniform sense: what class of functions can be captured by a _single_ GNN with random node features? Abboud et al. (2021) prove a universality result for these models, but it is not uniform, since the construction depends on, and varies with, the choice of the graph sizes. There is no known upper bound for the expressive power of GNNs with random node features in the uniform setting. Our results imply such an upper bound assuming ER graphs as inputs, and we complement this result by giving a lower bound: GNNs with random node features can uniformly approximate any property which satisfies a certain zero-one law.
Other limitations of MPNNs include _over-smoothing_(Li et al., 2018; Oono and Suzuki, 2020) and _over-squashing_(Alon and Yahav, 2021) which are related to information propagation, and are linked to using more message passing layers. The problem of over-smoothing has also been studied from an asymptotic perspective (Li et al., 2018; Oono and Suzuki, 2020), where the idea is to see how the node features evolve as we increase the number of layers in the network. Our study can be seen orthogonal to this work: we conduct an asymptotic analysis in the size of the graphs rather than in the number of layers.
## 4 Zero-One Laws of Graph Neural Networks
### Problem statement
We first define graph invariants following Grohe (2021).
**Definition 4.1**.: A _graph invariant_\(\xi\) is a function over graphs, such that for any pair of graphs \(G_{1}\), \(G_{2}\) and any isomorphism \(f\) from \(G_{1}\) to \(G_{2}\), it holds that \(\xi(G_{1})=\xi(G_{2})\). Graph invariants for graphs with node features are defined analogously.
Consider any GNN model \(\mathcal{M}\) used for binary graph classification. It is immediate from the definition that \(\mathcal{M}\) is invariant under isomorphisms of the graphs on which it acts. Hence \(\mathcal{M}\), considered as a function from graphs to \(\mathbb{B}\), is a graph invariant. In this paper, we study the asymptotic behavior of \(\mathcal{M}\) as the number of nodes increases.
One remarkable and influential result from finite model theory is the 'zero-one law' for first-order logic. A (Boolean) graph invariant \(\xi\) satisfies a _zero-one law_ if when we draw graphs \(G\) from the ER distribution \(\mathbb{G}(n,r)\), as \(n\) tends to infinity the probability that \(\xi(G)=1\) either tends to \(0\) or tends to \(1\). The result, due to Glebskii et al. (1969) and Fagin (1976), states that any graph invariant which can be expressed by a first-order formula satisfies a zero-one law.
Inspired by this asymptotic analysis of first-order properties, we ask whether GNNs satisfy a zero-one law. As the input of a GNN is a graph with node features, we need to reformulate the statement of the law to incorporate these features.
**Definition 4.2**.: Let \(\mathcal{G}=(\mathbf{A},\mathbf{X})\) be a graph with node features, where \(\mathbf{A}\sim\mathbb{G}(n,r)\) is a graph adjacency matrix and, independently, \(\mathbf{X}\) is a matrix of node embeddings, where \(\mathbf{x}_{v}\sim\mathbb{D}(d)\) for every node \(v\). A graph invariant \(\xi\) for graphs with node features satisfies a _zero-one law_ with respect to \(\mathbb{G}(n,r)\) and \(\mathbb{D}(d)\) if as \(n\) tends to infinity, the probability that \(\xi(\mathcal{G})=1\) either tends to \(0\) or tends to \(1\).
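A zero-one law of this kind can be probed empirically by Monte Carlo estimation, as in the following sketch (reusing the sampling helper from Section 2; `xi` is any graph classifier mapping \((\mathbf{A},\mathbf{X})\) to \(\{0,1\}\) and is an assumed interface, not from the paper):

```python
import numpy as np

def estimate_output_probability(xi, n, r, d, num_samples=200, rng=None):
    """Estimate P(xi(G) = 1) for G ~ G(n, r) with i.i.d. node features.
    Under a zero-one law, this estimate drifts to 0 or 1 as n grows."""
    rng = np.random.default_rng() if rng is None else rng
    hits = 0
    for _ in range(num_samples):
        A, X = sample_er_graph_with_features(n, r, d, rng)
        hits += int(xi(A, X))
    return hits / num_samples
```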
Studying the asymptotic behavior of GNNs helps to shed light on their capabilities and limitations. A zero-one law establishes a limit on the ability of such models to extrapolate to larger graphs: any GNN fitted to a finite set of datapoints will tend towards outputting a constant value on larger and larger graphs drawn from the distribution described above. A zero-one law in this setting also transfers to a corresponding zero-one law for GNNs with random features. This establishes an upper bound on the uniform expressive power of such models.
### Graph convolutional networks obey a zero-one law
Our main result in this subsection is that (Boolean) GCN classifiers obey a zero-one law. To achieve our result, we place some mild conditions on the model and initial node embeddings.
First, our study covers sub-Gaussian random vectors, which is a very wide class.
**Definition 4.3**.: A random vector \(\mathbf{x}\in\mathbb{R}^{d}\) is _sub-Gaussian_ if there is \(C>0\) such that for every unit vector \(\mathbf{y}\in\mathbb{R}^{d}\) the random variable \(\mathbf{x}\cdot\mathbf{y}\) satisfies the sub-Gaussian property: for every \(t>0\):
\[\mathbb{P}(|\mathbf{x}\cdot\mathbf{y}|\geq t)\leq 2\exp\left(-\frac{t^{2}}{C^{2}}\right)\]
Sub-Gaussian random vectors encompass almost all practical machine learning setups, including all bounded random vectors, and every multivariate normal random vector.
Second, we require that the non-linearity \(\sigma\) be Lipschitz continuous.
**Definition 4.4**.: A function \(f\colon\mathbb{R}\to\mathbb{R}\) is _Lipschitz continuous_ if there is \(C>0\) such that for any \(x,y\in\mathbb{R}\):
\[|f(x)-f(y)|\leq C|x-y|\]
Lipschitz continuity is a mild assumption, and in practice all non-linearities used are Lipschitz continuous. For example, each of \(\mathrm{ReLU}\), clipped \(\mathrm{ReLU}\), \(\mathrm{sigmoid}\), linearized \(\mathrm{sigmoid}\) and \(\tanh\) are Lipschitz continuous.
Third, we place a condition on the GCN weights with respect to the classifier function \(\mathfrak{C}\colon\mathbb{R}^{d(T)}\to\mathbb{B}\).
**Definition 4.5**.: Consider a distribution \(\mathbb{D}(d)\) with mean \(\mathbf{\mu}\). Let \(\mathcal{M}\) be a GCN used for binary graph classification. Define the sequence \(\mathbf{\mu}_{0},\dots,\mathbf{\mu}_{T}\) of vectors inductively by \(\mathbf{\mu}_{0}:=\mathbf{\mu}\) and \(\mathbf{\mu}_{t}:=\sigma(\mathbf{V}_{n}^{(t)}\mathbf{\mu}_{t-1}+\mathbf{b}^{(t)})\). The classifier \(\mathfrak{C}:\mathbb{R}^{d(T)}\to\mathbb{B}\) is _non-splitting_ for \(\mathcal{M}\) if the vector \(\mathbf{\mu}_{T}\) does not lie on a decision boundary for \(\mathfrak{C}\).
For all reasonable choices of \(\mathfrak{C}\), the decision boundary has dimension lower than \(d(T)\), and is therefore a set of measure zero. This means that in practice essentially all classifiers are non-splitting.
**Theorem 4.6**.: _Let \(\mathcal{M}\) be a GCN used for binary graph classification and take \(r\in[0,1]\). Then, \(\mathcal{M}\) satisfies a zero-one law with respect to \(\mathbb{G}(n,r)\) and \(\mathbb{D}(d)\) assuming the following conditions hold:_
1. \(\mathbb{D}(d)\) _is sub-Gaussian._
2. \(\sigma\) _is Lipschitz continuous._
3. _The graph-level representation uses average pooling._
4. _The classifier is non-splitting._
The proof hinges on a probabilistic analysis of the preactivations in each layer. We use a sub-Gaussian concentration inequality to show that the deviation of each of the first-layer preactivations \(\mathbf{y}_{v}^{(1)}\) from its expected value becomes smaller and smaller as the number of nodes \(n\) tends to infinity. Using this and the fact that \(\sigma\) is Lipschitz continuous, we then show that each activation \(\mathbf{x}_{v}^{(1)}\) tends towards a fixed value. Iterating this analysis through all the layers of the network yields the following key lemma, which is the heart of the argument.
**Lemma 4.7**.: _Let \(\mathcal{M}\) and \(\mathbb{D}(d)\) satisfy the conditions in Theorem 4.6. Then, for every layer \(t\), there is \(\mathbf{z}_{t}\in\mathbb{R}^{d(t)}\) such that when sampling a graph with node features from \(\mathbb{G}(n,r)\) and \(\mathbb{D}(d)\), for every \(i\in\{1,\dots,d(t)\}\) and for every \(\epsilon>0\) we have that:_
\[\mathbb{P}\left(\forall v\colon\left|\left[\mathbf{x}_{v}^{(t)}-\mathbf{z}_{t} \right]_{i}\right|<\epsilon\right)\to 1\quad\text{as }n\to\infty\]
With the lemma established, the proof of Theorem 4.6 follows straightforwardly from the last two assumptions. Since the final node embeddings \(\mathbf{x}_{v}^{(T)}\) tend to a fixed value \(\mathbf{z}_{T}\), the average-pooled graph-level representations also tend to this. Since we assume that the classifier is non-splitting, this value cannot lie on a decision boundary, and thus the final output is asymptotically stable at \(\mathfrak{C}(\mathbf{z}_{T})\).
### Graph neural networks with mean aggregation
Let us turn now to establishing a zero-one law for GNNs using mean aggregation. We place the same conditions as with Theorem 4.6. This time the notion of 'non-splitting' is as follows.
**Definition 4.8**.: Consider a distribution \(\mathbb{D}(d)\) with mean \(\mathbf{\mu}\). Let \(\mathcal{M}\) be a MeanGNN\({}^{+}\) used for binary graph classification. Define the sequence \(\mathbf{\mu}_{0},\ldots,\mathbf{\mu}_{T}\) of vectors inductively by \(\mathbf{\mu}_{0}:=\mathbf{\mu}\) and \(\mathbf{\mu}_{t}:=\sigma((\mathbf{V}_{\mathrm{n}}^{(t)}+\mathbf{V}_{\mathrm{r}}^{(t)})\bm {\mu}_{t-1}+\mathbf{b}^{(t)})\). The classifier \(\mathfrak{C}:\mathbb{R}^{d(T)}\to\mathbb{B}\) is _non-splitting_ for \(\mathcal{M}\) if the vector \(\mathbf{\mu}_{T}\) does not lie on a decision boundary for \(\mathfrak{C}\).
Again, in practice essentially all classifiers are non-splitting.
**Theorem 4.9**.: _Let \(\mathcal{M}\) be a MeanGNN\({}^{+}\) used for binary graph classification and take \(r\in[0,1]\). Then, \(\mathcal{M}\) satisfies a zero-one law with respect to \(\mathbb{G}(n,r)\) and \(\mathbb{D}(d)\) assuming the following conditions hold:_
1. \(\mathbb{D}(d)\) _is sub-Gaussian._
2. \(\sigma\) _is Lipschitz continuous._
3. _The graph-level representation uses average pooling._
4. _The classifier is non-splitting._
Note that the result immediately applies to MeanGNN, since it is a special case of MeanGNN\({}^{+}\) where \(\mathbf{V}_{\mathrm{r}}=0\).
The overall structure of the proof is the same as for GCN. In particular, we prove the following key lemma stating that all node embeddings tend to fixed values.
**Lemma 4.10**.: _Let \(\mathcal{M}\) and \(\mathbb{D}(d)\) satisfy the conditions in Theorem 4.9. Then, for every layer \(t\), there is \(\mathbf{z}_{t}\in\mathbb{R}^{d(t)}\) such that when sampling a graph with node features from \(\mathbb{G}(n,r)\) and \(\mathbb{D}(d)\), for every \(i\in\{1,\ldots,d(t)\}\) and for every \(\epsilon>0\) we have that:_
\[\mathbb{P}\left(\forall v\colon\left|\left[\mathbf{x}_{v}^{(t)}-\mathbf{z}_{t} \right]_{i}\right|<\epsilon\right)\to 1\quad\text{as }n\to\infty\]
With the key lemma in place the proof of Theorem 4.9 follows as before.
### Graph neural networks with sum aggregation
The third variant of GNNs we consider are those with sum aggregation. The proof in this case works rather differently, and we place different conditions on the model.
**Definition 4.11**.: A function \(\sigma\colon\mathbb{R}\to\mathbb{R}\) is _eventually constant in both directions_ if there are \(x_{-\infty},x_{\infty}\in\mathbb{R}\) such that \(\sigma(y)\) is constant for \(y<x_{-\infty}\) and \(\sigma(y)\) is constant for \(y>x_{\infty}\). We write \(\sigma_{-\infty}\) to denote the minimum and \(\sigma_{\infty}\) to denote the maximum value of an eventually constant function \(\sigma\).
Both the linearized \(\mathrm{sigmoid}\) and clipped \(\mathrm{ReLU}\) are eventually constant in both directions. Moreover, when working with finite precision any function with vanishing gradient in both directions can be regarded as eventually constant in both directions.
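For concreteness, one common parameterization of these two activations is sketched below; the exact breakpoints are not fixed by the text and are an assumption here.

```python
import numpy as np

def clipped_relu(x, cap=1.0):
    # Constant at 0 for x < 0 and at `cap` for x > cap.
    return np.clip(x, 0.0, cap)

def linearized_sigmoid(x):
    # Piecewise-linear sigmoid surrogate: constant at 0 below -1 and at 1 above 1.
    return np.clip(0.5 * (x + 1.0), 0.0, 1.0)
```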
We also place the following condition on the weights of the model with respect to the mean of \(\mathbb{D}(d)\) and the edge-probability \(r\).
**Definition 4.12**.: Let \(\mathcal{M}\) be any SumGNN\({}^{+}\) for binary graph classification with a non-linearity \(\sigma\) which is eventually constant in both directions. Let \(\mathbb{D}(d)\) be any distribution with mean \(\mathbf{\mu}\), and let \(r\in[0,1]\). Then, the model \(\mathcal{M}\) is _synchronously saturating_ for \(\mathbb{G}(n,r)\) and \(\mathbb{D}(d)\) if the following conditions hold:
1. For each \(1\leq i\leq d(1)\): \[\left[(r\mathbf{V}_{\mathrm{n}}^{(1)}+\mathbf{V}_{\mathrm{r}}^{(1)})\mathbf{\mu}\right]_{i}\neq 0\]
2. For every layer \(1<t\leq T\), for each \(1\leq i\leq d(t)\) and for each \(\mathbf{z}\in\{\sigma_{-\infty},\sigma_{\infty}\}^{d(t-1)}\) we have that:2
\[\left[(r\mathbf{V}_{\mathrm{n}}^{(t)}+\mathbf{V}_{\mathrm{r}}^{(t)})\mathbf{z}\right]_{i}\neq 0\]
Footnote 2: We use \(\{\sigma_{-\infty},\sigma_{\infty}\}^{d(t-1)}\) to denote the set of \(d(t-1)\)-dimensional vectors whose components are all either \(\sigma_{-\infty}\) or \(\sigma_{\infty}\).
To get a flavor of how wide the class of synchronously saturating models is, the following result shows that under certain assumptions on \(\mathbb{D}(d)\) and \(\sigma\), almost all \(\textsc{SumGNN}^{+}\) models are synchronously saturating. It is a straightforward consequence of the fact that to be non-synchronously-saturating is to satisfy one of a finite set of linear equalities.
**Proposition 4.13**.: _Let \(\mathbb{D}(d)\) be any distribution with mean \(\mathbf{\mu}\), such that \(\mathbf{\mu}_{i}\neq 0\) for each \(i\). Take \(r\in[0,1]\). Fix a non-linearity \(\sigma\) which is eventually constant in both directions, satisfying \(\sigma_{-\infty}\neq 0\) and \(\sigma_{\infty}\neq 0\). The following statements hold:_
1. _The set of values for each_ \(\mathbf{V}_{\mathrm{n}}^{(t)}\) _and_ \(\mathbf{V}_{\mathrm{r}}^{(t)}\) _which yield a_ SumGNN\({}^{+}\) _model_ \(\mathcal{M}\) _which is not synchronously saturating for_ \(\mathbb{G}(n,r)\) _and_ \(\mathbb{D}(d)\) _has empty interior in the vector space of all weights._
2. _If the weights of a_ \(\textsc{SumGNN}^{+}\) _model_ \(\mathcal{M}\) _are drawn from a distribution whose support has non-empty interior, such as a multivariate normal or uniform distribution, then_ \(\mathcal{M}\) _is synchronously saturating for_ \(\mathbb{G}(n,r)\) _and_ \(\mathbb{D}(d)\) _with probability_ \(1\)_._
With these definitions in place we can now lay out the main result:
**Theorem 4.14**.: _Let \(\mathcal{M}\) be a \(\textsc{SumGNN}^{+}\) used for binary graph classification and take \(r\in[0,1]\). Then, \(\mathcal{M}\) satisfies a zero-one law with respect to \(\mathbb{G}(n,r)\) and \(\mathbb{D}(d)\) assuming the following conditions hold:_
1. \(\mathbb{D}(d)\) _is sub-Gaussian._
2. \(\sigma\) _is eventually constant in both directions._
3. _The graph-level representation uses either average or element-wise maximum pooling._
4. \(\mathcal{M}\) _is synchronously saturating for_ \(\mathbb{G}(n,r)\) _and_ \(\mathbb{D}(d)\)_._
Note that as with Theorem 4.9, this result also applies to SumGNN, since it is a special case of \(\textsc{SumGNN}^{+}\).
The proof works in a different way to the GCN and \(\textsc{MeanGNN}^{+}\) cases, but still rests on a probabilistic analysis of the preactivations in each layer. Assuming that \(\mathcal{M}\) is synchronously saturating for \(\mathbb{G}(n,r)\) and \(\mathbb{D}(d)\), we can show that the expected absolute value of each preactivation tends to infinity as the number of nodes increases, and that moreover the probability that it lies below any fixed value tends to \(0\) exponentially. Therefore, the probability that all node embeddings after the first layer are the same and have components which are all \(\sigma_{-\infty}\) or \(\sigma_{\infty}\) tends to \(1\). We then extend this analysis to further layers, using the fact that \(\mathcal{M}\) is synchronously saturating, which yields inductively that all node embeddings are the same with probability tending to \(1\). We therefore prove the following key lemma.
**Lemma 4.15**.: _Let \(\mathcal{M}\), \(\mathbb{D}(d)\) and \(r\) be as in Theorem 4.14. Let \(\sigma_{-\infty}\) and \(\sigma_{\infty}\) be the extremal values taken by the non-linearity. Then, for every layer \(t\), there is \(\mathbf{z}_{t}\in\{\sigma_{-\infty},\sigma_{\infty}\}^{d(t)}\) such that when we sample graphs with node features from \(\mathbb{G}(n,r)\) and \(\mathbb{D}(d)\) the probability that \(\mathbf{x}_{v}^{(t)}=\mathbf{z}_{t}\) for every node \(v\) tends to \(1\) as \(n\) tends to infinity._
The final classification output must therefore be the same asymptotically, since its input consists of node embeddings which always take the same value.
## 5 GNNs with Random Features
Up to this point we have been considering the graph plus node features as the (random) input to the GNN model. In this section, we make a change in perspective and regard the initial node features as part of the model, so that its input consists solely of the graph without features. We focus in this section on \(\textsc{SumGNN}^{+}\). Adding random initial features to GNNs is known to increase their power (Sato et al., 2021).
Note that Theorem 4.14 immediately yields a zero-one law for these models. This places restrictions on what can be expressed by \(\textsc{SumGNN}^{+}\) models with random features subject to the conditions of Theorem 4.14. For example, it is not possible to express that the number of graph nodes is even, since the property of being even does not satisfy a zero-one law with respect to any \(r\in[0,1]\).
It is natural to wonder how tight these restrictions are: what precisely is the class of functions which can be approximated by these models? Let us first formalize the notion of approximation.
**Definition 5.1**.: Let \(f\) be a Boolean function on graphs, and let \(\zeta\) be a random function on graphs. Take \(\delta>0\). Then \(\zeta\)_uniformly \(\delta\)-approximates_\(f\) if:
\[\forall n\in\mathbb{N}\colon\,\mathbb{P}(\zeta(G)=f(G)\mid|G|=n)\geq 1-\delta\]
when we sample \(G\sim\mathbb{G}(n,1/2)\).
The reason for sampling graphs from \(\mathbb{G}(n,1/2)\) is that under this distribution all graphs on \(n\) nodes are equally likely. The requirement is therefore that, for every \(n\in\mathbb{N}\), the proportion of \(n\)-node graphs on which \(\zeta(G)=f(G)\) is at least \(1-\delta\).
Building on results due to Abboud et al. (2021), we show a partial converse to Theorem 4.14: if a graph invariant satisfies a zero-one law for \(\mathbb{G}(n,1/2)\) then it can be universally approximated by \(\textsc{SumGNN}^{+}\) with random features.
**Theorem 5.2**.: _Let \(\xi\) be any graph invariant which satisfies a zero-one law with respect to \(\mathbb{G}(n,1/2)\). Then, for every \(\delta>0\) there is a \(\textsc{SumGNN}^{+}\) with random node features \(\mathcal{M}\) which uniformly \(\delta\)-approximates \(\xi\). Moreover, \(\mathcal{M}\) uses the linearized \(\mathrm{sigmoid}\) as the non-linearity and the distribution of the initial node embeddings consists of \(d\) iid \(U[0,1]\) random variables._
The basis of the proof is a result due to Abboud et al. (2021) which states that \(\textsc{SumGNN}^{+}\) models with random node features can approximate any graph invariant _on graphs of bounded size_. When the graph invariant satisfies a zero-one law, we can use the global readout to count the number of nodes. Below a certain threshold, we use the techniques of Abboud et al. (2021) to approximate the invariant, and above the threshold we follow its asymptotic behavior. We emphasise that the combination of these techniques yields a model which provides an approximation that is uniform across all graph sizes.
## 6 Experimental Evaluation
We empirically verify our theoretical findings over ER graphs with random node features. We want to answer the following questions for each model under consideration:
1. Do we empirically observe a zero-one law?
2. What is the rate of convergence like empirically?
3. What is the impact of the number of layers?
**Experimental setup.** We report experiments for GCNs, MeanGNNs, and SumGNNs. The following setup is carefully designed to eliminate confounding factors:
* We consider \(10\) GNN models of the same architecture each with _random weights_, where each weight is sampled independently from \(U(-1,1)\). The non-linearity is eventually constant in both directions: identity between \([-50,50]\), and truncated to \(-50\) if the input is smaller than \(-50\), and \(50\) if the input is greater than \(50\). We apply mean pooling to yield a final representation \(\mathbf{z}_{G}\in\mathbb{R}^{d}\) of the input graph.
* For every model, we apply a final classifier based on \(\sigma(f(\cdot))\), where \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is a 2-layer MLP with _random weights_ and with \(\tanh\) activation, which outputs a real value, and \(\sigma\) is the sigmoid function. Graphs are classified as \(1\) if the output of the sigmoid is greater than \(0.5\), and \(0\) otherwise.
* The input graphs are drawn from \(\mathbb{G}(n,1/2)\) with corresponding node features independently drawn from \(U(0,1)\).
* We conduct these experiments with three choices of layers: \(10\) models with \(T=1\) layer, \(10\) models with \(T=2\) layers, and \(10\) models with \(T=3\) layers.
The goal of these experiments is to understand the behavior of the respective GNN graph classifiers with mean-pooling, as we draw larger and larger ER graphs. Specifically, each model classifies graphs of varying sizes, and we are interested in knowing _how the proportion of the graphs which are classified as \(1\) evolves, as we increase the graph sizes_.
We independently sample 10 models to ensure this is not a model-specific behavior, aiming to observe the same phenomenon across the models. If there is a zero-one law, then for each model, we should only see two types of curves: either tending to \(0\) or tending to \(1\), as graph sizes increase. Whether it will tend to \(0\) or \(1\) depends on the final classifier: since each of these are independent MLPs with random weights the specific outcome is essentially random.
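The protocol can be sketched end-to-end as follows, reusing the `sample_er_graph_with_features` and `gcn_layer` helpers above. The weight ranges, mean pooling, truncation at \(\pm 50\), and the random 2-layer tanh MLP follow the setup described in the text, while the exact wiring is our illustrative reconstruction, not the original experiment code.

```python
import numpy as np

def truncated_identity(x, c=50.0):
    # Eventually constant non-linearity from the setup: identity on [-50, 50].
    return np.clip(x, -c, c)

def fraction_classified_one(sizes, r=0.5, d=128, layers=1, samples=32, seed=0):
    """For one random-weight GCN classifier, return the fraction of sampled
    ER graphs of each size that are classified as 1."""
    rng = np.random.default_rng(seed)
    Vs = [rng.uniform(-1, 1, (d, d)) for _ in range(layers)]
    bs = [rng.uniform(-1, 1, d) for _ in range(layers)]
    W1 = rng.uniform(-1, 1, (d, d))   # random 2-layer MLP classifier head
    W2 = rng.uniform(-1, 1, d)
    result = {}
    for n in sizes:
        ones = 0
        for _ in range(samples):
            A, X = sample_er_graph_with_features(n, r, d, rng)
            for V, b in zip(Vs, bs):
                X = gcn_layer(A, X, V, b, sigma=truncated_identity)
            z = X.mean(axis=1)                 # mean pooling over nodes
            logit = W2 @ np.tanh(W1 @ z)       # 2-layer MLP with tanh
            ones += int(1.0 / (1.0 + np.exp(-logit)) > 0.5)
        result[n] = ones / samples
    return result
```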
**Remark.** We consider models with up to \(3\) layers to ensure that the node features do not become alike because of the orthogonal issue of over-smoothing (Li et al., 2018), which surfaces with an increasing number of layers. Using models with random weights is a neutral setup, and random GNNs are widely used in the literature as baseline models (Thompson et al., 2022), as they nonetheless define valid graph convolutions and tend to perform reasonably well.
### Empirical results
We report all results in Figure 1 for all models considered and discuss them below. Each plot in this figure depicts the curves corresponding to the behavior of independent models with random weights. Figure 2 in the appendix contains experiments using SumGNN\({}^{+}\) models.
**GCNs.** For this experiment, we use an embedding dimensionality of \(128\) for each GCN model and draw graphs of sizes \(\{10,50,100,500,1000,2000,5000\}\), where we take \(32\) samples of each size. The key insight of Theorem 4.6 is that the final mean-pooled embedding vector \(\mathbf{z}_{G}\) tends to a constant vector as we draw larger graphs. Applying an MLP followed by a sigmoid function will therefore map \(\mathbf{z}_{G}\) to either \(0\) or \(1\), showing a zero-one law. It is evident from Figure 1 (top row) that all curves tend to either \(0\) or \(1\), confirming our expectation regarding the outcome of these experiments for GCNs. Moreover, this holds regardless of the number of layers considered. Since the convergence occurs quickly, already around graphs of size \(1000\), we did not experiment with larger graph sizes. In line with the proof of our theorem, the speed of convergence depends on the embedding dimensionality, hence our choice of \(128\). GCN models converge even faster with embedding dimension \(64\), as expected.
**MeanGNN.** Given that the key insight in Theorem 4.9 is essentially similar to that of Theorem 4.6, we follow the exact same configuration for these models as for GCNs. The proof structure is the same in both cases: we show that the preactivations and activations become closer and closer to some fixed values as the number of nodes increases. Moreover, comparing the summations in the definitions of GCN and MeanGNN, on a typical ER graph drawn from \(\mathbb{G}(n,1/2)\) we would expect each corresponding summand to have a similar value, since \(\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}\) should be close to \(|\mathcal{N}^{+}(v)|\).
Figure 1 (mid row) illustrates the results for MeanGNNs, and the trends are reassuringly similar to those of GCNs: all models converge quickly to either \(0\) or \(1\) with all choices of layers. Interestingly, the plots for GCN and MeanGNN are almost identical. We used the same seed when drawing each of the model weights, and the number of parameters is the same between the two. Hence the GCN models were parameterized with the same values as the MeanGNN models. The fact that each pair of models performs near-identically confirms the expectation that the two architectures work in similar ways on ER graphs.
**SumGNN.** Theorem 4.14 shows that, as the number of nodes grows, the embedding vector \(\mathbf{z}_{v}\) of each _node_ \(v\) will converge to a constant vector with high probability. Hence, when we do mean-pooling at the end, we expect to get the same vector for different graphs of the same size. The mechanism by which a zero-one law is arrived at is quite different compared with the GCN and MeanGNN case. In particular, in order for the embedding vectors to begin to converge, there must be sufficiently many nodes so that the preactivations surpass the thresholds of the non-linearity. We therefore expect slower convergence. To accommodate this, for this experiment, we use a smaller embedding dimensionality of \(64\) for each SumGNN model and draw graphs of sizes \(\{10,50,100,500,1000,2000,5000,10000,15000,20000,50000,100000,150000,200000,500000\}\), where we take \(32\) samples of each size. Figure 1 shows the results for SumGNNs, which clearly suggest slower convergence, as predicted.
In fact, two models (out of 10) do not converge in the two-layer setup, and three models (out of 10) do not converge in the three-layer setup. While it is highly likely that the randomly sampled model weights yield models which are synchronously saturating (see the second item in Proposition 4.13), analysis of the proof of Theorem 4.14 suggests that the closer the model weights are to being non-synchronously-saturating, the slower they will be to converge to a zero-one law. We hypothesize that the slowest converging models are those whose weights are closest to the critical values.
## 7 Discussions and Outlook
Our study has uncovered an intimate connection between graph neural networks and zero-one laws. The relationship has consequences for the expressive power and extrapolation capacity of these models. In both the empirical evaluation and the technical proofs of our results we discovered two distinct mechanisms by which GNNs arrive at zero-one laws. One of these was at play in the case of GCN and MeanGNN models, while the other in the case of SumGNN models.
Figure 1: Each plot shows the \(\%\) of graphs of a certain size which are classified as \(1\) by the respective models: GCNs (top row), MeanGNNs (middle), and SumGNNs (bottom row). Each curve (color-coded) shows the behavior of one model as we draw increasingly larger graphs. The phenomenon is observed for 1-layer models (left column), 2-layer models (mid column), and 3-layer models (right column). GCNs and MeanGNNs behave very similarly, with all models converging quickly to 0 or to 1. SumGNNs show much slower convergence, but all models converge in the case of 1-layer models. When considering 2 layers (resp., 3 layers), there are 2 (resp., 3) models out of 10 which have not yet converged with graphs of size \(500000\).
We leave as future work the examination of the asymptotic behavior of other GNN architectures, such as GAT (Velickovic et al., 2018) and GIN (Xu et al., 2019). We would expect GAT models to follow the pattern of GCN and MeanGNN, since GAT is a self-loop MPNN in which the attention weights are normalized. In contrast, the architecture of GIN models suggests that they may behave similarly to SumGNN.
|
2301.09038 | Graph Convolutional Neural Networks for Optimal Power Flow Locational
Marginal Price | The real-time electricity market with the integration of renewable energies
and electric vehicles has been receiving significant attention recently. So
far most of the literature addresses the optimal power flow (OPF) problem in
the real-time electricity market context by iterative methods. However, solving
OPF problems in real-time is challenging due to the high computational
complexity of the iterative methods. Motivated by this fact, in this paper, we
propose a Chebyshev Graph Convolutional Neural Network (ChebGCN) to improve
the efficiency of integrating low-carbon energy sources into power grids and to
address the scalability and adaptivity of existing end-to-end OPF solutions. The
proposed GCN method is capable of predicting the optimal energy market marginal
prices in real time. Numerical analysis is used to benchmark the results and
validate the improvement. | Adrian-Petru Surani, Rahul Sahetiya | 2023-01-22T01:50:10Z | http://arxiv.org/abs/2301.09038v1 | # Graph Convolutional Neural Networks for Optimal Power Flow Locational Marginal Price
###### Abstract
The real-time electricity market with the integration of renewable energies and electric vehicles has been receiving significant attention recently. So far most of the literature addresses the optimal power flow (OPF) problem in the real-time electricity market context by iterative methods. However, solving OPF problems in real time is challenging due to the high computational complexity of the iterative methods. Motivated by this fact, in this paper, we propose a Chebyshev Graph Convolutional Neural Network (ChebGCN) to improve the efficiency of integrating low-carbon energy sources into power grids and to address the scalability and adaptivity of existing end-to-end OPF solutions. The proposed GCN method is capable of predicting the optimal energy market marginal prices in real time. Numerical analysis is used to benchmark the results and validate the improvement.
## I Introduction
Electricity market pricing is one of the core tasks of operating power grids, as the real-time market determines the incremental adjustment for the day-ahead by solving the optimal power flow (OPF) problem [1]. The real-time OPF for market pricing also ensures high efficiency and reliability of grid operations, especially in the case of low-carbon energy source integration, including renewable energy generation and electric vehicle integration. As the amount of smart, flexible resources participating in distribution system operations keeps increasing, market-based approaches are needed for economic efficiency [2]. A key component to enable power distribution markets and provide economic incentives to market participants is the distribution locational marginal price (DLMP).
Typically, solving OPF problems by iterative methods incurs excessive computational complexity, limiting their applicability in large-scale power networks and real-time implementations. The locational marginal price (LMP) represents the marginal cost to serve one incremental unit of demand at a specific location in electric power networks [3, 4]. Typically the LMP is calculated by solving the direct current optimal power flow (DC OPF) problem. Due to high voltage volatility and the high power losses caused by the high R/X ratio in distribution systems, reactive power must be modeled. As the DC OPF approach is unable to capture power losses and reactive powers in distribution systems, it is unsuitable for calculating the DLMP. A different approach has been proposed to calculate DLMPs in [2, 5, 6], which uses the single-phase alternating current optimal power flow (AC OPF) problem for the balanced system. At the same time, as LMPs relate to the duality analysis for OPF, their dependence on grid topology has been recognized in [7, 8].
To improve on the high computational complexity of the accurate AC OPF problem, which is non-convex and non-linear, machine learning (ML) techniques have been proposed throughout the existing literature: identifying active constraints [9], finding a warm start for iterative OPF solutions [10], or addressing the feasibility issue [11]. These models require extensive off-line training of neural network (NN) models and, as they rely on end-to-end NNs, also incur a high model and computation complexity for large-scale power grids. An additional cost is the re-training of the models whenever the system inputs change as a result of frequently varying topologies.
To address these problems, we are motivated to utilize graph convolutional neural networks (GCN) to approximate the optimal marginal prices. The proposed method considers the power system measurements as low-pass graph signals and derives a suitable Graph Shift Operator (GSO) to design the GCN. The proposed method also designs regularization terms for the feasibility of power flow constraints.
The rest of the paper is organized as follows: The AC OPF formulation is introduced initially in Section II. In Section III, the proposed approach is discussed including various adaptations and optimizations such as convolution in Fourier domain to speed up computations. The case study results on a transmission system are given in Section IV, while Section V concludes the paper and discusses future research directions and exploratory questions.
## II Problem Formulation
The LMP at a certain location represents the incremental cost to supply an extra unit of load at this location. The typical approach is to use the DC OPF problem to determine the LMPs at the transmission level. The formulation of the considered AC OPF problem is:
\[\begin{array}{ll}\min\limits_{P_{G,k}}&\sum\limits_{k\in\mathcal{N}_{G}}C_{k}(P_{G,k})\\ \text{s.t.}&P_{G,i}-P_{L,i}=\Re\{V_{i}\cdot(I_{i})^{*}\},\;i\in\mathcal{N}\\ &Q_{G,i}-Q_{L,i}=\Im\{V_{i}\cdot(I_{i})^{*}\},\;i\in\mathcal{N}\\ &\underline{P}_{G,k}\leq P_{G,k}\leq\overline{P}_{G,k},\;k\in\mathcal{N}_{G}\\ &\underline{Q}_{G,k}\leq Q_{G,k}\leq\overline{Q}_{G,k},\;k\in\mathcal{N}_{G}\\ &\underline{V_{i}}\leq|V_{i}|\leq\overline{V_{i}},\;i\in\mathcal{N}\end{array} \tag{1}\]
where \(\mathcal{N}_{G}\) denotes the set of buses which have generators. \(\Re\{\cdot\}\) and \(\Im\{\cdot\}\) denote the real and the imaginary part of a complex number, respectively. \(P_{G,k}\) and \(Q_{G,k}\) represent the active and reactive power output of the generator at bus k. Similarly, \(\underline{P}_{G,k},\underline{Q}_{G,k}\) and \(\overline{P}_{G,k},\overline{Q}_{G,k}\) correspond to the lower bounds and upper bounds for the active and reactive power generation. \(|V_{i}|\) corresponds to the voltage magnitude at bus \(i\), and \(\underline{V_{i}},\overline{V_{i}}\) the associated lower and upper bounds.
For simplicity, let \(u\) denote the vector of all control variables and \(x\) the vector of all state variables, including the voltage magnitude and angle at every bus and phase. We define \(h(x,u)\leq 0\) to represent the inequality constraints in (1). The Lagrangian function of the formulated AC OPF problem can be written as:
\[\begin{array}{ll}&L(x,u,\lambda,\nu,\mu)=\sum\limits_{k\in\mathcal{N}_{G}}C_{k}(P_{G,k})\\ &-\sum_{i\in\mathcal{N}}\lambda_{i}\cdot\left(P_{G,i}-P_{L,i}-\Re\{V_{i}\cdot(I_{i})^{*}\}\right)\\ &-\sum_{i\in\mathcal{N}}\nu_{i}\cdot\left(Q_{G,i}-Q_{L,i}-\Im\{V_{i}\cdot(I_{i})^{*}\}\right)\\ &+\sum\limits_{m\in\mathcal{H}}\mu_{m}\cdot h_{m}(x,u)\end{array} \tag{2}\]
where \(\mathcal{N}\) represents the set of all buses in the system, \(\mathcal{P}_{i}\) the set of phases at bus \(i\), and \(\mathcal{H}\) the set of inequality constraints. \(\lambda_{i}\) and \(\nu_{i}\) are the Lagrange multipliers corresponding to the active power balance equation and the reactive power balance equation. \(\mu_{m}\) is the Lagrange multiplier associated with the inequality constraint \(h_{m}(x,u)\leq 0\).
Assuming the considered AC OPF problem has an optimal solution \((x^{*},u^{*})\), the DLMP at bus \(i\) can be calculated as:
\[DLMP_{i}=\left.\frac{\partial f}{\partial P_{L,i}}\right|_{x^{*},u^{*}}=\left. \frac{\partial L}{\partial P_{L,i}}\right|_{x^{*},u^{*}}=\lambda_{i} \tag{3}\]
Note that each bus has a marginal price \(\lambda_{i}\).
## III Proposed Approach
The GCNs are used to learn a mapping function from voltage measurements to optimal marginal prices.
### _Chebyshev Graph Neural Networks_
In spectral graph analysis, the graph Laplacian is an essential operator for undirected connected graphs, defined as \(\mathbf{L}=\mathbf{D}-\mathbf{A}\in\mathbb{R}^{N\times N}\), while the normalized Laplacian is \(\mathbf{L}=\mathbf{I}_{N}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\), where \(\mathbf{I}_{N}\) is the identity matrix, \(\mathbf{A}\) is the adjacency matrix, and \(\mathbf{D}\in\mathbb{R}^{N\times N}\) is the diagonal degree matrix with \(\mathbf{D}_{ii}=\sum_{j}\mathbf{A}_{ij}\). The Laplacian matrix is diagonalized by the Fourier basis \(\mathbf{U}=[u_{0},...,u_{N-1}]\in\mathbb{R}^{N\times N}\), where \(\{u_{l}\}_{l=0}^{N-1}\in\mathbb{R}^{N}\) are the orthogonal eigenvectors of \(\mathbf{L}\). The eigenvalue decomposition of the Laplacian matrix is \(\mathbf{L}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}\), where \(\mathbf{\Lambda}=diag([\lambda_{0},...,\lambda_{N-1}])\in\mathbb{R}^{N\times N}\) and \(\{\lambda_{l}\}_{l=0}^{N-1}\in\mathbb{R}^{N}\) is the ordered list of non-negative real eigenvalues associated with \(\mathbf{L}\), referred to as the graph's frequencies. The voltage phasor at time \(t\) is represented as the signal \(x=\left[|\mathbf{V}|,\mathbf{\theta}\right]\in\mathbb{R}^{2N}\), and the graph Fourier transform of the signal \(x\in\mathbb{R}^{2N}\) is \(\hat{x}=\mathbf{U}^{T}x\in\mathbb{R}^{2N}\), with inverse Fourier transform \(x=\mathbf{U}\hat{x}\). In the Fourier domain, the convolution on graph \(\mathcal{G}\) is \(x\ast_{\mathcal{G}}y=\mathbf{U}\left((\mathbf{U}^{T}x)\odot(\mathbf{U}^{T}y)\right)\), where \(\odot\) represents the element-wise product. Filtering the signal \(x\) by \(g_{\theta}\):
\[y=g_{\theta}(\mathbf{L})x=g_{\theta}(\mathbf{U}\Lambda\mathbf{U}^{T})x=\mathbf{U}g_{\theta}( \Lambda)\mathbf{U}^{T}x \tag{4}\]
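A brief sketch (not from the paper) of these operators, computing the Laplacians and the graph Fourier transform used in (4); the zero-degree guard is our numerical assumption:

```python
import numpy as np

def graph_laplacians(A):
    """Return the combinatorial Laplacian L = D - A and the normalized
    Laplacian I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))  # guard zero degrees
    L_norm = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return L, L_norm

def graph_fourier(L, x):
    """Eigendecompose L = U diag(lam) U^T (eigh, since L is symmetric) and
    return the spectrum together with the transformed signal U^T x."""
    lam, U = np.linalg.eigh(L)
    return lam, U, U.T @ x
```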
The ChebNet model uses mainly Chebyshev polynomials to replace the convolution kernel in the spectral domain to represent the filter, as defined in 5, which is the truncated expansion of order \(K-1\):
\[g_{\theta}(\Lambda)=\sum\limits_{k=0}^{K-1}\theta_{k}T_{k}(\tilde{\Lambda}) \tag{5}\]
In (5), the parameter \(\theta\in\mathbb{R}^{K}\) is the vector of Chebyshev coefficients. \(T_{k}(\tilde{\mathbf{\Lambda}})\in\mathbb{R}^{N\times N}\) is the Chebyshev polynomial of order \(k\) evaluated at \(\tilde{\mathbf{\Lambda}}=2\mathbf{\Lambda}/\lambda_{max}-\mathbf{I}_{N}\), where \(\tilde{\Lambda}_{ii}\in[-1,1]\) and \(\lambda_{max}\) is the maximum eigenvalue.
The calculation of the Chebyshev polynomial, \(T_{k}(x)\) can be obtained through the following recursive equation:
\[\left\{\begin{array}{ll}T_{0}(x)=1\\ T_{1}(x)=x\\ T_{k+1}(x)=2xT_{k}(x)-T_{k-1}(x)\end{array}\right. \tag{6}\]
Therefore, the convolution operation is expressed as:
\[x\ast_{\mathcal{G}}g_{\theta}=\mathbf{U}\left(\sum\limits_{k=0}^{K-1}\theta_{k}T_{k}(\tilde{\mathbf{\Lambda}})\right)\mathbf{U}^{T}x \tag{7}\]
Similar to \(\tilde{\mathbf{\Lambda}}\), the scaled Laplacian \(\tilde{\mathbf{L}}\) and the polynomials \(T_{k}(\tilde{\mathbf{L}})\) can be defined as:
\[\tilde{\mathbf{L}}=2\mathbf{L}/\lambda_{max}-\mathbf{I}_{N} \tag{8}\]
and
\[\begin{array}{ll}T_{k}(\tilde{\mathbf{L}})&=T_{k}(2\mathbf{L}/\lambda_{max}-\mathbf{I}_{N})\\ &=T_{k}(2\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}/\lambda_{max}-\mathbf{U}\mathbf{I}_{N}\mathbf{U}^{T})\\ &=T_{k}(\mathbf{U}(2\mathbf{\Lambda}/\lambda_{max}-\mathbf{I}_{N})\mathbf{U}^{T})\\ &=\mathbf{U}T_{k}(\tilde{\mathbf{\Lambda}})\mathbf{U}^{T}\end{array} \tag{9}\]
By expanding the filter in the Fourier domain and then applying the inverse Fourier transform, we can avoid extra calculations and simplify the equation further:
\[\begin{array}{ll}x\ast_{\mathcal{G}}g_{\theta}&=\mathbf{U}\left(\sum\limits_{k=0}^{K-1}\theta_{k}T_{k}(\tilde{\mathbf{\Lambda}})\right)\mathbf{U}^{T}x\\ &=\sum\limits_{k=0}^{K-1}\theta_{k}\left(\mathbf{U}T_{k}(\tilde{\mathbf{\Lambda}})\mathbf{U}^{T}\right)x\\ &=\sum\limits_{k=0}^{K-1}\theta_{k}T_{k}\left(\tilde{\mathbf{L}}\right)x\end{array} \tag{10}\]
As shown above in (10), the calculation does not require the eigenvector decomposition, which improves the computational complexity. Given a sample \(x\), we calculate the output feature map, denoted as \(y\), using \(\theta_{i,j}\in\mathbb{R}^{K}\) as the Chebyshev coefficients.
\[y=\sum_{k=1}^{K}g_{\theta_{k}}(\mathbf{L})x \tag{11}\]
where the output can be multiple channels.
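The recursion in (6) is what makes (10) cheap in practice: the filter can be applied with \(K-1\) repeated multiplications by \(\tilde{\mathbf{L}}\) and no eigendecomposition. A minimal single-channel sketch (our illustration, not the paper's implementation):

```python
import numpy as np

def cheb_filter(L, x, theta, lam_max=None):
    """Apply y = sum_k theta_k T_k(L_tilde) x via the three-term Chebyshev
    recursion. theta is the length-K coefficient vector."""
    n = L.shape[0]
    if lam_max is None:
        lam_max = np.linalg.eigvalsh(L).max()   # or an upper bound, e.g. 2 for L_norm
    L_tilde = 2.0 * L / lam_max - np.eye(n)     # rescale spectrum into [-1, 1]
    Tx_prev, Tx = x, L_tilde @ x                # T_0(L~) x and T_1(L~) x
    y = theta[0] * Tx_prev
    if len(theta) > 1:
        y = y + theta[1] * Tx
    for k in range(2, len(theta)):
        Tx_next = 2.0 * (L_tilde @ Tx) - Tx_prev   # T_{k+1} = 2 L~ T_k - T_{k-1}
        y = y + theta[k] * Tx_next
        Tx_prev, Tx = Tx, Tx_next
    return y
```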
### _GCN for Marginal Price Prediction and Forecasting_
Inspired by the graph signal \(x\) in (10), we propose to utilize a Chebyshev GCN to predict the marginal prices, i.e., \(\lambda_{i}\) of (3), instead of solving a nonconvex optimization problem. In our problem, the input graph signals are the voltage magnitudes and voltage angles. Our graph shift operator \(\mathbf{L}\) is the absolute value of the system matrix \(\mathbf{Y}\), i.e., \(\mathbf{L}=|\mathbf{Y}|\). The training and testing process can be summarized as follows.
1. Utilize the Interior Point Method (IPM) to solve the Lagrangian problem in (2) to obtain the optimal \(\lambda_{i}^{*},i\in\mathcal{N}\).
2. Train the graph neural networks by regarding \(|V_{i}|,i\in\mathcal{N}\) and \(\theta_{i},i\in\mathcal{N}\) as inputs, and \(\lambda_{i},i\in\mathcal{N}\) as the outputs (a sketch of this step follows below).
3. After training the GCN, we can predict the unseen \(\lambda_{i},i\in\mathcal{N}\) by feeding in the newly arrived measurements \(|V_{i}|,i\in\mathcal{N}\) and \(\theta_{i},i\in\mathcal{N}\).
This framework can also be implemented for the forecasting problem if the power demands are time-series measurements. The only difference is in Step 2, i.e., regarding \(|V_{i}(t)|,i\in\mathcal{N}\) and \(\theta_{i}(t),i\in\mathcal{N}\) as inputs, and the future \(\lambda_{i}(t+1),i\in\mathcal{N}\) as the outputs.
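A minimal PyTorch sketch of step 2, assuming a Chebyshev GCN `model` mapping per-bus voltage signals to per-bus prices; the optimizer, loss, tensor shapes, and hyperparameters are our assumptions, not details given in the paper.

```python
import torch
import torch.nn as nn

def train_price_predictor(model, V_mag, V_ang, lam_star, epochs=200, lr=1e-3):
    """V_mag, V_ang: (num_samples, N) voltage magnitudes and angles;
    lam_star: (num_samples, N) optimal marginal prices from the IPM solver."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    inputs = torch.stack([V_mag, V_ang], dim=-1)   # (samples, N, 2) graph signals
    for _ in range(epochs):
        opt.zero_grad()
        pred = model(inputs)                       # (samples, N) predicted lambda_i
        loss = loss_fn(pred, lam_star)
        loss.backward()
        opt.step()
    return model
```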
## IV Analysis and numerical results
We consider the IEEE 118-bus standard test case with real-world renewable energy and EV charging power profiles for analysis. The 118-bus system is used to represent real-world demand, and for each demand vector the real and imaginary parts are separated.
### _System Setting_
The code was run on a machine powered by the Apple M1 chip, using 16 GB RAM. Most of the computations were run under an upper bound of 2 hours. The Python environment was built using the Conda package system and Python version 3.10.8. Pytorch 1.13.0 was used to build the GNN models and to subsequently benchmark them. We consider two benchmarks, i.e., fully connected neural networks (FCNN) and 1st-order approximated graph neural network [12].
### _Prediction of Marginal Prices_
In order to predict marginal prices, three approaches were implemented based on PyTorch models: Fully-Connected Neural Networks (FCNNs), Graph Neural Networks (GNNs), and the Chebyshev GCN. Figure 1 compares the performance of the three methods and the improvements across the stages. In particular, the Chebyshev GCN predicts the optimal marginal prices with an MSE of \(4.5717e^{-5}\), while the GNN attains 0.0016 and the FCNN \(2.3304e^{-4}\). The Chebyshev GCN performance is significantly better in both prediction and forecasting. We also illustrate the performance of FCNN, GNN, and Chebyshev GCN in Figs. 2, 3 and 4, which show that the Chebyshev GCN can predict the ground-truth marginal prices with very high accuracy.
The forecasting results show that ChebyGCN performs slightly better than FCNN. This is because forecasting is a time-series problem, while ChebyGCN does not consider the temporal correlations. We also illustrate three examples of FCNN, GNN and Chebyshev GCN for forecasting marginal prices in Figs. 6-8. The Chebyshev GCN still produces a promising result that closely approximates the ground-truth marginal prices.
obtained by considering time-series data, which would require Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM).
|
2301.10569 | Spatio-Temporal Graph Neural Networks: A Survey | Graph Neural Networks have gained huge interest in the past few years. These
powerful algorithms expanded deep learning models to non-Euclidean space and
were able to achieve state of art performance in various applications including
recommender systems and social networks. However, this performance is based on
static graph structures assumption which limits the Graph Neural Networks
performance when the data varies with time. Spatiotemporal Graph Neural
Networks are extension of Graph Neural Networks that takes the time factor into
account. Recently, various Spatiotemporal Graph Neural Network algorithms were
proposed and achieved superior performance compared to other deep learning
algorithms in several time dependent applications. This survey discusses
interesting topics related to Spatiotemporal Graph Neural Networks, including
algorithms, applications, and open challenges. | Zahraa Al Sahili, Mariette Awad | 2023-01-25T13:17:46Z | http://arxiv.org/abs/2301.10569v2 | # Spatio-Temporal Graph Neural Networks: A Survey
###### Abstract
Graph Neural Networks have gained huge interest in the past few years. These powerful algorithms expanded deep learning models to non-Euclidean space and were able to achieve state-of-the-art performance in various applications, including recommender systems and social networks. However, this performance rests on a static graph structure assumption, which limits Graph Neural Networks' performance when the data varies with time. Spatio-temporal Graph Neural Networks are an extension of Graph Neural Networks that take the time factor into account. Recently, various Spatio-temporal Graph Neural Network algorithms were proposed and achieved superior performance compared to other deep learning algorithms in several time-dependent applications. This survey discusses interesting topics related to Spatio-temporal Graph Neural Networks, including algorithms, applications, and open challenges.
graph neural networks, temporal, spatiotemporal graphs, time series.
## 1 Introduction
Graph Neural Networks (GNNs) are a class of deep learning models that are specifically designed to operate on graph-structured data. These models leverage the graph topology to learn meaningful representations of the nodes and edges of the graph. GNNs are an extension of traditional convolutional neural networks, and they have been shown to be effective in tasks such as graph classification, node classification, and link prediction. One of the key advantages of GNNs is their ability to maintain good performance even as the size of the underlying graph grows, thanks to the independence of the number of learnable parameters from the number of nodes in the graph.
GNNs have been extensively used in various domains, including recommendation systems, drug discovery and biology, and resource allocation in autonomous systems. However, these models are limited to static graph data, where the graph structure is fixed. In recent years, there has been an increased interest in time-varying graph data, which appears in various systems and carries valuable temporal information. Applications of time-varying graph data include multivariate time series data, social networks, and audio-visual systems.
To address this need, a new family of GNNs has emerged: spatio-temporal GNNs, which takes into account both the spatial and temporal dimensions of the data by learning temporal representations of the graph structure. In this survey, we provide a comprehensive review on the state-of-the-art spatio-temporal GNNs. We start by giving a brief overview of the
different types of spatio-temporal GNNs, and their underlying assumptions. We then delve into the specific algorithms used in spatio-temporal GNNs in greater detail, while also providing a useful taxonomy for grouping these models. We also provide an overview of the various applications of spatio-temporal GNNs, highlighting the key areas where these models have been used to achieve state-of-the-art results. Finally, we discuss open challenges and future research directions in the field.
In summary, this survey aims to provide a comprehensive and in-depth look at spatio-temporal GNNs, highlighting the current state of the field, the key challenges that still need to be addressed, and the exciting future possibilities for these models.
## 2 Background
Graph Neural Networks (GNNs) are a class of neural networks that operate on graph-structured data. They are designed to learn from and make predictions on graph-structured data, such as social networks, molecular structures, and transportation networks. GNNs can be broadly classified into two categories: spectral and spatial GNNs.
Spectral GNNs make use of the eigenvectors and eigenvalues of the graph Laplacian matrix to define the graph convolutional operation. A popular example of a spectral GNN was presented in [2], which utilizes the graph Laplacian to define a convolutional operation applied to the node features. On the other hand, Spatial GNNs use the graph structure to define the convolutional operation and typically involve updating the node representations based on the representations of its neighboring nodes. Examples of Spatial GNNs include Graph Attention Network (GAT)[3] and Graph Isomorphism Network (GIN)[4]. GAT uses attention mechanisms to weigh the importance of different neighbors when updating node representations, while GIN is an extension of GCN that uses a multi-layer perceptron to update node representations, rather than using the graph Laplacian.
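As a minimal illustration of the two families, the sketch below contrasts a spatial GIN update with a spectral GCN propagation rule. These are textbook formulations in dense-matrix form, written here as an assumption-laden sketch rather than a reproduction of any cited implementation.

```python
import torch

def gin_layer(H, A, mlp, eps=0.0):
    """One GIN update (spatial GNN): h_v <- MLP((1 + eps) h_v + sum of neighbors).

    H: (n, d) node features, A: (n, n) adjacency without self-loops,
    mlp: any callable mapping (n, d) -> (n, d').
    """
    return mlp((1.0 + eps) * H + A @ H)

def gcn_layer(H, A, W):
    """One GCN update (spectral GNN): H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + torch.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))     # symmetric normalization
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```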
Recently, there has been a significant advancement in GNNs with the introduction of Graph Transformer [5], which is an extension of the transformer architecture to operate on graph-structured data. The Graph Transformer uses self-attention mechanisms to update node representations and has been shown to achieve state-of-the-art results on several graph-based tasks.
In conclusion, GNNs are a powerful tool for working with graph-structured data and have demonstrated great potential in a wide range of tasks. The choice of GNN architecture depends on the specific task and the properties of the graph data. Spectral GNNs like GCN are better suited for tasks that involve node classification and link prediction, while spatial GNNs like GAT and GIN are more suitable for tasks that involve graph classification and node clustering. With the recent development of Graph Transformer, it has also shown great potential in graph-based tasks.
## 3 Algorithms
Spatio-temporal graph neural networks can be classified, from an algorithmic perspective, as spectral-based or spatial-based. Another classification criterion is the way the time component is introduced: either through a separate machine learning algorithm or by defining time within the graph structure itself.
### Hybrid Spatio-Temporal Graph Neural Networks
Hybrid spatio-temporal graph neural networks consist of two main components: a spatial component and a temporal component (Figure 1).
#### 3.1.1 Spatial Module
In hybrid Spatio-temporal graph neural networks, graph neural network algorithms are used to model the spatial dependencies in the data.
#### 3.1.1.1 Spectral Graph Neural Networks
Spectral GNNs are based on the spectral definition of the convolution operation. Early spatio-temporal GNNs relied heavily on this spectral definition. For example, Yu et al. used a Chebyshev GNN in the STGCN algorithm [6]. In addition, Cao et al. used spectral graph convolution to model the spatial domain in StemGNN [7]. Recently, Simeunovic et al. used spectral GCNs in both of their algorithms, GCLSTM and GCTransfo [8].
#### 3.1.1.2 Spatial Graph Neural Networks
With advances in spatial graph neural network research, various researchers used spatial GNNs to model the spatial domain in spatio-temporal GNNs. Chen et al. [9] used a recurrent graph neural network (RGNN) with skip connections to model spatial dependencies in traffic forecasting, whereas Wu et al. used graph convolutional neural networks (GCNs) with skip connections in their MTGNN algorithm [10]. Additionally, a GCN was used in the Structural RNN algorithm [11].
Graph attention networks (GAT) were used in [12, 13]. In A2GNN, Huang et al. used a GAT with an automatic graph learner to improve forecasting performance [12]. Moreover, Kan et al. used a GAT cascaded with a graph transformer and a hierarchical pooling mechanism in the HST-GNN implementation [13].
More advanced GNN modules were used for spatial modelling in [14, 15]. Oreshkin et al. used gated graph neural networks in the FG-GAGA algorithm [14]. In contrast, a graph isomorphism network was used by Kim et al. to model brain connectivity for brain graph representation [15].
#### 3.1.1.3 Graph Transformers
Two algorithms relied on graph transformers to model spatial dependencies: TransMOT and Forecaster [16, 17]. In addition, Kan et al. paired their GAT with a graph transformer in the HST-GNN architecture [13].
#### 3.1.2 Temporal Module
To model the time domain, various machine learning algorithms can be involved.
#### 3.1.2.1 1D-CNN
Yu et al. used a 1D-CNN to account for the time domain in the STGCN algorithm [6]. Moreover, Wu et al. [10] used an inception layer in the MTGNN implementation. Also, Cao et al. used 1D-CNNs with GLU units for the temporal module [7].
#### 3.1.2.2 Recurrent Neural Networks
Recurrent neural networks and their variants, such as gated recurrent units (GRUs) and long short-term memory units (LSTMs), were widely adopted in hybrid spatio-temporal GNNs to model the time domain. Jain et al. used RNNs in the Structural RNN algorithm [11]. On the other hand, Oreshkin et al. used GRUs in the FG-GAGA GNN, while Chen et al. [9] used both GRUs and LSTMs in the MResGNN algorithm. In addition, [8, 12, 13, 18] all used LSTMs as time modules. In the HST-GNN algorithm, Kan et al. used two LSTMs with an attention mechanism within a wider encoder-decoder architecture [13].
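A minimal sketch of such a hybrid design (a GCN-style spatial module applied per time step, followed by a GRU over each node's sequence) is shown below. The class and dimension names are illustrative assumptions and do not reproduce any specific cited architecture.

```python
import torch
import torch.nn as nn

class HybridSTGNN(nn.Module):
    """Hybrid spatio-temporal sketch: per-step spatial mixing + temporal GRU."""

    def __init__(self, in_dim, hid_dim, horizon=1):
        super().__init__()
        self.spatial = nn.Linear(in_dim, hid_dim)   # weights of one GCN layer
        self.temporal = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, horizon)

    def forward(self, x, A_hat):
        # x: (batch, T, n, in_dim); A_hat: (n, n) normalized adjacency.
        z = torch.relu(A_hat @ self.spatial(x))         # spatial mixing per step
        b, T, n, d = z.shape
        z = z.permute(0, 2, 1, 3).reshape(b * n, T, d)  # one sequence per node
        out, _ = self.temporal(z)                       # temporal module
        y = self.head(out[:, -1])                       # forecast from last state
        return y.view(b, n, -1)
```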
#### 3.1.2.3 Transformers
Recently, much attention has been devoted to transformer architectures for modeling the time domain. Transformers were used in TransMOT, Forecaster, STAGIN, and GCTransfo [16, 17, 15, 8], respectively.
### Solo-Graph Neural Networks
Another way to model time in spatio-temporal graph neural networks is to define the temporal frame within the GNN itself. Multiple approaches have been proposed, including: defining time as an edge, inputting time as a signal to the GNN, modeling time as a subgraph, and sandwiching other machine learning architectures inside the GNN (Figure 2).
Figure 1: Hybrid based Spatio-Temporal Graph Neural Networks
#### 3.2.1 Time as Edge
Kapoor et al. [19] used a spatial GCN with skip connections to forecast COVID-19 cases. In this algorithm, time is defined as an edge and locations as graph nodes. Time was also defined as an edge in the USTGCN algorithm, which modifies the spatial adjacency matrix into a space-time adjacency matrix [20].
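The sketch below illustrates one way a space-time adjacency can be assembled when time is modeled as an edge. The causal, lower-block-triangular layout is an assumption inspired by the USTGCN description, not its exact construction.

```python
import numpy as np

def space_time_adjacency(A, T):
    """Build a (T*n, T*n) space-time adjacency from an (n, n) spatial one.

    Block (t1, t2) connects node copies at step t2 into step t1. Here we keep
    spatial edges within a step (A), and add spatial plus temporal self-edges
    (A + I) from every earlier step, i.e., causal connections only (t2 <= t1).
    """
    n = A.shape[0]
    I = np.eye(n)
    ST = np.zeros((T * n, T * n))
    for t1 in range(T):
        for t2 in range(t1 + 1):
            block = A if t2 == t1 else A + I
            ST[t1*n:(t1+1)*n, t2*n:(t2+1)*n] = block
    return ST
```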
#### 3.2.2 Time as Signal
Time as an input signal was used in purely GNN-based spatio-temporal GNNs. Zhang et al. used temporal hierarchy modeling to input time to a GAT [21]. The algorithm also included graph trimming and convolution diffusion to improve performance. Moreover, Shen et al. used a gated dilated causal block for the temporal input [22]. The output of this block was fed to a dynamic GCN in parallel with the output of a similar block for the spatial domain. Time was also input as a signal in the CausalGNN algorithm [23], which is based on a dynamic graph neural network with an attention mechanism and a causal module.
#### 3.2.3 Time as Subgraph
Li et al. modelled time as a subgraph within a graph isomorphism network (GIN) [24]. Moreover, Shao et al. used a temporal similarity graph to account for the temporal domain; it was added to other spatial graphs to form the multigraph set that constitutes the ASTGCN framework [25].
#### 3.2.4 Time Using Sandwiching
Karimi et al. used two 1D-CNNs to model time. In this architecture, the 1D-CNNs were sandwiched inside the GCN architecture as submodules [11].
#### 3.2.5 Time as Filter
In the Space-Time Graph Neural Network, both time and space are introduced as multivariate integral Lipschitz filters inside the GCN [26].
Fig. 2: Time Modelling in GNN only algorithm
| Author | Name | Type | Basis | Spatial Module | Time Module |
|---|---|---|---|---|---|
| Yu et al. | STGCN | Hybrid | Spectral | Chebyshev GNN | 1D-CNN |
| Nicolicioiu et al. | RSTGCN | Hybrid | Spatial | Custom (3D CNN) | LSTM |
| Chen et al. | MResGNN | Hybrid | Spatial | RGNN | GRU & LSTM |
| Kapoor et al. | - | GNN only | Spatial | GCN | Time added as an edge |
| Wu et al. | MTGNN | Hybrid | Spatial | GCN | Inception layer |
| Li et al. | Unified GNN | GNN only | Spatial | GIN | Time added as a subgraph module |
| Cao et al. | StemGNN | Hybrid | Spectral | Spectral GCN | 1D-CNN |
| Oreshkin et al. | FG-GAGA | Hybrid | Spatial | GGCN | GRU |
| Jain et al. | Structural RNN | Hybrid | Spatial | GCN | RNN |
| Karimi et al. | St-GNN | GNN only | Spatial | GCN | 1D-CNNs sandwiched in the GNN |
| Chu et al. | TransMOT | Hybrid | Spatial | Graph transformer | Transformer |
| Kim et al. | Forecaster | Hybrid | Spatial | Graph transformer | Transformer |
| Kim et al. | STAGIN | Hybrid | Spatial | GIN | Transformer |
| Zhang et al. | ST-GDN | GNN only | Spatial | GAT | Time as input via temporal hierarchy modeling |
| Shao et al. | ASTGCN | GNN only | Spatial | Multigraph set | Time as temporal similarity graph |
| Huang et al. | A2GNN | Hybrid | Spatial | GAT | LSTM |
| Simeunovic et al. | GCLSTM | Hybrid | Spectral | Spectral GCN | LSTM |
| Simeunovic et al. | GCTransfo | Hybrid | Spectral | Spectral GCN | Transformer |
| Kan et al. | HST-GNN | Hybrid | Spatial | Graph transformer & GAT | LSTM |
| Hadou et al. | Space-Time GNN | GNN only | Spatial | GCN | Time as a filter |
| Shen et al. | T2GNN | GNN only | Spatial | Dynamic GCN | Time as input signal |
| Wang et al. | CausalGNN | GNN only | Spatial | GAT | Time as a signal |
| Roy et al. | USTGCN | GNN only | Spatial | GCN | Time as an edge |

_Table I. STGNN Algorithms_
## 4 Applications
### Multivariate Time Series Forecasting
Motivated by the power of GNNs in handling relational dependencies [10], spatio-temporal GNNs have been widely applied to multivariate time series forecasting. Applications include traffic forecasting, COVID forecasting, PV power output, RSU communication, and seismic applications.
#### 4.1.1 Traffic
Transportation is a very important factor in every person's life [6]. Based on a study conducted in 2015, U.S. drivers spend a daily average of 48 minutes behind the wheel [27]. Thus, accurate real-time forecasting of traffic conditions is of paramount importance for road users, the private sector, and governments. However, traditional machine learning forecasting systems fail to satisfy accuracy requirements due to the high nonlinearity and complexity of traffic flow [6]. In contrast, owing to the power of GNNs in handling nonlinearities, spatio-temporal graph neural networks have been widely applied to traffic forecasting, for both long-term and short-term predictions [6, 7, 9, 12, 14, 17, 20, 21, 22, 24].
#### 4.1.2 Pandemic Forecasting
In a state of pandemic, the ability to accurately forecast caseload is extremely important at both the country and the individual level [19]. Whereas conventional algorithms treat pandemic forecasting as a closed loop based on previous cases, spatial dependencies between neighboring regions also affect how pandemics spread; spatio-temporal graph neural networks were therefore used to account for both space and time. Several spatio-temporal graph neural network algorithms have been proposed and found to achieve state-of-the-art COVID forecasting in the United States, United Kingdom, Germany, and worldwide [7, 10, 19, 23].
#### 4.1.3 PV
Due to the rapid increase in the installation of commercial PV power plants, the operation and planning of reliably performing PV systems is a crucial challenge [28]. Ensuring reliable performance includes monitoring the slow loss of electricity output and effective planning based on the PV power output. This reliability can be achieved by accurate power forecasting. Based on the ability of GNNs to capture spatial and temporal dependencies, spatio-temporal graph neural networks have been widely adopted to forecast PV power [7, 8, 11] and were able to achieve superior performance over other forecasting algorithms.
#### 4.1.4 RSU Communication
As a special type of base station, Road Side Units (RSUs) can be deployed at low cost and effectively alleviate the communication burden of regional Vehicular Ad-hoc Networks (VANETs) [29]. Unfortunately, due to limited energy storage and peak-hour communication demands in VANETs, it is crucial for RSUs to adjust their participation in communication according to the requirements and to allocate energy reasonably to balance the workload. Zheng et al. proposed a spatio-temporal graph neural network algorithm that forecasts the RSU network load from the historical information around each RSU and the topological relationships between RSUs [29].
#### 4.1.5 Seismic
Bloemheuvel et al. used spatio-temporal graph neural networks to predict earthquakes. The proposed approach achieved superior performance over traditional machine learning algorithms.
### Human Object Interaction
Learning in the space-time domain remains a very challenging problem in machine learning and computer vision [18]. The main challenge is how to model interactions between objects and higher-level concepts within the large spatio-temporal context [18]. In such a difficult learning task, it is critical to efficiently model the spatial relationships, the local appearance, and the complex interactions and changes that take place over time. Nicolicioiu et al. introduced a spatio-temporal graph neural network model, recurrent in space and time, suitable for capturing both the local appearance and the complex higher-level interactions of different entities and objects within the changing world scene [18].
### Dynamic graph representation
Temporal graph representation learning is considered a very important aspect of graph machine learning [15, 31]. Because existing methods rely on discrete snapshots of the temporal graph and are thus limited in capturing powerful representations, Wen et al. proposed a dynamic graph representation learning method using spatio-temporal graph neural networks [31]. Moreover, Kim et al. used spatio-temporal GNNs to dynamically represent brain graphs [15].
### Multiple Object Tracking
In addition, tracking multiple objects in videos heavily depends on modeling the spatio-temporal interactions between objects [16]. Chu et al. proposed a spatio-temporal graph neural network algorithm that models spatial and temporal interactions among the objects [16].
### Sign Language Translation
Sign languages, which engage visual-manual modalities to convey meanings, are the primary communication tools for the deaf and hard-of-hearing community [13]. To reduce the communication gap between spoken language and sign language users, machine learning is involved. Traditionally, neural machine translation has been heavily adopted, while more advanced methods are needed to capture the spatial properties of sign languages. Kan et al. presented a spatio-temporal graph neural network-based translation system that is powerful in capturing the spatial and temporal structures of sign language, leading to state-of-the-art performance compared to traditional neural machine translation methods [13].
### Technology growth ranking
Understanding the growth rate of technologies is key to business strategy in the technology sector [32]. In addition, predicting the growth rate of technologies and their relations to each other informs business decision-making in terms of product definition, marketing strategies, and research and development [32]. Cummings et al. proposed a methodology to predict technology growth ranking from social networks using spatio-temporal graph neural networks [32].
### Knowledge Graphs and Social Networks
Real-world graphs such as social networks and knowledge graphs are dynamic. For example, in a social network, new users join over time and users interact with each other through messages and reactions to posts [33]. In addition, new events appear over time in knowledge graphs. To account for the evolving dynamic properties of graphs, [33] introduced a temporal graph neural network that can handle billions of nodes and edges and can jointly learn the temporal, structural, and contextual relationships on dynamic graphs.
### Audio Visuals and Emotion Perception
Affective dimension prediction from multi-modal data is becoming an increasingly challenging and important research area [33]. For example, discriminative features from multiple modalities are critical to accurately recognize emotional states. Motivated by their spatial and temporal power, Chen et al. investigated spatio-temporal graph neural networks on audio-visual data [33]. The framework achieved superior performance compared to traditional deep learning frameworks when experimented on emotion recognition applications. In addition, Bhattacharya et al. proposed a spatio-temporal graph neural network that leverages emotion perception [34].
## 5 Open Challenges
### Automation
Automation in machine learning is an important research topic. AutoML is a promising direction for improving algorithm performance and scaling up model training. With techniques already explored for neural networks, expanding this research area to spatio-temporal graph neural networks is a promising direction.
### Benchmarking
Spatio-temporal algorithms lack benchmark datasets. Every proposed spatio-temporal graph neural network algorithm is trained on a custom dataset and not compared against comparable models. This might be caused by the lack of benchmarks, which makes it time-consuming to retrain all comparison models. Presenting a benchmark for spatio-temporal data is very important for understanding the algorithms and comparing their performance.
### Augmentation
Neural networks heavily rely on large data. As an extension of neural networks, spatio-temporal graph neural networks need abundant data for superior performance. Wang et al. attempted to augment temporal graphs using a MeTA (Memory Tower Augmentation) module that provides adaptively augmented inputs for every prediction [35]. Nevertheless, augmenting spatio-temporal data is still considered a very challenging task, and advances in it can lead to huge improvements in this research area.
### Privacy/Federated Learning
Privacy of users is an ethical concern. Federated learning methods are the leading approach to privacy-preserving algorithms. With only a single federated learning research attempt so far, more approaches are critically needed in the spatio-temporal graph neural network area.
### Pretraining and Transfer Learning
Neural networks rely on abundant data to perform efficiently. In scarce-data settings, transfer learning is considered the most affordable approach to achieve high performance. In spatio-temporal graph neural networks, transfer learning is still a challenge. Lou et al. and Panagopoulos et al. proposed transfer learning frameworks for COVID forecasting and highway traffic modeling, both critical for intelligent transportation systems [36, 37]. Advances in this field include investigating the transferability of spatio-temporal graph neural networks, as well as proposing similarity metrics and methods that mitigate negative transfer.
### Acceleration
Training spatio-temporal graph neural networks is a challenge, especially on graphs with billions of nodes and edges such as social networks. Acceleration research can help achieve faster training and reduce the computational complexity of models. In [38], Zhou et al. attempted to accelerate spatio-temporal graph neural networks through TGL, a unified framework for large-scale offline training. The implementation included the design of a Temporal-CSR data structure and a parallel sampler to efficiently sample temporal neighbors and form training mini-batches. TGL achieved similar or better accuracy compared to other spatio-temporal GNNs with an average 13\(\times\) speedup. However, further advances that speed up training and decrease computational complexity remain an urgent research stream, especially as graph sizes keep increasing with the growth of data.
## 6 Conclusion
Graph Neural Networks have gained huge interest in the past few years. These powerful algorithms expanded deep learning models to non-Euclidean space. However, GNNs are limited by a static graph structure assumption, which limits their performance when the data varies with time. Spatio-temporal Graph Neural Networks are an extension of Graph Neural Networks that take the time factor into account. In this survey, we conduct a comprehensive overview of spatio-temporal graph neural networks. First, we provide a taxonomy that groups spatio-temporal graph neural networks into two categories based on the method by which the time variant is introduced. We also discuss a wide range of applications of spatio-temporal graph neural networks. Finally, we suggest future directions based on current open challenges in spatio-temporal GNNs.
|
2307.07370 | AIC-AB NET: A Neural Network for Image Captioning with Spatial Attention
and Text Attributes | Image captioning is a significant field across computer vision and natural
language processing. We propose and present AIC-AB NET, a novel
Attribute-Information-Combined Attention-Based Network that combines spatial
attention architecture and text attributes in an encoder-decoder. For caption
generation, adaptive spatial attention determines which image region best
represents the image and whether to attend to the visual features or the visual
sentinel. Text attribute information is synchronously fed into the decoder to
help image recognition and reduce uncertainty. We have tested and evaluated our
AICAB NET on the MS COCO dataset and a new proposed Fashion dataset. The
Fashion dataset is employed as a benchmark of single-object images. The results
show the superior performance of the proposed model compared to the
state-of-the-art baseline and ablated models on both the images from MSCOCO and
our single-object images. Our AIC-AB NET outperforms the baseline adaptive
attention network by 0.017 (CIDEr score) on the MS COCO dataset and 0.095
(CIDEr score) on the Fashion dataset. | Guoyun Tu, Ying Liu, Vladimir Vlassov | 2023-07-14T14:25:26Z | http://arxiv.org/abs/2307.07370v1 | # AIC-AB Net: A Neural Network for Image Captioning with Spatial Attention and Text Attributes
###### Abstract
Image captioning is a significant field across computer vision and natural language processing. We propose and present AIC-AB Net, a novel Attribute-Information-Combined Attention-Based Network that combines spatial attention architecture and text attributes in an encoder-decoder. For caption generation, adaptive spatial attention determines which image region best represents the image and whether to attend to the visual features or the visual sentinel. Text attribute information is synchronously fed into the decoder to help image recognition and reduce uncertainty. We have tested and evaluated our AIC-AB Net on the MS COCO dataset and a new proposed Fashion dataset. The Fashion dataset is employed as a benchmark of single-object images. The results show the superior performance of the proposed model compared to the state-of-the-art baseline and ablated models on both the images from MSCOCO and our single-object images. Our AIC-AB Net outperforms the baseline adaptive attention network by **0.017** (CIDEr score) on the MS COCO dataset and **0.095** (CIDEr score) on the Fashion dataset.
image captioning, neural networks, spatial attention, text attributes
## I Introduction
The significant growth in web images has brought plenty of opportunities for computational understanding of images. Automatic image captioning is crucial for many applications, including image searching, categorizing, and indexing, and it has attracted attention from academia and industry. One can split the image captioning task into two parts: (1) image recognition to detect and recognize objects in an image, and (2) caption generation to summarize the extracted information and put it into text that humans understand.
Significant successes have been achieved in the problem of image captioning using Deep Learning (DL). Many works on image captioning have applied DL methods to images containing multiple objects and rich contextual information. As a result, various image captioning methods have been proposed, such as the visual space-based model [1], multimodal space-based model [2], dense captioning [3], whole scene-based model, encoder-decoder architecture-based model [4], compositional architecture-based model [5], attention-based model [6], semantic concept-based model [7], and stylized captions [8]. We address the following two problems: (1) generating captions for single-object images; and (2) combining semantic text attributes with adaptive attention.
The first blind spot of the previous studies is that they focused on general multi-object images, whereas generating captioning on single-object images is barely studied. Two features distinguish single-object images from general images. First, single-object images contain more small details, thus requiring a higher recognition resolution. Second, the generated descriptions include more adjectives and nouns. In this work, we apply DL models on a fashion dataset of 144,422 images from 24,649 products. This dataset is used as a benchmark of single-object images. Each image has only one fashion item, and its caption describes that item, including its category, color, texture, and other details.
Secondly, previous DL approaches either boost image captioning with semantic concept [5, 9] or make use of attention encoder-decoder framework [6, 10], i.e., two frameworks are used separately and cannot inform each other. We propose a novel attribute-image-combined attention-based neural network architecture (AIC-AB Net1) based on the adaptive attention network [10]. It combines the semantic concept-based architecture with the spatial attention-based architecture. In AIC-AB Net, the text attributes are fed into each step of the LSTM decoder as an additional input when generating the captions. The attributes are obtained by an auxiliary CNN classifier.
Footnote 1: [https://github.com/guoyuntu/Image-Captioning-On-General-Data-And-Fashion-Data](https://github.com/guoyuntu/Image-Captioning-On-General-Data-And-Fashion-Data)
The major contributions of our work are as follows.
1. We propose an Attribute-Image-Combined Attention-Based Network (AIC-AB Net) that combines the adaptive attention architecture and text attributes in an encoder-decoder framework and, as a consequence, improves the accuracy of image captioning compared to state-of-the-art alternatives.
2. We evaluate AIC-AB Net and several other DL models on a single-object dataset, the Fashion dataset, containing 144,422 images from 24,649 products.
## II Related work
Prior works, e.g., [11, 12, 4, 13], use DL encoder-decoder methods, attention-based and semantic concept-based DL models for automated image captioning.
**Encoder-Decoder for Image Captioning**. The existing caption generation methods include template-based image captioning [14], retrieval-based image captioning [15], and language model caption generation [16, 6, 17]. Most methods [11, 12, 18, 19] use Deep Learning. An encoder-decoder, a popular approach to tackling language tasks, such as machine translation, can be used for image captioning to encode visual information and decode it in a natural language. A network of this category extracts global image features from the hidden activations of a CNN and feeds them into an LSTM to generate a caption as a sequence of words; one word at each step depends on a context vector, the previous hidden state, and the previously generated words [16].
**Attention-based Networks**. Following the trends to use the encoder-decoder architecture on image captioning, methods based on attention mechanisms [10, 6, 20] have been increasingly popular as they provide computer vision algorithms with the ability to know where to look. Instead of considering the image as a whole scene, an attention-based network dynamically focuses on various parts of the input image while generating captions.
An adaptive attention network is an encoder-decoder-based approach for image captioning. The decoding stage is split into two parts. First, the Spatial Attention Network outputs a context vector \(c_{t}\) that depends on the feature map \(V\) extracted from the encoder and the hidden state \(h_{t}\) of the LSTM decoder. It can be considered the attention map. The second part is the Visual Sentinel, which the decoder can fall back on when it chooses not to attend to the image. This visual sentinel \(s_{t}\) depends on the input \(x_{t}\), the hidden state \(h_{t-1}\), and the memory cell \(m_{t}\). Then, the spatial context vector is modeled as \(\mathbf{c}_{t}=\sum_{i=1}^{k}\alpha_{ti}v_{ti}\). \(\mathbf{c}_{t}\) and \(h_{t}\) determine the conditional probability at each time step of the LSTM.
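A sketch of this computation is given below, following the published adaptive attention formulation. The projection matrices are assumed to be given, and shapes are simplified to a single decoding step; the names are illustrative.

```python
import torch

def adaptive_context(V, h_t, s_t, W_v, W_h, W_s, w):
    """Adaptive attention with a visual sentinel (cf. Lu et al., 2017).

    V:   (k, d) spatial image features; h_t: (d,) decoder hidden state;
    s_t: (d,) visual sentinel; W_v, W_h, W_s: (d, a); w: (a,) learned weights.
    Returns the adaptive context vector c_hat_t of shape (d,).
    """
    # Attention logits over the k spatial locations.
    z = torch.tanh(V @ W_v + h_t @ W_h)                 # (k, a)
    logits = z @ w                                      # (k,)
    # One extra logit for the sentinel.
    z_s = torch.tanh(s_t @ W_s + h_t @ W_h) @ w         # scalar
    alpha = torch.softmax(torch.cat([logits, z_s.view(1)]), dim=0)  # (k+1,)
    c_t = (alpha[:-1, None] * V).sum(dim=0)             # spatial context c_t
    beta = alpha[-1]                                    # sentinel gate in [0, 1]
    return beta * s_t + (1.0 - beta) * c_t              # adaptive context
```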
**Semantic concept-based Models**. The idea of semantic concept-based models is to extract a set of semantic concept proposals. These concepts, combined with visual features and hidden states, are used to generate the captions in the decoding stage. Karpathy et al. [7] proposed a model, in which dependency tree relations are applied in training to map the sentence segments with the image regions with a fixed window context. Wu et al. [13] proposed a network, including high-level semantic concepts explicitly. It adds an intermediate attribute prediction layer in an encoder-decoder framework to extract from images attributes used to generate semantically rich image captions. The proposed technique in this paper is partially inspired by [9], where Ting et al. suggested that the high-level attributes are more semantically rich and easily translated into understandable human sentences. Moreover, the best method is to feed attribute representations and visual features as a joint input to LSTM at each time step. The prior works are limited to using only one architecture, either semantic concept-based or spatial attention-based. We propose AIC-AB Net that combines both structures so that they can communicate with each other and, as a consequence, improve the accuracy of image captioning.
## III AIC-AB Net: Attribute-Image-Combined Attention-Based Network
We present our Attribute-Image-Combined Attention-Based Network (AIC-AB Net), a novel encoder-decoder neural framework for image captioning. AIC-AB Net is an end-to-end network that tackles image captioning and, at the same time, generates an attention map of the image. Fig. 1 shows the network architecture. We extract image features using a pre-trained ResNet-152 [21], which implements residual learning units to alleviate the degradation of deep neural networks. We freeze the first six layers and take the last convolutional layer as visual features. We believe the extracted features retain both object and interaction information from the images.
Formally, let us denote the whole dataset as \(\mathfrak{D}=\{(\mathbf{X}_{i},\mathbf{y}_{i})\}_{i=1}^{N}\), where \(\mathbf{X}_{i}\) denotes the \(i\)-th image and \(\mathbf{y}_{i}=(y_{1},y_{2},...,y_{L})\) denotes its caption label as a sequence of words. In an encoder-decoder framework, an LSTM network plays the role of the decoder and each conditional probability is modeled as:
\[\sum_{t=1}^{L}\log p(y_{t}|y_{1},y_{2},...y_{t-1},\mathbf{X})=f(\mathbf{h}_{t},\mathbf{c}_{t}) \tag{1}\]
where \(f\) is a nonlinear function that outputs the probability of \(y_{t}\). \(\mathbf{c}_{t}\) is the visual context vector at time step \(t\) extracted from image \(\mathbf{X}\). \(\mathbf{h}_{t}\) denotes the hidden state at \(t\). For LSTM, \(\mathbf{h}_{t}\) could be modeled as:
\[\mathbf{h}_{t}=LSTM(\mathbf{x}_{t},\mathbf{h}_{t-1},\mathbf{m}_{t-1}) \tag{2}\]
where \(\mathbf{x}_{t}\) is the input feature map, \(\mathbf{m}_{t-1}\) is the memory cell vector at \(t-1\).
### _Text Attribute Extractor_
The first step in adding text attributes into the LSTM decoder is to extract a set of words likely to appear in an image's description. These words may belong to parts of speech including nouns and adjectives. As suggested by [5], we build the vocabulary \(V\) using the 1000 most common words in the training captions. Given the vocabulary of attributes, the next step is to detect these words from images. We train the text attribute extractor using a CNN-based model. An image passes through the pre-trained VGG-16 model, and we take the Conv5 layer as the input feature map, which is fed into a 2-layer CNN followed by one fully connected layer. The probability \(p_{i}^{w}\) that image \(x_{i}\) contains word \(w\) is computed by a sigmoid layer:
\[p_{i}^{w}=\frac{1}{1+\exp(-(v_{w}^{\top}\phi(b_{i})+u_{w}))} \tag{3}\]
where \(\phi(b_{i})\) is the fully connected representation of image \(b_{i}\), and \(v_{w}\) and \(u_{w}\) are the weights and bias associated with word \(w\).
Due to the highly imbalanced ratio of positive labels (5 words per image) versus negative ones, the detector is trained with the class-weighted loss
\[\mathcal{L}_{i}=-\beta_{p}\,p(x_{i})\log(q(x_{i}))-\beta_{n}\,(1-p(x_{i}))\log(1-q(x_{i})) \tag{4}\]
where \(\beta_{p}\) and \(\beta_{n}\) are class weights that rebalance the loss given the scarcity of positive labels. Due to the very unbalanced labeling strategy, \(\beta_{p}=100\beta_{n}\).
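A sketch of this weighted loss is shown below; the numerical-stability epsilon and the mean reduction are implementation assumptions not specified in the text.

```python
import torch

def weighted_attribute_loss(q, p, beta_n=1.0):
    """Class-weighted binary cross-entropy for attribute detection (cf. Eq. 4).

    q: (B, |V|) predicted word probabilities in (0, 1);
    p: (B, |V|) binary attribute labels. beta_p = 100 * beta_n up-weights the
    positive term to compensate for the sparsity of positive labels.
    """
    beta_p = 100.0 * beta_n
    eps = 1e-7  # guards log(0)
    loss = -(beta_p * p * torch.log(q + eps)
             + beta_n * (1.0 - p) * torch.log(1.0 - q + eps))
    return loss.mean()
```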
### _Attribute-combined Model_
By injecting the high-level attributes into the adaptive attention framework, we obtain our AIC-AB Net (Fig. 1). In our model, the decoder is modified to additionally integrate visual information and high-level attributes. As Fig. 2 shows, the encoded image features are fed at the start of the LSTM and the text attributes are fed into each time step. Accordingly, given the attribute representation \(\mathbf{A}\), the calculation of the hidden state at each time step is converted from Eq. (2) to:
\[\mathbf{h}_{t}=LSTM(\mathbf{x}_{t},\mathbf{A},\mathbf{h}_{t-1},\mathbf{m}_{ t-1}) \tag{5}\]
This injects the text attributes into the original adaptive attention framework. The architecture of AIC-AB Net at one time step is illustrated in Fig. 3. The probability over the vocabulary at time step \(t\) is computed as:
\[\mathbf{p}_{t}=\mathrm{softmax}(\mathbf{W}_{p}(\hat{\mathbf{c}}_{t}+\mathbf{ h}_{t})) \tag{6}\]
where \(\mathbf{W}_{p}\) is weight parameter to be learnt.
## IV Evaluation Setup
To evaluate the proposed AIC-AB Net, we have conducted evaluation experiments using multi-object and single-object image datasets and have compared the performance of our AIC-AB Net with competing baselines.
### _Datasets and Preprocessing_
In our evaluation, we have used two image datasets, the MS COCO dataset [22] and a fashion image dataset.
**MS COCO Dataset**[22] contains 328K images with a total of 2.5M object instances. For the image captioning task, five caption descriptions labeled for each image are used as ground truth. We use the MS COCO dataset to compare the performance of AIC-AB Net with a state-of-the-art work [10].
**Fashion Dataset** is our single-object fashion image dataset scraped from the open websites of different fashion vendors, including Uniqlo, Toteme-Studio, Bestseller, Drykorn, Jlindeberg, Joseph-fashion, Marc-o-polo, Rodebjer, Tigerofsweden, and Vince. The raw data contains 1,511,916 images from 194,453 fashion products. Each sample includes a text label consisting of a variable number of sentences. Before the image captioning task, we conducted a data cleaning pass to remove images with invalid or unrelated captions. The cleaned dataset includes 144,422 images from 24,649 products. Each sample is labeled with only one caption description sentence. Several products are allowed to map to the same caption, and there are 10,091 unique captions in this dataset.
**Preprocessing**. We apply the same split for the COCO and Fashion datasets: 70% of the data for training, 15% for validation, and 15% for testing. We resize all images used in the experiments to \(224\times 224\) with bilinear interpolation. We also create two variations of the Fashion dataset. The one-vendor condition focuses on the largest product vendor, Bestseller. Further called the **Fashion Bestseller dataset**, this subset contains 89,756 images from 19,385 products. The number of unique captions is 8,448. The second condition employs
Fig. 1: Overview of AIC-AB Net. It extracts from the images the visual features and the attribute information. The former pass to the adaptive attention architecture [10], the latter are fed into the LSTM decoder at every time step.
all images in the dataset, which we further call the **Fashion 9 vendors dataset**. The reason is that the same vendor usually describes its products in a similar textual form and style. For all three datasets, COCO, Fashion Bestseller, and Fashion 9 vendors, we automatically generate five attributes for each image with the following method to train the attribute extractor. First, we build an attribute vocabulary comprising the 1000 most common words (nouns and adjectives) from the caption text. Then, we choose five words in the caption which occur in the vocabulary as attributes for each sample.
### _Hyperparameters; Baseline and Ablated Models_
**Text Attribute Extractor**. The convolution layer of the text attribute extractor has kernel size \(5\times 5\) and stride \(1\times 1\). The max pooling layer has kernel size \(8\times 8\) and stride \(0\). We train the text attribute extractor for 10 epochs.
**AIC-AB Net network**. In the decoding stage, words in captions are embedded into 255-dimensional vectors, and attribute words are embedded into 51-dimensional vectors using the default word embedding function provided by PyTorch. The hidden size is set to \(512\). The Adam optimizer with learning rate decay is employed to train the model. The parameters are set as: \(\alpha=0.8\), \(\beta=0.999\), \(learning\_rate=4e-4\). The decay of the learning rate is modeled as:
\[l_{r}^{E+1}=l_{r}^{E}*0.5^{\frac{E-20}{80}},E>20 \tag{7}\]
where \(l_{r}^{E}\) is the learning rate in epoch \(E\). We train the network for 50 epochs.
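The schedule in Eq. (7) can be reproduced with a simple helper and, e.g., plugged into PyTorch's `LambdaLR`; the function below is an illustrative sketch.

```python
import torch

def lr_for_epoch(base_lr=4e-4, epoch=0):
    """Learning rate after `epoch` updates of Eq. (7):
    constant until epoch 20, then lr_{E+1} = lr_E * 0.5 ** ((E - 20) / 80)."""
    lr = base_lr
    for E in range(epoch):
        if E > 20:
            lr *= 0.5 ** ((E - 20) / 80)
    return lr

# Usage sketch with a PyTorch scheduler (optimizer assumed to exist):
# scheduler = torch.optim.lr_scheduler.LambdaLR(
#     optimizer, lr_lambda=lambda e: lr_for_epoch(1.0, e))
```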
We compare AIC-AB Net to the following baseline and ablated models.
1. **Adaptive**: the state-of-the-art method, Adaptive [10]. Note that our model without the text attributes (i.e., being ablated) is the same as Adaptive.
2. **The Vanilla Encoder-Decoder (Vanilla-ED)**: an ablated AIC-AB Net, where we remove the adaptive attention architecture and attribute information while keeping a CNN-based encoder and LSTM-based decoder.
3. **The Text Attributes Only (Attr-Only)**: a second ablated AIC-AB Net, where we feed the attribute information into the LSTM decoder but remove the attention architecture.
## V Results and Discussion
We evaluate image captioning on the MS COCO dataset, the Fashion Bestseller dataset, and the Fashion 9 vendors dataset. Table I reports the evaluation results for these three datasets, where B-\(n\) is BLEU score that uses up to \(n\)-grams. In each column, higher is better.
We observe that our AIC-AB Net achieves the best performance compared to the baseline and both ablated versions. The ablation study reveals the complementarity of all constituents of AIC-AB Net. In terms of CIDEr score, the Vanilla Encoder-Decoder network underperforms AIC-AB Net by 0.153, 0.319, and 0.148; the Adaptive attention network by 0.017, 0.095, and 0.095; and the Attributes-combined model by 0.046, 0.201, and 0.142. Note that the adaptive attention architecture improves the performance more than the attribute information. These results indicate that the two components indeed complement each other, and their co-existence crucially benefits caption generation.
The three experimental conditions establish a comprehensive spectrum. The general image dataset, MS COCO, is the most complex and contains multi objects in each image, for which the CIDEr score is the lowest across the datasets. We only compare the CIDEr score because it is the only metric that keeps a stable scale when the number of captions varies. The fashion dataset contains one single object per image. The Fashion Bestseller dataset is simpler than the Fashion 9 vendors dataset. Although the effectiveness of our network is still obvious, the performance gap widens as the task gets more complicated.
Although the attributes-combined model obtains a similar CIDEr score on the COCO dataset compared with the adaptive attention model, it noticeably underperforms on the other scores. CIDEr focuses more on semantic correctness, while the others
Fig. 3: An illustration of AIC-AB Net generating the \(t\)-th word \(y_{t}\). The input is the encoded image features \(V\) and attribute information \(A\).
Fig. 2: A simplified diagram of our attribute-image-combined network (AIC-AB Net).
reflect grammatical correctness [23]. These results indicate that attribute information provides significant semantic information. However, to demonstrate these attributes in the generated captions, the model sacrifices grammatical correctness: on MS COCO, "Attr-Only" attains a CIDEr score similar to "Adaptive" but significantly poorer BLEU scores. Interestingly, this does not happen on the Fashion dataset. We argue that this is because of the small number of captions: the sentence patterns are easier to recognize on the Fashion dataset. However, this effect does not appear in our AIC-AB Net. This reveals that the attention architecture, especially the sentinel gate, corrects the bias introduced by the attributes. The two components indeed complement each other.
On the Fashion dataset, we observe that our model achieves better performance on the Fashion Bestseller dataset than on the Fashion 9 vendors dataset, with an improvement of 0.892 (CIDEr score). This observation is the opposite of the usual pattern in which increased data size improves an ML model's performance. The reason is the distinct styles and forms of captions from different vendors. The large gaps between captions from one vendor and another stem from the non-standardized labeling of the Fashion 9 vendors dataset.
### _Attention Distribution Analysis_
To better understand our model, we also visualize the image attention distributions \(\alpha\) for the generated caption. Using bilinear interpolation and pyramid expansion, we sample the attention map to the image size (\(224\times 224\)). Fig. 4 shows the generated captions and the image attention distribution for specific words in the caption. The first five cases are success cases, and the last case shows a failure example. We see that our model learns to pay attention to the specific region when generating different words in the caption, which corresponds strongly with human intuition. Note that on the failure case, although our model fails to focus on the region of the sleeves when generating "sleeves", it still successfully recognizes the position of the printed stripe.
Since the COCO dataset provides ground-truth object bounding boxes, it can be used to evaluate the performance of attention map generation. The spatial intersection over union (sIOU) score is used to measure localization accuracy. Given the word \(w_{t}\) and its corresponding attention map \(\alpha_{t}\), we first segment the regions of the image whose attention value is larger than a per-class threshold \(th\) (after the map is normalized to the range \([0,1]\)), where we set \(th=0.6\). Then we take the bounding box covering the largest connected component in this segmentation map as the predicted attention region. We report the sIOU between the predicted bounding box and the ground truth for the top 20 most frequent COCO object categories, as Fig. 5(a) shows. The average localization accuracy is 0.415 for "Adaptive" and 0.419 for our AIC-AB Net. This implies that the attribute information benefits attention map generation in the combined model. We also observe that our AIC-AB Net and its attention-only version have a similar trend. They both perform well on informative and large visual objects such as "cat", "train", "bed", and "bus", while they perform poorly on small objects such as "sink" and "clock". We argue that this is because our attention map is extracted from a \(7\times 7\) spatial map, which loses much resolution and detail. This defect is remarkably exposed when detecting small objects. This reason also explains the erroneous attention maps on the Fashion dataset, where the majority of words describe details and refer to small regions of the image.
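The localization procedure can be sketched as follows; the upsampling and connected-component labeling use SciPy for brevity, and the helper names are illustrative.

```python
import numpy as np

def attention_bbox(alpha, image_size=224, th=0.6):
    """Turn a 7x7 attention map into a predicted box: upsample bilinearly,
    normalize to [0, 1], threshold at th, and take the bounding box of the
    largest connected component."""
    from scipy.ndimage import zoom, label
    a = zoom(alpha, image_size / alpha.shape[0], order=1)  # bilinear upsample
    a = (a - a.min()) / (a.max() - a.min() + 1e-8)
    mask, n_comp = label(a > th)
    if n_comp == 0:
        return None
    sizes = np.bincount(mask.ravel())[1:]                  # component sizes
    ys, xs = np.where(mask == (np.argmax(sizes) + 1))
    return xs.min(), ys.min(), xs.max(), ys.max()

def siou(box_a, box_b):
    """Spatial intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0
```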
Since bounding box ground truth is missing in the Fashion dataset, we apply a statistical analysis of 5 typical words as quantitative analysis: _hood_, _cap_, _pants_, _dresses_, and _sleeves_. In common cases, _hood_ and _cap_ only appear in the upper part of an image, _pants_ and _dresses_ only in the lower part, and _sleeves_ only on the left and right sides. We assume these regions are their respective ground truths and apply the same approach as explained above to measure localization accuracy. Fig. 5(b) reports the results on the Fashion dataset. We observe that AIC-AB Net performs better on the first four words than on the word _sleeves_ and shows a similar trend to the adaptive attention model.
## VI Conclusion
This work has been motivated by the task of generating captions for single-object fashion images and inspired by the adaptive attention architecture [10] and semantic
TABLE I: Evaluation Scores of Image Captioning (bold marks the best per column)

(a) Scores of image captioning on the MS COCO dataset

| Model | B-1 | B-2 | B-3 | B-4 | METEOR | ROUGE-L | CIDEr |
|---|---|---|---|---|---|---|---|
| Adaptive | **0.743** | **0.572** | 0.424 | 0.313 | 0.265 | 0.546 | 1.088 |
| Vanilla-ED | 0.729 | 0.556 | 0.409 | 0.299 | 0.249 | 0.531 | 0.952 |
| Attr-Only | 0.671 | 0.495 | 0.366 | 0.276 | 0.255 | 0.535 | 1.059 |
| AIC-AB Net | 0.730 | 0.554 | **0.424** | **0.339** | **0.279** | **0.550** | **1.105** |

(b) Scores of image captioning on the Fashion Bestseller dataset

| Model | B-1 | B-2 | B-3 | B-4 | METEOR | ROUGE-L | CIDEr |
|---|---|---|---|---|---|---|---|
| Adaptive | 0.365 | 0.306 | 0.275 | 0.255 | 0.185 | 0.334 | 2.094 |
| Vanilla-ED | 0.345 | 0.284 | 0.251 | 0.231 | 0.173 | 0.316 | 1.870 |
| Attr-Only | 0.350 | 0.290 | 0.258 | 0.240 | 0.179 | 0.321 | 1.988 |
| AIC-AB Net | **0.385** | **0.316** | **0.289** | **0.280** | **0.190** | **0.349** | **2.189** |

(c) Scores of image captioning on the Fashion 9 vendors dataset

| Model | B-1 | B-2 | B-3 | B-4 | METEOR | ROUGE-L | CIDEr |
|---|---|---|---|---|---|---|---|
| Adaptive | 0.276 | 0.218 | 0.193 | 0.178 | 0.121 | 0.256 | 1.202 |
| Vanilla-ED | 0.264 | 0.216 | 0.179 | 0.165 | 0.115 | 0.238 | 1.149 |
| Attr-Only | 0.268 | 0.210 | 0.184 | 0.169 | 0.117 | 0.242 | 1.155 |
| AIC-AB Net | **0.290** | **0.231** | **0.202** | **0.191** | **0.125** | **0.268** | **1.297** |
concept [9]. Toward this end, we present Attribute-Image-Combined Attention-Based Network (AIC-AB Net). We have evaluated AIC-AB Net on the MS COCO and the Fashion datasets. Our experiments indicate that the ability to locate the relevant region of an image when generating different words and the combination with the attribute information is crucial for accurate caption generation.
Further research could explore two directions. First, based on the transfer learning results, we suggest creating a standardized labeling system for the Fashion dataset, which would benefit the consistency of the data and the robustness of models trained on it. Second, we argue that segmenting the images into more regions would improve performance since, in some cases, our model cannot attend to the accurate region when generating words.
Image captioning is a challenging and promising task for the Internet industry and computer vision. We believe this work represents a significant step in improving image captioning and breeds useful applications in other domains.
Fig. 4: Visualization of generated captions and image attention maps on the Fashion Bestseller dataset. The text is the captions generated by AIC-AB Net. Different colors denote a correspondence between masked regions and underlined words. The first 5 cases are success cases; the last case is a failure.
Fig. 5: Localization accuracy over generated captions. Adaptive is the baseline model (an ablated version of AIC-AB Net); AIC-AB Net is our model. |
2303.01590 | Technical report: Graph Neural Networks go Grammatical | This paper introduces a framework for formally establishing a connection
between a portion of an algebraic language and a Graph Neural Network (GNN).
The framework leverages Context-Free Grammars (CFG) to organize algebraic
operations into generative rules that can be translated into a GNN layer model.
As CFGs derived directly from a language tend to contain redundancies in their
rules and variables, we present a grammar reduction scheme. By applying this
strategy, we define a CFG that conforms to the third-order Weisfeiler-Lehman
(3-WL) test using MATLANG. From this 3-WL CFG, we derive a GNN model, named
G$^2$N$^2$, which is provably 3-WL compliant. Through various experiments, we
demonstrate the superior efficiency of G$^2$N$^2$ compared to other 3-WL GNNs
across numerous downstream tasks. Specifically, one experiment highlights the
benefits of grammar reduction within our framework. | Jason Piquenot, Aldo Moscatelli, Maxime Bérar, Pierre Héroux, Romain raveaux, Jean-Yves Ramel, Sébastien Adam | 2023-03-02T21:27:54Z | http://arxiv.org/abs/2303.01590v4 | # Technical report : Graph Neural Networks go Grammatical
###### Abstract
This paper proposes a new GNN design strategy. This strategy relies on Context-Free Grammars (CFG) generating the matrix language MATLANG. It enables us to ensure WL expressive power, substructure counting abilities, and spectral properties. Applying our strategy, we design the Grammatical Graph Neural Network G\({}^{2}\)N\({}^{2}\), a provably 3-WL GNN able to count, at edge level, cycles of length up to 6 and able to reach band-pass filters. A large number of experiments covering these properties corroborate the presented theoretical results.
## 1 Introduction
In the last few years, characterising the expressive power of Graph Neural Networks (GNNs) has become a major concern in order to theoretically rank the plethora of existing models and to design new provably powerful GNNs [1].
In this field, the Weisfeiler-Lehman hierarchy, based on the eponymous polynomial-time test [2], is the most common way to characterise architectures. An important result using this tool has been the proof that the architectures most used in practice, i.e. Message Passing Neural Networks (MPNNs) [3, 4], are at most as powerful as the first-order Weisfeiler-Lehman test (1-WL) [1, 5].
To go beyond the 1-WL limit, [1, 6] propose some generalisations of GNNs relying on the manipulation of higher-order tensors. In [7], a model called \(k\)-IGN is proven to be equivalent to \(k\)-WL. With \(k\) big enough, \(k\)-IGN is theoretically a universal approximator. However, when \(k\) increases, the computational and memory complexities grow exponentially. To overcome this limitation, the same paper describes Provably Powerful Graph Network (PPGN), that mimics the second-order Folklore Weisfeiler-Lehman test (2-FWL), known to be equivalent to 3-WL.
Even if the WL hierarchy has been useful to evaluate GNNs and to search for better models, most of the time 3-WL and 1-WL are only bounds of expressive power. A recent way to refine this WL hierarchy is to study the substructures GNNs can count in a graph [8, 9]. Following this research direction, [9] were able to classify some recent models that were categorised by [10] between 1-WL and 3-WL.
Both the WL hierarchy and the counting power only focus on the structure of the graph, neglecting the effect of models on the signal handled by the nodes. As shown in [11], the majority of spatially designed MPNNs work as low-pass filters while spectrally designed ones can reach band-pass filters. Yet, such band-pass filters can be useful for certain downstream tasks. Thus, the spectral ability of GNN models is a complementary way of measuring the expressive power of a model.
Taking into consideration the three characterization aspects mentioned above, this paper proposes a new GNN design strategy and a new model called Grammatical Graph Neural Network (G\({}^{2}\)N\({}^{2}\)) resulting from this strategy. Our strategy relies on the MATLANG language introduced in [12] and more particularly on the fragments of MATLANG called \(\operatorname{ML}\left(\mathcal{L}_{1}\right)\) and \(\operatorname{ML}\left(\mathcal{L}_{3}\right)\), shown to be as expressive as 1-WL and 3-WL in [13]. Starting from the operation sets \(\mathcal{L}_{1}\) and \(\mathcal{L}_{3}\), we propose to build Context-Free Grammars (CFG) able to
generate \(\mathrm{ML}\left(\mathcal{L}_{1}\right)\) and \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\). Since the number of possible operations in the corresponding CFGs is large, those CFGs are reduced, keeping the equivalence with 1-WL and 3-WL. From the variables of those reduced CFGs, GNN inputs can easily be deduced. Then, the rules of the CFGs determine the GNN layer update rules and the readout functions. A direct benefit of this design strategy is that GNN abilities can be deduced from the study of the language derived from the CFG. As an illustration, one of our subsequent contributions is the proposition of algebraic expressions in \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) which, applied to the adjacency matrix, are able to count cycles of length up to 6 and chordal cycles at edge level.
Our strategy provably ensures that our \(\mathrm{G}^{2}\mathrm{N}^{2}\) model is (i) **exactly as expressive as \(3\)-WL** since it inherits expressive power of \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\), (ii) **able to count important substructures both at node-, graph- and edge-levels**, surpassing the counting abilities of existing MPNNs and subgraph MPNNs and (iii) **able to approximate low-pass, high-pass and band-pass filters** in the spectral domain while most models and in particular PPGN cannot experimentally approximate band-pass filters.
These theoretical results are confirmed by numerous experiments on various well-known dedicated graph datasets. Moreover, we also show that the proposed reduced CFGs can describe many existing MPNNs and even PPGN.
The paper is structured as follows. In section 2, after introducing WL, MATLAB and CFGs, we describe our GNN design strategy and present the resulting \(\mathrm{G}^{2}\mathrm{N}^{2}\) architecture. We also show that our strategy generalizes both existing MPNNs and PPGN. Then, in section 3, we theoretically study the counting power and the spectral ability of \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\). Section 4 validates the theoretical analysis of section 3 through an extensive experimental evaluation of \(\mathrm{G}^{2}\mathrm{N}^{2}\).
## 2 From CFG to the 3-WL G\({}^{2}\)N\({}^{2}\)
Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be a graph with \(n\) nodes, \(\mathcal{V}=\left[\!\left[1\,,n\right]\!\right]\) and an arbitrary number of edges \(\mathcal{E}\subset\mathcal{V}\times\mathcal{V}\). The adjacency matrix \(A\in\{0,1\}^{n\times n}\) represents the connectivity of \(\mathcal{G}\), \(\mathrm{I}\in\{0,1\}^{n\times n}\) is the identity matrix and \(J\in\{0,1\}^{n\times n}\) denotes a matrix filled with ones except for the diagonal.
### MATLANG and the Weisfeiler-Lehman hierarchy
**Definition 2.1** (Matlang [12]): MATLANG is a matrix language with an allowed operation set \(\{+,\cdot,\odot,\ \mathbf{{}^{T}},\mathrm{Tr},\mathrm{diag},1,\times,f\}\) denoting respectively matrix addition, usual and element-wise multiplications, transpose and trace computations, diagonal matrix creation from a vector, column vector of 1 generation, scalar multiplication and element-wise custom function applied on scalars, vectors or matrices. Restricting the set of operations to a subset \(\mathcal{L}\), one can define a fragment of MATLANG \(\mathrm{ML}\left(\mathcal{L}\right)\). \(s(X)\in\mathbb{R}\) is a sentence in \(\mathrm{ML}\left(\mathcal{L}\right)\) if it consists of any possible consecutive operations in \(\mathcal{L}\) operating on a given matrix \(X\) and resulting in a scalar value. _As an example, \(s(X)=1^{\mathbf{T}}\left(X\odot\mathrm{diag}\left(1\right)\right)1\) is a sentence of \(\mathrm{ML}\left(\{\cdot,\ \mathbf{{}^{T}},1,\mathrm{diag},\odot\}\right)\) computing the trace of \(X\)._
Applying sentences on the adjacency matrix (i.e. setting \(X=A\)) allows linking the distinguishing power of the WL test and fragments of MATLANG. The equivalences between fragments from \(\mathcal{L}_{1}=\{\cdot,\ \mathbf{{}^{T}},1,\mathrm{diag}\}\) and \(\mathcal{L}_{3}=\{\cdot,\ \mathbf{{}^{T}},1,\mathrm{diag},\odot\}\) and respectively the 1-WL and 3-WL tests are shown in [13]. Two graphs are indistinguishable by the 1-WL and the 3-WL tests if and only if their adjacency matrices are indistinguishable by any sentences of respectively \(\mathrm{ML}\left(\mathcal{L}_{1}\right)\) and \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\).
Seen through the lens of MATLANG, a GNN tries to learn an appropriate sentence to solve a downstream task. Thus, to inherit the 1-WL (resp. 3-WL) language expressivity, we must ensure that the architecture can generate every sentence in \(\mathrm{ML}\left(\mathcal{L}_{1}\right)\) (resp. \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\)) for a given number of layers. However, directly integrating the whole set of elementary operations would result in an inefficient architecture: each elementary operation adds a large number of parameters per layer, and some operations can be obtained from combinations of others.
To overcome this issue, we propose to use Context-Free Grammar (CFG) as a tool to reduce the number of elementary operations while keeping the ability to produce all sentences in those languages. As far as we know, it is the first time that a CFG is used to build a GNN.
### Context-Free Grammar and Language
**Definition 2.2** (Context-Free Grammar):
A Context-Free Grammar (CFG) \(G\) is a \(4\)-tuple \(\left(V,\Sigma,R,S\right)\) with \(V\) a finite set of variables, \(\Sigma\) a finite set of terminal symbols, \(R\) a finite set of rules \(V\rightarrow\left(V\cup\Sigma\right)^{*}\), \(S\) a start variable. _Note that \(R\) completely describes a CFG with the convention that the start variable is placed on the top left._
**Definition 2.3** (Derivation):
Let \(G\) be a CFG. For \(u,v\in\left(V\cup\Sigma\right)^{*}\), define \(u\implies v\) if \(u\) can be transformed into \(v\) by applying one rule and \(u\ \overset{*}{\implies}\ v\) if \(u\) can be transformed into \(v\) by applying an arbitrary number of rules in \(G\).
**Definition 2.4** (Context-Free Language):
\(B\) is a Context-Free Language (CFL) if there exists a CFG \(G\) such that \(B=L(G):=\left\{w\ \middle|\ w\in\Sigma^{*}\text{ and }S\ \overset{*}{\implies}\ w\right\}\).
In a CFG, sentences are produced by applying rules to variables until only terminal symbols remain. The variables are the symbols on the left side of a rule and the terminal symbol is any of the other ones. Figure 1 shows how the CFG \(G_{\mathcal{L}_{1}}\) produces the sentence \(1^{\mathbf{T}}A\mathrm{diag}\left(A1\right)1\).
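As a concrete illustration (ours, not part of the original construction), the sentence above can be evaluated with a few lines of numpy; the toy graph below is an assumption chosen for the example.

```python
import numpy as np

# A small illustrative graph: a path on 4 nodes (chosen for the example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

one = np.ones(A.shape[0])                      # terminal symbol "1"
sentence = one.T @ A @ np.diag(A @ one) @ one  # 1^T A diag(A1) 1

# This sentence evaluates to the sum of squared degrees of the graph.
degrees = A @ one
assert np.isclose(sentence, np.sum(degrees ** 2))
print(sentence)  # 10.0 for the 4-node path (degrees 1, 2, 2, 1)
```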
The next two subsections focus on reducing \(\mathrm{ML}\left(\mathcal{L}_{1}\right)\) and \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) CFGs.
### From CFG to GNN
CFG objects and GNN elements are linked as follows. Variables correspond to layer inputs and outputs. Rules translate into update equations and readout functions. Terminal symbols relate to model inputs. In order to exploit the equivalence between WL and \(\mathrm{ML}\left(\mathcal{L}\right)\), the adjacency matrix \(A\) is a terminal symbol of the grammar.
The following proposition is necessary for the proof of Theorem 2.1.
**Proposition 2.1**:
_For any square matrix of size \(n^{2}\), \(\mathrm{ML}\left(\mathcal{L}_{1}\right)\) can only produce square matrices of size \(n^{2}\), row and column vectors of size \(n\) and scalars._
Proof.: Let \(M\) be a square matrix of size \(n^{2}\); we first show that \(\operatorname{ML}\left(\mathcal{L}_{1}\right)\) can produce square matrices of size \(n^{2}\), row and column vectors of size \(n\) and scalars.
Indeed, \(1(M)\) is a column vector of size \(n\), \(1(M)^{\mathbf{T}}\) is a row vector of size \(n\), \(1(M)^{\mathbf{T}}\cdot 1(M)\) is a scalar and \(\operatorname{diag}\left(1(M)\right)\) is a square matrix of size \(n^{2}\).
Then let \(N\in\mathbb{R}^{n\times n}\), \(v\in\mathbb{R}^{n}\), \(w\in\left(\mathbb{R}^{n}\right)^{*}\), and \(s\in\mathbb{R}\) be words \(\operatorname{ML}\left(\mathcal{L}_{1}\right)\) can produce; we have
\[\begin{array}{llll}M\cdot N\in\mathbb{R}^{n\times n}&M\cdot v\in\mathbb{R}^{n}&w\cdot M\in\left(\mathbb{R}^{n}\right)^{*}&w\cdot v\in\mathbb{R}\\ v\cdot w\in\mathbb{R}^{n\times n}&1(v)\in\mathbb{R}^{n}&v^{\mathbf{T}}\in\left(\mathbb{R}^{n}\right)^{*}&1(w)\in\mathbb{R}\\ M^{\mathbf{T}}\in\mathbb{R}^{n\times n}&w^{\mathbf{T}}\in\mathbb{R}^{n}&s\cdot w\in\left(\mathbb{R}^{n}\right)^{*}&\operatorname{diag}\left(s\right)\in\mathbb{R}\\ \operatorname{diag}\left(v\right)\in\mathbb{R}^{n\times n}&1(M)\in\mathbb{R}^{n}&&s\cdot s\in\mathbb{R}\\ &v\cdot s\in\mathbb{R}^{n}&&1(s)\in\mathbb{R}\end{array}\]
Since this is an exhaustive list of all operations \(\operatorname{ML}\left(\mathcal{L}_{1}\right)\) can produce with these words, we can conclude.
**Theorem 2.1** (\(\operatorname{ML}\left(\mathcal{L}_{1}\right)\) reduced CFG )
_The following CFG denoted r-\(G_{\mathcal{L}_{1}}\) is as expressive as \(1\)-WL._
\[V_{c}\rightarrow\operatorname{diag}\left(V_{c}\right)V_{c}\ \mid\ AV_{c}\ \mid\ 1 \tag{1}\]
Proof.: Proposition 2.1 leads to only four variables. \(M\) for the square matrices, \(V_{c}\) for the column vectors, \(V_{r}\) for the row vectors and \(S\) for the scalars. We define a CFG \(G_{\mathcal{L}_{1}}\) :
\[S \rightarrow(V_{r})(V_{c})\ |\ \operatorname{diag}\left(S\right)\ |\ SS \tag{2}\] \[V_{c} \to MV_{c}\ |\ (V_{r})^{\mathbf{T}}\ |\ V_{c}S\ |\ 1\] \[V_{r} \to V_{r}M\ |\ (V_{c})^{\mathbf{T}}\ |\ SV_{r}\] \[M \to MM\ |\ (M)^{\mathbf{T}}\ |\ \operatorname{diag}\left(V_{c} \right)\ |\ (V_{c})(V_{r})\ |\ A\]
As any sentence produced by \(\operatorname{ML}\left(\mathcal{L}_{1}\right)\) can obviously be derived from \(G_{\mathcal{L}_{1}}\), \(\operatorname{ML}\left(\mathcal{L}_{1}\right)=L(G_{\mathcal{L}_{1}})\). For any scalars \(s,s^{\prime}\), since \(\operatorname{diag}\left(s\right)\) and \(s\cdot s^{\prime}\) produce a scalar, the only way to produce a scalar from another variable is to pass through a vector dot product. This implies that to generate scalars, we only need to be able to generate vectors. We can then reduce \(G_{\mathcal{L}_{1}}\) by removing \(S\) and setting \(V_{c}\) as the starting variable.
\[V_{c} \to MV_{c}\ |\ (V_{r})^{\mathbf{T}}\ |\ 1\] \[V_{r} \to V_{r}M\ |\ (V_{c})^{\mathbf{T}}\] \[M \to MM\ |\ (M)^{\mathbf{T}}\ |\ \operatorname{diag}\left(V_{c} \right)\ |\ (V_{c})(V_{r})\ |\ A\]
To ensure that the start variable is \(V_{c}\), a mandatory subsequent operation will be \(MV_{c}\) for any matrix variable \(M\). As a consequence, by associativity of the matrix multiplication, \(MM\) and \((V_{c})(V_{r})\) can be removed from the rule of \(M\).
\[V_{c} \to MV_{c}\ |\ (V_{r})^{\mathbf{T}}\ |\ 1\] \[V_{r} \to V_{r}M\ |\ (V_{c})^{\mathbf{T}}\] \[M \to(M)^{\mathbf{T}}\ |\ \operatorname{diag}\left(V_{c}\right)\ |\ A\]
Since \(\operatorname{diag}\) produces symmetric matrices and \(A\) is symmetric, \((M)^{\mathbf{T}}\) does not play any role here. As a consequence, we can focus on the column-vector variable and obtain r-\(G_{\mathcal{L}_{1}}\).
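To make the reduced grammar concrete, here is a small Python enumerator (our illustration; all names are ours) that generates the vector words derivable in r-\(G_{\mathcal{L}_{1}}\) with a bounded number of rule applications. The enumeration grows exponentially with depth, so it is a sketch usable only for tiny depths.

```python
import numpy as np

def rgl1_vectors(A, depth):
    """Enumerate (with repetitions) the vector words derivable in r-G_L1,
    V_c -> diag(V_c) V_c | A V_c | 1, using at most `depth` rounds of rules."""
    words = [np.ones(A.shape[0])]            # the terminal word "1"
    for _ in range(depth):
        new = []
        for v in words:
            new.append(A @ v)                # rule: A V_c
            for w in words:
                new.append(v * w)            # rule: diag(V_c) V_c  (= v ⊙ w)
        words += new
    return words

# On a graph, the multiset of entries of these words refines node colourings,
# mirroring the 1-WL colour refinement behind Theorem 2.1.
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
print(len(rgl1_vectors(A, 2)))
```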
### From r-\(G_{\mathcal{L}_{3}}\) to 3-WL GNN
The following propositions are used in the proof of theorem 2.2.
**Proposition 2.2**:
_For any square matrix of size \(n^{2}\), \(\operatorname{ML}\left(\mathcal{L}_{3}\right)\) can only produce square matrices of the same size, row and column vectors of size \(n\) and scalars._
Proof.: Since \(\mathcal{L}_{1}\subset\mathcal{L}_{3}\), we only need to check the words the rule associated to the matrix Hadamard product can produce. Let \(M\in\mathbb{R}^{n\times n}\) and \(N\in\mathbb{R}^{n\times n}\) be words \(\operatorname{ML}\left(\mathcal{L}_{3}\right)\) can produce, we have \(M\odot N\in\mathbb{R}^{n\times n}\). We can conclude.
**Proposition 2.3**:
_For any square matrix \(M\), column vector \(v\) and row vector \(w\), we have_
\[M\odot\left(v\cdot w\right)=\operatorname{diag}\left(v\right)M\operatorname{ diag}\left(w\right)\]
Proof.: Let \(M\) be a square matrix, \(v,w\) be respectively column and row vectors, we have for any \(i,j\),
\[\left(M\odot\left(v\cdot w\right)\right)_{i,j} =M_{i,j}(v\cdot w)_{i,j}\] \[=v_{i}M_{i,j}w_{j}\] \[=\sum_{l}\operatorname{diag}\left(v\right)_{i,l}M_{l,j}w_{j}\] \[=\left(\operatorname{diag}\left(v\right)M\right)_{i,j}w_{j}\] \[=\sum_{l}(\operatorname{diag}\left(v\right)M)_{i,l}\operatorname {diag}\left(w\right)_{l,j}\] \[=\left(\operatorname{diag}\left(v\right)M\operatorname{diag} \left(w\right)\right)_{i,j}\]
We only use the scalar product commutativity here.
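Proposition 2.3 is also easy to check numerically; the following numpy snippet (an illustration we add, with arbitrary random data) verifies the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.normal(size=(n, n))
v = rng.normal(size=(n, 1))   # column vector
w = rng.normal(size=(1, n))   # row vector

lhs = M * (v @ w)                                  # M ⊙ (v·w)
rhs = np.diag(v.ravel()) @ M @ np.diag(w.ravel())  # diag(v) M diag(w)
assert np.allclose(lhs, rhs)
```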
**Theorem 2.2** (\(\operatorname{ML}\left(\mathcal{L}_{3}\right)\) reduced CFG ):
_The following CFG r-\(G_{\mathcal{L}_{3}}\) is as expressive as \(3\)-WL._
\[V_{c} \to MV_{c}\ |\ 1 \tag{3}\] \[M \to(M\odot M)\ |\ MM\ |\ \operatorname{diag}\left(V_{c}\right)\ |\ A\]
Proof.: \(\mathcal{L}_{3}=\{\cdot,\ ^{\mathbf{T}},1,\operatorname{diag},\odot\}\). Proposition 2.2 leads to the use of a CFG \(G_{\mathcal{L}_{3}}\) with the same variables as \(G_{\mathcal{L}_{1}}\). We define \(G_{\mathcal{L}_{3}}\) as:
\[S \to(V_{r})(V_{c})\ |\ \operatorname{diag}\left(S\right)\ |\ SS\ |\ (S \odot S)\] \[V_{c} \to(V_{c}\odot V_{c})\ |\ MV_{c}\ |\ (V_{r})^{\mathbf{T}}\ |\ V_{c}S\ |\ 1\] \[V_{r} \to(V_{r}\odot V_{r})\ |\ V_{r}M\ |\ (V_{c})^{\mathbf{T}}\ |\ SV_{r}\] \[M \to(M\odot M)\ |\ MM\ |\ (M)^{\mathbf{T}}\ |\ \operatorname{diag} \left(V_{c}\right)\ |\ (V_{c})(V_{r})\ |\ A\]
As seen in the proof of Theorem 2.1, the scalar variable and its rules can be removed. Since \(\operatorname{diag}\left(v\right)w=v\odot w\) for any vectors \(v,w\), the vector Hadamard product can be removed from the vector rules. Proposition 2.3 allows removing \(V_{c}V_{r}\) from the rules of \(M\), since the results of the subsequent mandatory operations \(MM\) or \(MV_{c}\) can be obtained with other combinations.
\[V_{c} \to MV_{c}\ |\ (V_{r})^{\mathbf{T}}\ |\ 1\] \[V_{r} \to V_{r}M\ |\ (V_{c})^{\mathbf{T}}\] \[M \to(M\odot M)\ |\ MM\ |\ (M)^{\mathbf{T}}\ |\ \operatorname{diag} \left(V_{c}\right)\ |\ A\]
Since the remaining matrix rules preserve symmetry, \((M)^{\mathbf{T}}\), the variable \(V_{r}\) and its rules can be removed. \(G_{\mathcal{L}_{3}}\) can be reduced into r-\(G_{\mathcal{L}_{3}}\).
**From variables to layer input/output.** In order to design G\({}^{2}\)N\({}^{2}\) from r-\(G_{\mathcal{L}_{3}}\), the variables \(M\) and \(V_{c}\) must appear in our architecture: \(V_{c}\) stands for node embeddings, \(M\) for edge embeddings. Each layer \(l\) takes as inputs a matrix \(H^{(l)}\) stacking multiple node vectors on the second dimension and a third-order tensor \(\mathcal{C}^{(l)}\) stacking square matrices on the third dimension. \(H^{(l)}\) and \(\mathcal{C}^{(l)}\) appear in red in figure 2 depicting a layer of G\({}^{2}\)N\({}^{2}\).
**From rules to G\({}^{2}\)N\({}^{2}\) layer update functions.** A given layer should implement the \(M\) rules and \(V_{c}\) rules of r-\(G_{\mathcal{L}_{3}}\) on different matrix and vector arguments. As the number of layers is finite, in order to maintain expressive power, multiple instances of the rules \((M\odot M)\), \((MM)\), \(\operatorname{diag}{(V_{c})}\) should be computed. To provide arguments to each instance of the rules, in parameterised quantities \(b_{\odot}\), \(b_{\otimes}\), \(b_{\operatorname{diag}}\), linear combinations \(L_{i}\) of slices of \(\mathcal{C}^{(l)}\) and slices of \(H^{(l)}\) are learned, as shown in figure 2 before each matrix rule.
The output tensor \(\mathcal{C}^{(l+1)}\), with a selected third-dimension size, is produced by a MultiLayer Perceptron from the concatenation of all the rule result matrices. The output \(H^{(l+1)}\) is provided by the node aggregation of \(H^{(l)}\) with the matrix slices of \(\mathcal{C}^{(l+1)}\) implementing \(MV_{c}\), depicted as agg in figure 2.
Formally, the update equations are:
\[\mathcal{C}^{(l+1)}=mlp\left(\mathcal{C}^{(l)}\,\middle|\,L_{1}(\mathcal{C}^{(l)})\odot L_{2}(\mathcal{C}^{(l)})\,\middle|\,L_{3}(\mathcal{C}^{(l)})\cdot L_{4}(\mathcal{C}^{(l)})\,\middle|\,\operatorname{diag}(L_{5}(H^{(l)}))\right), \tag{4}\]
\[H^{(l+1)}=\sum_{i=1}^{S^{(l+1)}}\mathcal{C}^{(l+1)}_{i}H^{(l)}W^{(l,i)}. \tag{5}\]
Where all \(L_{i}\) are linear blocks acting on the third dimension of the tensor \(\mathcal{C}^{(l)}\) or the second dimension of \(H^{(l)}\). In particular \(L_{1,2}:\mathbb{R}^{S^{(l)}}\rightarrow\mathbb{R}^{b^{(l)}_{\odot}}\), \(L_{3,4}:\mathbb{R}^{S^{(l)}}\rightarrow\mathbb{R}^{b^{(l)}_{\otimes}}\), \(L_{5}:\mathbb{R}^{f^{(l)}}\rightarrow\mathbb{R}^{b^{(l)}_{\operatorname{diag}}}\) and \(mlp:\mathbb{R}^{S^{(l)}+b^{(l)}_{\odot}+b^{(l)}_{\otimes}+b^{(l)}_{\operatorname{diag}}}\rightarrow\mathbb{R}^{S^{(l+1)}}\), and \(W^{(l,i)}\in\mathbb{R}^{f^{(l)}_{n}\times f^{(l+1)}_{n}}\) acts on the second dimension of the feature matrix.
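To fix ideas, here is a minimal numpy sketch of equations (4) and (5) for a single graph. It assumes a one-layer ReLU network in place of the mlp and equal branch widths \(b_{\odot}=b_{\otimes}=b_{\operatorname{diag}}\); all parameter names and shapes are placeholders of ours, not the authors' implementation.

```python
import numpy as np

def g2n2_layer(C, H, params):
    """One G^2N^2-style layer following equations (4)-(5): a minimal sketch.
    C: (n, n, S) edge tensor; H: (n, f) node features."""
    n = C.shape[0]
    lin = lambda W, T: np.tensordot(T, W, axes=([-1], [0]))  # map on last axis

    had  = lin(params["L1"], C) * lin(params["L2"], C)             # (M ⊙ M)
    mat  = np.einsum('ikb,kjb->ijb', lin(params["L3"], C),
                                     lin(params["L4"], C))         # (M M)
    diag = np.einsum('ib,ij->ijb', lin(params["L5"], H), np.eye(n))  # diag(V_c)

    feats = np.concatenate([C, had, mat, diag], axis=-1)
    C_new = np.maximum(lin(params["mlp"], feats), 0.0)   # 1-layer ReLU "MLP"

    # Node update (eq. 5): aggregate H through every slice of C_new.
    H_new = sum(C_new[:, :, i] @ H @ params["W"][i]
                for i in range(C_new.shape[-1]))
    return C_new, H_new

# Hypothetical toy shapes (all names are placeholders):
n, S, f, b, S_new, f_new = 4, 3, 2, 2, 3, 2
rng = np.random.default_rng(0)
params = {"L1": rng.normal(size=(S, b)), "L2": rng.normal(size=(S, b)),
          "L3": rng.normal(size=(S, b)), "L4": rng.normal(size=(S, b)),
          "L5": rng.normal(size=(f, b)),
          "mlp": rng.normal(size=(S + 3 * b, S_new)),
          "W": rng.normal(size=(S_new, f, f_new))}
C0, H0 = rng.normal(size=(n, n, S)), rng.normal(size=(n, f))
C1, H1 = g2n2_layer(C0, H0, params)
```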
**From r-\(G_{\mathcal{L}_{3}}\) to G\({}^{2}\)N\({}^{2}\).** The inputs of the architecture illustrated in figure 3 are \(H^{(0)}\) and \(\mathcal{C}^{(0)}\). \(H^{(0)}\), of size \(n\times f_{n}\), is the node feature matrix. It corresponds to the terminal symbol 1. \(\mathcal{C}^{(0)}\) is a stacking on the third dimension of the identity matrix \(\mathrm{I}=\operatorname{diag}{(1)}\), the adjacency matrix \(A\) and the extended adjacency tensor \(E\) of size \(n\times n\times f_{e}\), where \(f_{e}\) is the number of edge features. \(\mathcal{C}^{(0)}\in\mathbb{R}^{n\times n\times(f_{e}+2)}\) corresponds to the terminal symbol \(A\).
Figure 2: **Model of a G\({}^{2}\)N\({}^{2}\) layer.** From left to right, the \(S^{(l)}\) slices of the tensor input \(\mathcal{C}^{(l)}\) are linearly combined into \(2b_{\odot}+2b_{\otimes}\) different matrices. Products of pairs of these matrices are then computed, reflecting the rules \((M\odot M)\) and \((MM)\) of r-\(G_{\mathcal{L}_{3}}\). Similarly, slices of \(H^{(l)}\) are linearly combined into \(b_{\operatorname{diag}}\) vectors transformed into diagonal matrices, reflecting the rule \(\operatorname{diag}(V_{c})\). The concatenation of all these matrices and the slices of the input tensor are fed into a MultiLayer Perceptron to produce the \(S^{(l+1)}\) slices of the output tensor \(\mathcal{C}^{(l+1)}\). Finally, the slices of \(\mathcal{C}^{(l+1)}\) are aggregated with the node embeddings \(H^{(l)}\) to compute \(H^{(l+1)}\), reflecting the rule \(MV_{c}\).
After the last layer, an invariant or equivariant function is applied to \(H^{(l_{\text{\tiny{end}}})}\) and to the diagonal and non-diagonal parts of \(\mathcal{C}^{(l_{\text{\tiny{end}}})}\).
**Expressive power of G\({}^{2}\)N\({}^{2}\).** The following theorem shows that our design strategy ensures that G\({}^{2}\)N\({}^{2}\) is 3-WL.
**Theorem 2.3**: \(\text{G}^{2}\text{N}^{2}\) _is exactly as powerful as \(3\)-WL test._
Proof.: To prove that \(\text{G}^{2}\text{N}^{2}\) is exactly as powerful as \(3\)-WL, we show that \(\text{G}^{2}\text{N}^{2}\) at layer \(l\) can produce any matrices and vectors \(\text{r-}G_{\mathcal{L}_{3}}\) can produce, after \(l\) iterations.
It is true for \(l=1\). Indeed, at the first iteration of r-\(G_{\mathcal{L}_{3}}\), we obtain the matrices \(\text{I}\), \(A\), \(A^{2}\) and the vectors \(1\) and \(A1\). Since any of the \(L_{i}(\mathcal{C}^{(0)})\) for \(i\in\llbracket 1\,,6\rrbracket\) is a linear combination of \(A\) and \(\text{I}\), G\({}^{2}\)N\({}^{2}\) can produce those vectors and matrices in one layer.
Suppose that there exists \(l>0\) such that G\({}^{2}\)N\({}^{2}\) can produce any of the matrices and vectors r-\(G_{\mathcal{L}_{3}}\) can after \(l\) iterations. We denote by \(\mathcal{A}_{l}\) the set of those matrices and by \(\mathcal{V}_{l}\) the set of those vectors. At the \(l+1\)-th iteration, we have \(\mathcal{A}_{l+1}=\{M\odot N,MN,\text{diag}\left(V_{c}\right)|M,N\in\mathcal{A}_{l},V_{c}\in\mathcal{V}_{l}\}\) and \(\mathcal{V}_{l+1}=\{MV_{c}|M\in\mathcal{A}_{l},V_{c}\in\mathcal{V}_{l}\}\). Let \(M,N\in\mathcal{A}_{l}\) and \(V_{c}\in\mathcal{V}_{l}\); then by hypothesis G\({}^{2}\)N\({}^{2}\) can produce \(M,N\) at layer \(l\). Since \(L\) produces at least two different linear combinations of matrices or vectors in respectively \(\mathcal{A}_{l}\) and \(\mathcal{V}_{l}\), \(MN\), \(M\odot N\), \(MV_{c}\) and \(\text{diag}\left(V_{c}\right)\) are reachable at layer \(l+1\). Thus \(\mathcal{A}_{l+1}\) is included in the set of matrices G\({}^{2}\)N\({}^{2}\) can produce at layer \(l+1\), and \(\mathcal{V}_{l+1}\) is included in the set of vectors G\({}^{2}\)N\({}^{2}\) can produce at layer \(l+1\).
**From r-\(G_{\mathcal{L}_{3}}\) to MPNNs and PPGN.** We have already shown that most MPNNs can be written with operations in \(\text{r-}G_{\mathcal{L}_{1}}\); since \(\mathcal{L}_{1}\subset\mathcal{L}_{3}\), the same holds for \(\text{r-}G_{\mathcal{L}_{3}}\). PPGN can also be written with \(\text{r-}G_{\mathcal{L}_{3}}\). Indeed, at each layer PPGN applies the matrix multiplication on matrices matched on the third dimension, an operation included in \(\text{r-}G_{\mathcal{L}_{3}}\). The node features are stacked on the third dimension as diagonal matrices; the diag operation is also included in \(\text{r-}G_{\mathcal{L}_{3}}\). The readout of PPGN's last update takes two inputs, the diagonal matrices and the non-diagonal matrices, which are computed with the Hadamard product between these matrices and the identity. The Hadamard product is included in \(\text{r-}G_{\mathcal{L}_{3}}\). As all operations in PPGN are included, \(\text{r-}G_{\mathcal{L}_{3}}\) generalises PPGN. Actually, the following CFG describes PPGN:
\[M\to MM\mid\text{diag}\left(1\right)\;\mid M\odot\text{I}\mid M\odot J\mid A \tag{6}\]
Because of the matrix product, G\({}^{2}\)N\({}^{2}\) has a time complexity of \(O\left(n^{3}\right)\) and a memory complexity of \(O\left(n^{2}\right)\), and thus it has the same complexity as PPGN.
The WL hierarchy is one way to categorize GNNs, but the capacity of a GNN to count substructures and its ability to approximate certain types of filters in the spectral domain can also be used to categorize GNNs. As G\({}^{2}\)N\({}^{2}\) is directly built from \(\text{r-}G_{\mathcal{L}_{3}}\), it inherits its expressive power. That is why the following section focuses on the capacity of \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) to count substructures and on its spectral response.
Figure 3: **Model of the G\({}^{2}\)N\({}^{2}\) architecture from the graph to the output**. One can see that each layer updates the node and edge embeddings and that the readout function acts separately on the diagonal and the non-diagonal parts of \(\mathcal{C}^{(k)}\) and on \(H^{(k)}\).
## 3 Substructure counting and spectral response of \(\mathbf{G}^{2}\mathbf{N}^{2}\)
### Counting substructures
As defined by [8, 9], the count of a substructure \(S\) in a graph \(\mathcal{G}\), denoted by \(C_{S}(\mathcal{G})\), is the total number of non-equivalent substructures \(S\) occurring as subgraphs of \(\mathcal{G}\). In the same way, we denote by \(C_{S}(\mathcal{G},i)\) (resp. \(C_{S}(\mathcal{G},i,j)\)) the count of substructure involving the node \(i\) (resp. the edge \((i,j)\)) in \(\mathcal{G}\). For symmetric substructures like cycles or cliques, there is no ambiguity about the position of a node or of an edge. But for non-symmetric ones, the position in the substructure is important, thus we define ad-hoc substructure counting. The adjacency matrix \(A\in\{0,1\}^{n\times n}\) represents the connectivity of \(\mathcal{G}\), \(\mathrm{I}\in\{0,1\}^{n\times n}\) is the identity matrix and \(J\in\{0,1\}^{n\times n}\) denotes a matrix filled with ones except for the diagonal which contains zeros. Figure 4 shows substructure counting at edge and node level for non-symmetric substructures.
We denote by \(e_{S}^{i}\) the number of edges involving node \(i\) in \(S\) and \(n_{S}\) the number of nodes in \(S\). We have the following relations:
\[C_{S}(\mathcal{G},i) =\frac{1}{e_{S}^{i}}\sum_{j}C_{S}(\mathcal{G},i,j)\quad,\forall\ i \in\llbracket 1\,,n\rrbracket, \tag{7}\] \[C_{S}(\mathcal{G}) =\frac{1}{n_{S}}\sum_{i}C_{S}(\mathcal{G},i). \tag{8}\]
Hence, edge-level counting is more expressive than node-level counting, itself more expressive than graph-level counting.
### \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) counting power
The graph-level counting power of 1-WL and 3-WL is partially covered in [14]: 3-WL can count paths up to length 6 and cycles up to length 7 and cannot count 4-cliques. [15, 16] proposed graph-level counting formulas for cycles up to length 7. In this subsection, algebraic formulas (sentences of \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) applied to \(A\)) that count substructures at edge level are proved. To the best of our knowledge, such formulas for edge-level substructure counting have not been proposed in the literature. The following lemma is needed for the proof of theorem 3.2.
**Lemma 3.1**: _Let \(A\) be the adjacency matrix; then \(A^{k}\odot J\) computes the number of non-closed walks of size \(k\) between two nodes._
Proof.: \((A^{k})_{i,j}\) computes the number of walks of size \(k\) from \(i\) to \(j\); since the diagonal of \(J\) is filled with zeros, \((A^{k}\odot J)_{i,j}\) therefore computes the number of non-closed walks of size \(k\) from \(i\) to \(j\).
**Theorem 3.2** (Path counting at edge-level): _For \(2\leqslant l\leqslant 5\), there exist matrices \(X_{l}\) in \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) where \((X_{l})_{i,j}\) gives the number of \(l\)-paths between nodes \(i\) and \(j\)._

Figure 4: Edge and node counting in non-symmetric substructures, here a 2-chain and a chordal cycle. One can see in the 2-chain, from equation (8), that in one case \(n_{S}=1\) and in the other \(n_{S}=2\); and in the chordal cycle, from equation (7), that in one case \(e_{S}^{i}=1\) and in the other \(e_{S}^{i}=2\).
\[X_{2}=A^{2}\odot J \tag{9}\]
\[X_{3}=A^{3}\odot J-A(A^{2}\odot\mathrm{I})-(A^{2}\odot\mathrm{I})A+A \tag{10}\]
\[\begin{split}X_{4}=&\;A^{4}\odot J-(A(A^{2}\odot\mathrm{I}-2\mathrm{I})A)\odot J\\ &-(A^{2}\odot\mathrm{I})X_{2}-X_{2}(A^{2}\odot\mathrm{I})\\ &-A(A^{3}\odot\mathrm{I})-(A^{3}\odot\mathrm{I})A+3A^{2}\odot A\end{split} \tag{11}\]
\[\begin{split}X_{5}=&\;A^{5}\odot J-A(A^{2}\odot\mathrm{I})(A^{2}\odot\mathrm{I}-\mathrm{I})\\ &-(A^{2}\odot\mathrm{I})A(A^{2}\odot\mathrm{I})-(A^{2}\odot\mathrm{I}-\mathrm{I})(A^{2}\odot\mathrm{I})A\\ &-(A(A^{2}\odot\mathrm{I}-\mathrm{I})X_{2})\odot J\\ &-(X_{2}(A^{2}\odot\mathrm{I}-\mathrm{I})A)\odot J\\ &-(A^{2}\odot\mathrm{I})X_{3}-X_{3}(A^{2}\odot\mathrm{I}-\mathrm{I})\\ &-(A^{3}\odot\mathrm{I})X_{2}-X_{2}(A^{3}\odot\mathrm{I})\\ &-(A(A^{3}\odot\mathrm{I})A)\odot J-A\odot A^{2}\\ &+3(A(A\odot A^{2})+(A\odot A^{2})A)\odot J\\ &-A\mathrm{diag}\left((A\odot X_{3})1\right)-\mathrm{diag}\left((A\odot X_{3})1\right)A\\ &+3A\odot X_{3}+3A\odot A^{2}\odot(A^{2}-(A^{2}>0))\end{split} \tag{12}\]
The idea of the proof is to compute the number of non-closed walks of size \(l\) with \(A^{l}\odot J\) and then to enumerate the non-closed walks of size \(l\) that are not \(l\)-paths (non-closed walk with exactly \(l\) edges).
Proof.: In the following, for \(2\leqslant k\leqslant 5\), we will enumerate the different non-closed walks of size \(k\) that are not a path of size \(k\).
* For \(k=2\), the only non-closed walk of size \(2\) is the path of size \(2\). Thus lemma 3.1 leads to the conclusion that \(A^{2}\odot J\) computes the number of \(2\)-paths between two nodes. \[X_{2}=A^{2}\odot J\]
* For \(k=3\), two non-closed walks of size \(3\) are not a \(3\)-path: one can do a walk of size \(1\) and a closed walk of size \(2\), or the opposite. The first case is provided by \(A(A^{2}\odot\mathrm{I})\) and the second case by its transpose \((A^{2}\odot\mathrm{I})A\). Crossing an edge three times belongs to both of these cases, thus subtracting \(A\) from the sum of these matrices provides the exact number of such walks linking two nodes. Figure 5 summarises this for a better understanding. \[X_{3}=A^{3}\odot J-A(A^{2}\odot\mathrm{I})-(A^{2}\odot\mathrm{I})A+A\]
Figure 5: Enumeration of non-closed walks of size \(3\). Values in nodes represent the number of each type of non-closed walk linking the red node to the others.
* For \(k=4\), there are five non-closed walks of size \(4\) that are not a \(4\)-path. One can do a walk of size \(1\), a closed walk of size \(2\), and then a walk of size \(1\). Directly from what we did for \(k=3\), \(A(A^{2}\odot I)A\) computes the number of such walks between two nodes, we just have to do the Hadamard product with \(J\) to avoid the closed walks. Doing a closed walk of size \(2\) and then a \(2\)-path or the opposite is another way to do such walks. By analogy with the proof for \(k=3\), we can compute this with \((A^{2}\odot I)(A^{2}\odot J)+(A^{2}\odot J)(A^{2}\odot I)-A^{2}\odot J\). Since such walks occur in the first case, we have to subtract another time \(A^{2}\odot J\). Finally, doing a walk of size \(1\) followed by a closed walk of size \(3\) or the opposite are the two last ways to obtain non-closed walks of size \(4\). Still, in analogy with \(k=3\), the number of such walks is computed by \(A(A^{3}\odot I)+(A^{3}\odot I)A-A\odot A^{2}\). Since there is an overlap of such walks and the ones in the previous paragraph between two nodes in the same triangle, \(2A\odot A^{2}\) has to be removed. Since \(A^{4}\odot J\) computes the number of non-closed walks between two nodes, after factorisation, we obtain the formula of the theorem. \[X_{4} =A^{4}\odot J-(A(A^{2}\odot I-2I)A)\odot J\] \[\quad-(A^{2}\odot I)X_{2}-X_{2}(A^{2}\odot I)\] \[\quad-A(A^{3}\odot I)-(A^{3}\odot I)A+3A^{2}\odot A\]
* For \(k=5\), there are many more non-closed walks of size \(5\) but the idea remains the same: count such walks and then subtract the ones that occur in two or more of such walks. First, we can decompose a non-closed walk of size \(5\) with two closed walks of size \(2\) and a walk of size \(1\), this number is computed by \[A(A^{2}\odot I)(A^{2}\odot I-I)+(A^{2}\odot I)A(A^{2}\odot I)\] \[\quad+(A^{2}\odot I-I)(A^{2}\odot I)A.\] We subtract the identity in \(A^{2}\odot I-I\) to remove the overlap walks in those three terms which are computed by \(A(A^{2}\odot I)+(A^{2}\odot I)A\). Second, we can decompose the non-closed walks of size \(5\) with a walk of size \(1\), a closed walk of size \(2\), and finally a \(2\)-path in this order or another. Such compositions of walks are computed by \[(A(A^{2}\odot I-I)X_{2}+X_{2}(A^{2}\odot I-I)A)\odot J.\] The Hadamard product by \(J\) prevents the count of closed walks of such a composition. Third, we can still do a closed walk of size \(2\) and then a \(3\)-path in any order. This is computed by \((A^{2}\odot I)X_{3}+X_{3}(A^{2}\odot I-I)\). Here we remove \(X_{3}\) since it is counted twice. Fourth, the next decomposition is a closed walk of size \(3\) and a \(2\)-path in each order or a walk of size \(1\), followed by a closed walk of size \(3\) and a walk of size \(1\). This is computed by \((A^{3}\odot I)X_{2}+X_{2}(A^{3}\odot I)+A(A^{3}\odot I)A\), to which we subtract \(3(AC_{3}+C_{3}A)\odot J-C_{3}\) to avoid counting such walks in a triangle with \(C_{3}=A\odot X_{2}\). Fifth, the last decomposition is a walk of size \(1\) and crossing a \(4\)-cycle in each order. It is computed by \(A\mathrm{diag}\,(C_{4}1)+\mathrm{diag}\,(C_{4}1)\,A-3C_{4}\) with \(C_{4}=A\odot X_{3}\). And eventually, if a chordal cycle occurs, we remove \(3A\odot A^{2}\odot(A^{2}-(A^{2}>0))\) to avoid counting walks
already counted.
\[\begin{split}X_{5}=&\;A^{5}\odot J-A(A^{2}\odot\mathrm{I})(A^{2}\odot\mathrm{I}-\mathrm{I})\\ &-(A^{2}\odot\mathrm{I})A(A^{2}\odot\mathrm{I})-(A^{2}\odot\mathrm{I}-\mathrm{I})(A^{2}\odot\mathrm{I})A\\ &-(A(A^{2}\odot\mathrm{I}-\mathrm{I})X_{2})\odot J\\ &-(X_{2}(A^{2}\odot\mathrm{I}-\mathrm{I})A)\odot J\\ &-(A^{2}\odot\mathrm{I})X_{3}-X_{3}(A^{2}\odot\mathrm{I}-\mathrm{I})\\ &-(A^{3}\odot\mathrm{I})X_{2}-X_{2}(A^{3}\odot\mathrm{I})\\ &-(A(A^{3}\odot\mathrm{I})A)\odot J-A\odot A^{2}\\ &+3(A(A\odot A^{2})+(A\odot A^{2})A)\odot J\\ &-A\mathrm{diag}\left((A\odot X_{3})1\right)-\mathrm{diag}\left((A\odot X_{3})1\right)A\\ &+3A\odot X_{3}+3A\odot A^{2}\odot(A^{2}-(A^{2}>0))\end{split}\]
This concludes the proof.
If there exists an \(l\)-path between a node \(i\) and one of its neighbors \(j\), then the edge \((i,j)\) is part of a \((l+1)\)-cycle (a closed walk with exactly \(l+1\) edges). As a consequence, the following proposition is proved. Figure 6 shows the link between path counting and cycle counting at edge level.
**Proposition 3.1** (cycle counting at edge level):
_For \(3\leqslant l\leqslant 6\), using \(X_{l-1}\) from theorem 3.2, the following formula computes a matrix \(C_{l}\) where \((C_{l})_{i,j}\) gives the number of \(l\)-cycles the edge \((i,j)\) is within._
\[C_{l}=A\odot X_{l-1} \tag{13}\]
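The edge-level formulas are directly executable. The following numpy sketch (ours, on a random graph) computes \(X_{2}\), \(X_{3}\) and the cycle counts \(C_{3}\), \(C_{4}\) of equation (13), and cross-checks the triangle count against brute-force enumeration; dividing the total by 6 (three edges per triangle, two orientations each) recovers the graph-level count, consistent with theorem 3.8.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 8
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                 # random simple undirected graph
I = np.eye(n); J = np.ones((n, n)) - I

X2 = (A @ A) * J                               # eq. (9): 2-paths at edge level
X3 = (A @ A @ A) * J - A @ (A @ A * I) - (A @ A * I) @ A + A   # eq. (10)
C3, C4 = A * X2, A * X3                        # eq. (13): 3- and 4-cycles per edge

# Brute-force triangle count at graph level for comparison.
triangles = sum(A[i, j] * A[j, k] * A[i, k]
                for i, j, k in combinations(range(n), 3))
assert np.isclose(C3.sum() / 6, triangles)
```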
Another substructure that can be counted by \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) at edge level is the chordal cycle.
**Theorem 3.3** (chordal cycle counting at edge level):
_The following matrix computes, for each edge, the number of chordal cycles in which it is the edge shared by the two triangles_
\[\frac{1}{2}A\odot A^{2}\odot(A^{2}-(A^{2}>0)) \tag{14}\]
Proof.: The idea of the proof is that the quantity
\[\frac{1}{2}(A^{2}\odot(A^{2}-(A^{2}>0)))_{i,j}=\binom{(A^{2})_{i,j}}{2}\]
computes the number of ways to select two different 2-paths linking nodes \(i\) and \(j\). In fact, off the diagonal, it computes the number of squares nodes \(i\) and \(j\) share without being adjacent in the square subgraph. The Hadamard product with \(A\) then acts as an indicator, since it keeps the values computed before only if \((i,j)\in\mathcal{E}\).

Figure 6: The Hadamard product allows computing the 4-cycle count at edge level from the adjacency matrix and \(X_{3}\).
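Equation (14) can likewise be checked numerically; the sketch below (our illustration) compares it to a brute-force count of pairs of triangles glued along each edge.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n = 8
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                 # random simple undirected graph

A2 = A @ A
chordal_edge = 0.5 * A * A2 * (A2 - (A2 > 0))  # eq. (14), entrywise

# Brute force: an edge (i, j) is the shared edge of binom(c, 2) chordal
# cycles, where c = |common neighbours of i and j| = (A^2)_{ij}.
for i, j in combinations(range(n), 2):
    if A[i, j]:
        c = int(A2[i, j])
        assert chordal_edge[i, j] == c * (c - 1) / 2
```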
From equations (7) and (8), one can easily deduce formulas to count at both node and graph levels.
**Theorem 3.4** (path counting at node level): _For \(2\leqslant l\leqslant 5\), using \(X_{l}\) from theorem 3.2 the following vector word computes the number of \(l\)-paths starting from a node._
\[X_{l}1 \tag{15}\]
Proof.: One can see that this formula follows directly from equation (7). Indeed, in this case \(e_{S}^{i}=1\), because the tip of a path is involved in only one edge of the path.
It is almost the same for cycle counting at node level, except that in this case \(e_{S}^{i}=2\) for every node.
**Theorem 3.5** (cycle counting at node level): _For \(3\leqslant l\leqslant 6\), using \(C_{l}\) from proposition 3.1, the following vectorial word computes the number of \(l\)-cycles a node is within._
\[\frac{1}{2}C_{l}1 \tag{16}\]
As \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) can count chordal cycles at edge level, it can also count them at node level, and because each involved node belongs to only one counted edge of this substructure, a formula of the same form follows.
**Theorem 3.6** (Chordal cycle at node level): _The following vectorial word computes, for each node, the number of pairs of triangles sharing an edge that involves this node_
\[\frac{1}{2}(A\odot A^{2}\odot(A^{2}-(A^{2}>0)))1 \tag{17}\]
To get the graph-level counting sentences for all the previous substructures, it is necessary to transcribe equation (8) into \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\). The transcription comes naturally.
\[C_{S}(\mathcal{G})=\frac{1}{n_{S}}1^{\mathbf{T}}\begin{pmatrix}C_{S}(\mathcal{ G},1)\\ \vdots\\ C_{S}(\mathcal{G},n)\end{pmatrix} \tag{18}\]
As a consequence, we have the following sentences for counting substructures at the graph level.
**Theorem 3.7** (path counting at graph level): _For \(2\leqslant l\leqslant 5\), using \(X_{l}\) from theorem 3.2 the following sentence computes the number of \(l\)-paths in a graph._
\[\frac{1}{2}1^{\mathbf{T}}X_{l}1 \tag{19}\]
Since there are two tips in the path, \(n_{S}=2\).
**Theorem 3.8** (cycle counting at graph level): _For \(3\leqslant l\leqslant 6\), using \(C_{l}\) from proposition 3.1, the following sentence computes the number of \(l\)-cycles in a graph._
\[\frac{1}{2l}1^{\mathbf{T}}C_{l}1 \tag{20}\]
It is clear that there are \(l\) nodes in an \(l\)-cycle and so \(n_{S}=l\).
**Theorem 3.9** (Chordal cycle at graph level): _The following sentence computes the number of chordal cycles in a graph_
\[\frac{1}{4}1^{\mathbf{T}}(A\odot A^{2}\odot(A^{2}-(A^{2}>0)))1 \tag{21}\]
### Spectral response of \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\)
The graph Laplacian is the matrix \(L=D-A\) (or \(L=\mathrm{I}-D^{-\frac{1}{2}}AD^{-\frac{1}{2}}\) for the normalised Laplacian) where \(D\) is the diagonal degree matrix. Since \(L\) is positive semidefinite, its eigendecomposition is \(L=U\mathrm{diag}\left(\lambda\right)U^{\mathbf{T}}\) with \(U\in\mathbb{R}^{n\times n}\) orthogonal and \(\lambda\in\mathbb{R}_{+}^{n}\). By analogy with the convolution theorem, one can define graph filtering in the frequency domain by \(\tilde{x}=U\mathrm{diag}\left(\Omega(\lambda)\right)U^{\mathbf{T}}x\) where \(\Omega\) is the filter applied in the spectral domain.
**Lemma 3.10**: _Given \(A\) the adjacency matrix of a graph, \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) can compute the graph Laplacian \(L\) and the normalised Laplacian \(L_{n}\) of this graph._
Proof.: \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) can produce \(A^{2}\odot\mathrm{I}\) which is equal to \(D\). Thus it can compute \(L=D-A\). For the normalised Laplacian, since the point-wise application of a function does not improve the expressive power of \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\)[13], \(D^{-\frac{1}{2}}\) is reachable by \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\). Thus, the normalised Laplacian \(D^{-\frac{1}{2}}LD^{-\frac{1}{2}}\) can be computed.
As in [11], we define the spectral response \(\phi\in\mathbb{R}^{n}\) of \(C\in\mathbb{R}^{n\times n}\) as \(\phi(\lambda)=\mathrm{diagonal}(U^{\mathbf{T}}CU)\) where diagonal extracts the diagonal of a given square matrix. Using spectral response, [11] shows that most existing MPNNs act as low-pass filters while high-pass and band-pass filters are experimentally proved to be necessary to increase model expressive power.
**Theorem 3.11**: _For any continuous filter \(\Omega\) in the spectral domain of the normalised Laplacian, there exists a matrix in \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) whose spectral response approximates \(\Omega\)._
Proof.: The spectrum of the normalised Laplacian is included in \(\left[0,2\right]\), which is compact. Thanks to the Stone-Weierstrass theorem, any continuous function on it can be approximated by a polynomial function. We just have to ensure the existence of matrices in \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) whose spectral response is a polynomial function.
For \(k\in\mathbb{N}\), the spectral response of \(L^{k}\) is \(\lambda^{k}\) since we have
\[U^{\mathbf{T}}L^{k}U =U^{\mathbf{T}}(U\mathrm{diag}\left(\lambda\right)U^{\mathbf{T}} )^{k}U\] \[=U^{\mathbf{T}}U\mathrm{diag}\left(\lambda\right)^{k}U^{\mathbf{ T}}U=\mathrm{diag}\left(\lambda\right)^{k}\]
From Lemma 3.10, \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) can compute \(L\), and thus it can compute \(L^{k}\) for any \(k\in\mathbb{N}\). Since \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) can produce all the matrices with a monomial spectral response, and since the function that maps a given matrix to its spectral response is linear, \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\) can produce any matrix with a polynomial spectral response.
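The construction used in this proof — polynomials in the normalised Laplacian share the Laplacian's eigenvectors, so their spectral response is the corresponding polynomial in \(\lambda\) — can be verified with a short numpy sketch (ours; the polynomial coefficients are arbitrary, and the degree normalisation guards against isolated nodes as an assumption of the sketch).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T
d = A.sum(1)
D_isqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1)))  # guard isolated nodes
L = np.eye(n) - D_isqrt @ A @ D_isqrt               # normalised Laplacian

lam, U = np.linalg.eigh(L)
coeffs = [0.0, 2.0, -1.0]                           # arbitrary polynomial in L
C = sum(c * np.linalg.matrix_power(L, k) for k, c in enumerate(coeffs))

phi = np.diagonal(U.T @ C @ U)                      # spectral response of C
target = sum(c * lam ** k for k, c in enumerate(coeffs))
assert np.allclose(phi, target)
```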
As said before, \(\mathrm{G}^{2}\mathrm{N}^{2}\) inherits all the theorems presented in this section related to \(\mathrm{ML}\left(\mathcal{L}_{3}\right)\). The next section aims to answer 4 main questions through experimental evaluation of \(\mathrm{G}^{2}\mathrm{N}^{2}\):
**Q1**: Is \(\mathrm{G}^{2}\mathrm{N}^{2}\) able to distinguish 1-\(\mathrm{WL}\) or 3-\(\mathrm{WL}\) equivalent pairs of non-isomorphic simple graphs?
**Q2**: Can \(\mathrm{G}^{2}\mathrm{N}^{2}\) learn to count cycles of length 3, 4, 5, 6 and chordal cycles at edge level?
**Q3**: Can \(\mathrm{G}^{2}\mathrm{N}^{2}\) learn low-pass, high-pass and band-pass filters in the spectral domain?
**Q4**: Can \(\mathrm{G}^{2}\mathrm{N}^{2}\) generalize on downstream tasks of graph classification and regression?
## 4 Experiments
For reproducibility, we fixed \(S^{(l)}=b_{\otimes}^{(l)}=b_{\odot}^{(l)}=b_{\mathrm{diag}}^{(l)}\). At each layer, the MLP depth is always 1. The search for hyperparameters is conducted on a grid of learning rates \(\{10^{-4},5.10^{-4},10^{-3}\}\), and of learning rate decays \(\{.8,.85,.9\}\) with a patience of 5.
Q1: Is G\({}^{2}\)N\({}^{2}\) able to distinguish \(1\)-WL or \(3\)-WL equivalent pairs of non-isomorphic simple graphs?
The EXP dataset contains 600 pairs of non-isomorphic 1-WL equivalent graphs [17], while SR25 is composed of 105 different pairs of non-isomorphic 3-WL equivalent graphs [18]. We adopt the evaluation procedure proposed in [18] by comparing the graph representations obtained from each pair using random weights and reporting successfully distinguished pairs. In this experiment, there is no learning; the weights are initialised with different random seeds. We apply a forward pass of the models on each graph of a pair and then compare the embeddings of these graphs. If the norm of the difference of the embeddings is less than \(10^{-3}\), we consider that the graphs cannot be distinguished. We report the number of distinguished pairs. For this experiment, \(S^{(0)}=2\) and \(f_{n}^{(0)}=1\) since the graphs are purely structural. For \(l\in\{1,2\}\), we have \(S^{(l)}=f_{n}^{(l)}=16\), and \(S^{(3)}=10\), \(f_{n}^{(3)}=10\). There is no decision layer in this case.
We compare G\({}^{2}\)N\({}^{2}\) to GCN, GAT [19], GIN [5] and GNNML1, which are known to be upper-bounded by 1-WL. The results of these models are reported under the 1-WL-bounded GNN row in table 1. As expected, models bounded by 1-WL achieve 0% accuracy on both datasets. We also compare to models strictly stronger than 1-WL but weaker than 3-WL: ID-GNN [20], NGNN [21], GNNAK+ [22] and GNNML3 [18]. The results of these models are reported under the 3-WL-bounded GNN row in table 1. As expected, models bounded by 3-WL achieve 100% accuracy on EXP. Since CHEBNET [23] is a spectrally designed model, despite its closeness to 1-WL-bounded GNNs, it manages to distinguish pairs in EXP having different maximum eigenvalues. Last, we compare G\({}^{2}\)N\({}^{2}\) to PPGN, known to be 3-WL equivalent, and to I\({}^{2}\)-GNN [9]. For SR25, the experiments corroborate the theoretical results, as I\({}^{2}\)-GNN manages to distinguish all pairs of graphs, while all the other models upper-bounded by 3-WL fail.
Q2: Can G\({}^{2}\)N\({}^{2}\) learn to count cycles of length 3, 4, 5, 6 and chordal cycles at edge level?
For these experiments, we use the RandomGraph dataset [8] with the following partitioning: 1500 training graphs, 1000 validation graphs and 2500 test graphs. We create the ground truth according to the formulas in proposition 3.1 and theorem 3.3. The task is a regression on edges aiming to approximate the edge-level counting of 5 different substructures. For this experiment, \(S^{(0)}=2\) and \(f_{n}^{(0)}=1\) since the graphs are purely structural. We use 3, 4, 4, 5 and 7 layers respectively for the triangle, 4-cycle, chordal cycle, 5-cycle and 6-cycle counting tasks. For \(l\in\llbracket 1\,,7\rrbracket\), \(S^{(l)}=f_{n}^{(l)}=16\), and for the decision layer we apply 2 fully connected layers of size 32 and 1. The loss is an absolute error.
To keep every metric in the same order of magnitude, we used the normalised MAE [9], where the MAE value is divided by the standard deviation of the label. Results are given in table 2. They support proposition 3.1 and theorem 3.3: G\({}^{2}\)N\({}^{2}\) learns an approximation of \(C_{l}\) and thus **can learn to count cycles at edge level**. The slight increase of the metric for the 5-cycle and the 6-cycle can be explained by the fact that we use many more layers for these tasks, at the cost of slower convergence, as \(C_{5}\) and \(C_{6}\) need many more operations.
\begin{table}
\begin{tabular}{l l l}
\hline \hline
Method & EXP & SR25 \\
\hline
1-WL-bounded GNN & 0\% & 0\% \\
CHEBNET & 87\% & 0\% \\
3-WL-bounded GNN & **100\%** & 0\% \\
PPGN & **100\%** & 0\% \\
G\({}^{2}\)N\({}^{2}\) & **100\%** & 0\% \\
I\({}^{2}\)-GNN & **100\%** & **100\%** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: The accuracy on the EXP and SR25 datasets denotes the ratio of pairs of non-isomorphic (respectively 1-WL equivalent and 3-WL equivalent) graphs that are separated by the model.
Q3: Can G\({}^{2}\)N\({}^{2}\) learn low-pass, high-pass and band-pass filters in the spectral domain?
For these experiments, we use the protocol and datasets of [18]. It aims to answer Q3 through a node regression problem. The dataset is composed of three 30x30 2D grids, one each for training, validation and testing. We use the \(R^{2}\) score to compare the models' performance. We use 3 layers of G\({}^{2}\)N\({}^{2}\) with \(S^{(l)}=32\) and \(f_{n}^{(l)}=16\) for \(l\in\{1,2,3\}\). Our readout function is the identity over the last vectorial embedding. We finally apply two fully connected layers on the output of the readout function and then use an MSE loss to compare the output to the ground truth.
Table 3 reports the comparison of G\({}^{2}\)N\({}^{2}\) to MLP, GCN, GAT, GIN, CHEBNET, PPGN, GNNML1 and GNNML3, citing the results from [18]. As CHEBNET and GNNML3 are spectrally designed, they manage to learn low-pass and high-pass filters, with a metric higher than 0.99. PPGN, GNNML1 and G\({}^{2}\)N\({}^{2}\) are the only non-spectrally designed models that can learn high-pass filters with a metric higher than 0.98. It shows that only a part of G\({}^{2}\)N\({}^{2}\) grammar is needed to obtain high-pass filters. Only G\({}^{2}\)N\({}^{2}\) and the spectrally-designed GNNML3 and CHEBNET reach a metric higher than 0.81 when learning the band-pass filter. This result supports theorem 3.11. The results are not higher than 0.83 because the ground truth band-pass filter used for the experiment is very sharp.
Q4: Can G\({}^{2}\)N\({}^{2}\) generalize on downstream tasks of graph classification and regression?
For a regression task, we evaluate our model on a graph learning benchmark called the QM9 dataset [24, 25]. QM9 is composed of 130K small molecules and consists of 12 graph regression tasks. As in [7], the dataset is split into training, validation and test datasets randomly with a respective ratio of 0.8, 0.1 and 0.1. We use a model that predicts the whole targets (G\({}^{2}\)N\({}^{2}\)(12)).
G\({}^{2}\)N\({}^{2}\) results are compared to those in [9] and in [7], including 1-GNN and 1-2-3-GNN [1], DTNN [25], DeepLRP [8], NGNN, I\({}^{2}\)-GNN and PPGN, predicting the twelve targets and one target at a time. The metric is the MAE on the test set of the best validation model. The implemented architecture uses 3 layers of G\({}^{2}\)N\({}^{2}\) with \(S^{(l)}=f_{n}^{(l)}=128\) for \(l\in\{1,2,3\}\). The readout function is a sum over the components of \(H^{(3)}\) and of the diagonal and non-diagonal parts of \(\mathcal{C}^{(3)}\). Finally, 3 fully connected layers are applied before using an absolute error loss. Since our goal is to study the expressive power of our model, we intentionally use a model that fits on a single GPU. The setting of PPGN uses the same matrix embedding dimension (PPGN(12sm)) for comparison.
Partial results focusing on the three best models are given in table 4; complete results are given in table 5.
\begin{table}
\begin{tabular}{l l l l l}
\hline \hline
triangle & 4-cycle & 5-cycle & 6-cycle & chordal cycle \\
\hline
3.99e-04 & 4.55e-04 & 2.93e-03 & 3.58e-03 & 1.56e-04 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: G\({}^{2}\)N\({}^{2}\) normalised MAE on counting substructures at edge level.
\begin{table}
\begin{tabular}{l l l l}
\hline \hline
Method & Low-pass & High-pass & Band-pass \\
\hline
MLP & 0.9749 & 0.0167 & 0.0027 \\
GCN & 0.9858 & 0.0863 & 0.0051 \\
GAT & 0.9811 & 0.0879 & 0.0044 \\
GIN & 0.9824 & 0.2934 & 0.0629 \\
CHEBNET & **0.9995** & 0.9901 & **0.8217** \\
PPGN & 0.9991 & **0.9925** & 0.1041 \\
GNNML1 & 0.9994 & 0.9833 & 0.3802 \\
GNNML3 & **0.9995** & **0.9909** & **0.8189** \\
G\({}^{2}\)N\({}^{2}\) & **0.9996** & **0.9994** & **0.8206** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: \(R^{2}\) score on spectral filtering node regression problems. Results are medians of 10 different runs.
Note that G\({}^{2}\)N\({}^{2}\) obtains the best results on four targets when predicting the twelve at once, including \(R^{2}\), which is the hardest target to predict for other GNNs. The fact that the convergence does not go further for the other predictions can be explained by the size of our architecture. As shown by table 4, PPGN encounters the same issue when predicting the twelve targets with a small architecture.
For graph classification, we evaluate G\({}^{2}\)N\({}^{2}\) on the classical TUD benchmarks [26], using the evaluation protocol of [5]. Results of GNNs and Graph Kernel are taken from [27]. Since the number of edge and node features is very different from one dataset to another, we use from 3 to 5 layers of G\({}^{2}\)N\({}^{2}\) with \((S^{(l)},f_{n}^{(l)})\in\{8,16,32,64\}^{2}\). The readout function is the same as in regression tasks. We apply three fully connected layers before using a loss function adapted to the task.
As our goal was to study the expressive power of G\({}^{2}\)N\({}^{2}\), we did not push our architecture to its limits; doing so is left for future work. The parameter settings for each of the 6 experiments related to this dataset can be found in table 7.
## 5 Conclusion
This paper proposes a GNN design strategy based on the reduction of Context-Free Grammars equivalent to 1-WL and 3-WL. This strategy makes it possible to generate and analyse most of the existing models, but also to build new models with a targeted expressive power. In this context, the paper presents G\({}^{2}\)N\({}^{2}\), which is theoretically shown to be as expressive as the 3-WL test, to have strong substructure counting abilities, and to have a powerful spectral expressive power. A large number of experiments covering these properties corroborate the theoretical results. Beyond these results, we are convinced that our design strategy opens the door to models surpassing the 3-WL equivalence, taking as root a language manipulating tensors of higher order, such as the Tensor Language proposed in [34]. Moreover, we are confident that grammars far from the WL equivalence could lead to another generation of GNNs tailored for specific problems.
\begin{table}
\begin{tabular}{l l l l l l l}
\hline \hline
parameters & MUTAG & PTC & Proteins & NCI1 & IMDB-B & IMDB-M \\
\hline
\(f_{n}^{(0)}\) & 7 & 22 & 3 & 37 & 1 & 1 \\
\(S^{(0)}\) & 8 & 2 & 2 & 2 & 2 & 2 \\
number of layers \(l_{m}\) & 3 & 3 & 3 & 3 & 3 & 3 \\
\(f_{n}^{(l)},\ l\in\llbracket 1\,,l_{m}\rrbracket\) & 16 & 32 & 16 & 16 & 32 & 32 \\
\(S^{(l)},\ l\in\llbracket 1\,,l_{m}\rrbracket\) & 16 & 32 & 8 & 64 & 32 & 32 \\
decision layer dimension & 256/128/1 & 512/256/1 & 512/256/1 & 512/256/1 & 512/256/1 & 512/256/3 \\
loss & BCEloss & BCEloss & BCEloss & BCEloss & BCEloss & CEloss \\
\hline \hline
\end{tabular}
\end{table}
Table 7: G\({}^{2}\)N\({}^{2}\) parameter details for each dataset in our experiments on TUD.
\begin{table}
\begin{tabular}{l l l l l l l}
\hline \hline
Dataset & MUTAG & PTC & Proteins & NCI1 & IMDB-B & IMDB-M \\
\hline
WL kernel [28] & 90.4\(\pm\)5.7 & 59.9\(\pm\)4.3 & 75.0\(\pm\)3.1 & **86.0\(\pm\)1.8** & 73.8\(\pm\)3.9 & 50.9\(\pm\)3.8 \\
GNTK [29] & 90.0\(\pm\)8.5 & 67.9\(\pm\)6.9 & 75.6\(\pm\)4.2 & 84.2\(\pm\)1.5 & 76.9\(\pm\)3.6 & 52.8\(\pm\)4.6 \\
DGCNN [30] & 85.8\(\pm\)1.8 & 58.6\(\pm\)2.5 & 75.5\(\pm\)0.9 & 74.4\(\pm\)0.5 & 70.0\(\pm\)0.9 & 47.8\(\pm\)0.9 \\
IGN [6] & 83.9\(\pm\)13.0 & 58.5\(\pm\)6.9 & 76.6\(\pm\)5.5 & 74.3\(\pm\)2.7 & 72.0\(\pm\)5.5 & 48.7\(\pm\)3.4 \\
GIN [5] & 89.4\(\pm\)5.6 & 64.6\(\pm\)7.0 & 76.2\(\pm\)2.8 & 82.7\(\pm\)1.7 & 75.1\(\pm\)5.1 & 52.3\(\pm\)2.8 \\
PPGNs [7] & 90.6\(\pm\)8.7 & 66.2\(\pm\)6.6 & 77.2\(\pm\)4.7 & 83.2\(\pm\)1.1 & 73.0\(\pm\)5.8 & 50.5\(\pm\)3.6 \\
Natural GN [31] & 89.4\(\pm\)1.60 & 66.8\(\pm\)1.79 & 71.7\(\pm\)1.04 & 82.7\(\pm\)1.35 & 74.8\(\pm\)2.01 & 51.3\(\pm\)1.50 \\
WEGL [32] & N/A & 67.5\(\pm\)7.7 & 76.5\(\pm\)4.2 & N/A & 75.4\(\pm\)5.0 & 52.3\(\pm\)2.9 \\
GIN+GraphNorm [33] & 91.6\(\pm\)6.5 & 64.9\(\pm\)7.5 & 77.4\(\pm\)4.9 & 82.7\(\pm\)1.7 & 76.0\(\pm\)3.7 & N/A \\
GSNs [27] & **92.2\(\pm\)7.5** & 68.2\(\pm\)7.2 & 76.6\(\pm\)5.0 & 83.5\(\pm\)2.0 & **77.8\(\pm\)3.3** & **54.3\(\pm\)3.3** \\
G\({}^{2}\)N\({}^{2}\) & 92.0\(\pm\)4.3 & **71.8\(\pm\)6.7** & **77.8\(\pm\)3.2** & 80.2\(\pm\)2.1 & 76.8\(\pm\)2.8 & 54.0\(\pm\)2.93 \\
\hline \hline
\end{tabular}
\end{table}
Table 8: Results on the TUD datasets. The metric is accuracy; the higher, the better. |
2301.04285 | TAPS: Topology-Aware Intra-Operator Parallelism Strategy Searching
Algorithm for Deep Neural Networks | TAPS is a Topology-Aware intra-operator Parallelism strategy Searching
algorithm that generates intra-operator parallelism strategies by considering
both intra-node and inter-node bandwidth. Most of the existing auto-parallelism
works use the communication volume as the communication cost directly when
generating strategies, which we prove to be sub-optimal in multi-nodes cases.
We design a topology-aware cost model for multi-node intra-operator parallelism
strategy searching. Numerical experiments demonstrate that TAPS can generate
strategies with up to 85% fewer communication costs, which outperform the
latest baselines. | Peng Liang, Hao Zheng, Teng Su, Linbo Qiao, Dongsheng Li | 2023-01-11T03:11:45Z | http://arxiv.org/abs/2301.04285v1 | TAPS: Topology-Aware Intra-Operator Parallelism Strategy Searching Algorithm for Deep Neural Networks
###### Abstract
TAPS is a Topology-Aware intra-operator Parallelism strategy Searching algorithm that generates intra-operator parallelism strategies by considering both intra-node and inter-node bandwidth. Most of the existing auto-parallelism works use the communication volume as the communication cost directly when generating strategies, which we prove to be sub-optimal in multi-nodes cases. We design a topology-aware cost model for multi-node intra-operator parallelism strategy searching. Numerical experiments demonstrate that TAPS can generate strategies with up to 85% fewer communication costs, which outperform the latest baselines.
## 1 Introduction
Large-scale Deep Learning (DL) models have attracted tremendous attention in recent years for their great performance improvements in fields such as [3, 9, 16], a result of scaling up model sizes and dataset sizes. For example, PaLM, with 540 billion parameters, is trained with a corpus of 780 billion tokens that represent a wide range of natural language use cases [5].
As the model size significantly increases, training models with a single device, or even within a single node, is no longer practical. Thus, researchers use distributed deep learning to train these models [19]. Manual strategies like [17] have been widely used in training transformer-based models for their good performance. However, they are often not optimal, because the optimal parallelism strategy varies when the model or training environment changes, in which case researchers and engineers may need to redesign the strategy.
To relieve us from the parallelism design procedure, researchers propose auto-parallelism algorithms [4, 7, 23] that can find decent strategies given a specific model and environment. These algorithms first model parallelism strategies' communication costs and then use a dynamic programming or an integer linear programming (ILP) method to find the optimal strategy.
As model size grows larger, a single node can no longer hold an entire large-scale model. Thus, using multi-nodes to train a model becomes necessary. Our key observation is that in a multi-node environment, the bandwidth within a node (intra-node bandwidth) and across nodes (inter-node bandwidth) are different, and the intra-node bandwidth is much higher than inter-node bandwidth in most cases. However, existing searching algorithms model the communication cost using the communication volume directly, ignoring the difference between the bandwidths and resulting in sub-optimal strategies. Based on this observation, we propose a topology-aware parallelism strategy searching algorithm called TAPS, which can capture the difference between intra-node and inter-node communication and thus generates better parallelism strategies.
We first construct a topology-aware cost model, which can determine the number of inter-node communications as well as the topology-aware communication cost given a communication axis of a tensor. Then we formalize the strategy searching problem as an integer linear programming problem and use a third-party solver to obtain the final strategy decision.
In summary, we make the following contributions:
* We prove that the volume-based communication cost model is insufficient for generating optimal intra-operator parallelism strategies in multi-node cases.
* We provide a heuristic solution for optimizing tensor redistribution sequences.
* We analyze the communication in multi-node environments and propose a topology-aware communication cost model, which can calculate more accurate communication costs of a parallelism strategy of an operator.
* We design and implement TAPS, a strategy-searching algorithm that works for distributed DL.
* We numerically evaluate TAPS on several models of different configurations. We compare TAPS with volume-based searching. Our experiments show that TAPS can find strategies with up to 85% fewer communication costs.
## 2 Background
### Existing Parallelism Methods
Since Krizhevsky et al. [8] trained AlexNet using two GPUs in 2012, researchers have proposed many parallelism methods, including data parallelism (DP), model parallelism (MP), and pipeline parallelism (PP).
#### 2.1.1 Data Parallelism
Data parallelism partitions and distributes the data across devices, each of which holds a replica of the model. Each device computes gradients on its data shard and uses collective communication such as AllReduce or Broadcast to synchronize the gradients or model parameters with the other devices, so that after every iteration the models on all workers are identical.
#### 2.1.2 Model Parallelism
Model parallelism partitions the model parameters across devices and lets all devices process the same data. Model parallelism produces partial-sum or sliced results when the parameter matrix is partitioned row-wise or column-wise, respectively. Row-wise MP (Row-MP) requires synchronization to unify the operator's results on different devices. Column-wise MP (Column-MP) requires synchronization only in the backward propagation.
#### 2.1.3 Pipeline Parallelism
Pipeline parallelism partitions the operators in a model into several stages and lets each device hold only one or a few of them. Meanwhile, PP splits a mini-batch of data into several micro-batches and feeds them one by one into the first stage. When a stage finishes its computation, it sends the result to the next stage. Different stages can execute simultaneously; thus, PP forms a pipeline that improves performance.
### Intra- and Inter-Operator Parallelism
Alpa [23] catalogs existing parallelism methods into two orthogonal categories: intra-operator and inter-operator parallelism. Intra-operator parallelism covers schemes that partition an operator's involved tensors along some dimensions, assign the resulting partitioned computation to multiple devices, and let them execute different parts of the computation simultaneously. From this view, data parallelism is a scheme that partitions an operator's input and output tensors along the batch-size axis; Row-MP partitions an operator's input and weight tensors along the channel-in axis; and Column-MP partitions the weight and output tensors along the channel-out axis. Inter-operator parallelism, including pipeline parallelism, partitions models into several stages, each containing multiple operators.
This paper focuses on generating multi-dimensional intra-operator parallelism strategies in multi-node environments.
### Strategy Searching Algorithm
Researchers have proposed methods to search parallelism strategies automatically. ToFu [21], TensorOpt [4], and Alpa [23] generate intra-operator parallelism strategies by minimizing the overall communication cost of a computation graph, based on the observation that all the different strategies of an operator have the same computation cost. ToFu and TensorOpt adapt the dynamic programming algorithm proposed by OptCNN [7] to produce better results. Alpa formalizes the searching problem as an integer programming problem and uses a solver to handle the solution process. However, these works assume the bandwidth of a cluster is equal everywhere, ignoring the difference between intra-node and inter-node bandwidth. This assumption may prevent the searching algorithms from finding the optimal strategies, since in large-scale clusters the intra-node bandwidth is much higher than the inter-node bandwidth. In this paper, we propose a topology-aware communication cost model that is aware of the intra-node and inter-node bandwidths, which helps generate more fine-grained strategies.
## 3 Overview
TAPS is an algorithm that generates intra-operator parallelism strategies by minimizing the communication cost of the computation graph. TAPS takes a computation graph \(G=(V,E)\) and a device graph \(D=(V_{D},E_{D})\) as inputs, and outputs a partition set \(P\), which consists of the strategy decisions for every operator \(v_{i}\in V\) in \(G\). The computation graph contains operator information, such as shapes and operator types. The device graph indicates the device types and the bandwidth between devices. TAPS produces a solution in two steps. First, TAPS creates an auxiliary graph in which each node indicates an operator with a specific strategy, and computes the weight of each edge \((u,v)\) in the auxiliary graph, which equals the intra-operator communication cost of \(v\) plus the tensor redistribution communication cost between \(u\) and \(v\). Then, TAPS formalizes the searching problem as an integer linear programming problem using the information in the auxiliary graph and uses a third-party solver to obtain the optimal strategy.
## 4 Communication Cost Model
In this section, we give the details of our topology-aware communication cost model. We first describe the volume-based cost model; building on it, we then calculate the corresponding topology-aware communication costs using the volumes and effective bandwidths.
### Volume-based cost model
Previous works [4, 18, 20] model the communication cost of each strategy by symbolically computing its communication volume. The communication volume of an operator consists of intra-operator communication and inter-operator communication. Intra-operator communication reduces the partial sums generated during computation. Inter-operator communication transforms a tensor to fit the succeeding operator's strategy.
#### 4.1.1 Intra-operator communication
Taking MatMul as an example, its forward computation is shown in Eq. 1, and its backward computation in Eqs. 2 and 3.
\[Y=XW \tag{1}\]
\[\delta W=X^{T}E_{y} \tag{2}\]
\[E_{x}=E_{y}W^{T} \tag{3}\]
Let \(d\), \(r\), \(c\) denote the data parallelism (DP) [10], Row-MP, and Column-MP [17] degrees of a MatMul operator, respectively; \(p=drc\) denotes the total device number and is a power of 2.
Then we split the matrices \(X\) and \(W\) as follows:
\[X=\begin{bmatrix}X_{11}&X_{12}&\ldots&X_{1r}\\ X_{21}&X_{22}&\ldots&X_{2r}\\ \vdots&\vdots&\ddots&\vdots\\ X_{d1}&X_{d2}&\ldots&X_{dr}\end{bmatrix},\quad W=\begin{bmatrix}W_{11}&W_{12}&\ldots&W_{1c}\\ W_{21}&W_{22}&\ldots&W_{2c}\\ \vdots&\vdots&\ddots&\vdots\\ W_{r1}&W_{r2}&\ldots&W_{rc}\end{bmatrix}.\]
After splitting the matrices \(X\) and \(W\), we distribute their sub-blocks to the corresponding devices. As shown in Figure 1(a), where each cube represents a device, each sub-block of \(X\) is replicated along axis \(c\), and \(W\) is replicated along axis \(d\). As Figures 1(b) and 1(c) show, we then compute the local results of \(Y\) on each device and communicate them to form the final matrix \(Y\). The communication is a reduction of the local results and is mathematically equivalent to Eq. 4.
\[Y_{ij}=\sum_{k=1}^{r}X_{ik}W_{kj} \tag{4}\]
The final matrix \(Y\) is split as:
\[Y=\begin{bmatrix}Y_{11}&Y_{12}&\ldots&Y_{1c}\\ Y_{21}&Y_{22}&\ldots&Y_{2c}\\ \vdots&\vdots&\ddots&\vdots\\ Y_{d1}&Y_{d2}&\ldots&Y_{dc}\end{bmatrix},\]
where each sub-block \(Y_{ij}\) is replicated along the \(r\) axis.
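To make the block partitioning and the partial-sum reduction of Eq. 4 concrete, the following NumPy sketch (our illustration; all names are ours, not part of TAPS) simulates the \(d=r=c=2\) partitioning of Figure 1 on a single machine and checks that reducing the local products along the \(r\) axis reproduces the full product.

```python
import numpy as np

# Our illustrative sketch of Eq. 4: block-partitioned MatMul with d = r = c = 2.
d, r, c = 2, 2, 2
b, cin, cout = 4, 6, 8  # unpartitioned shapes: X is (b, cin), W is (cin, cout)

rng = np.random.default_rng(0)
X = rng.standard_normal((b, cin))
W = rng.standard_normal((cin, cout))

# Split X into d x r blocks and W into r x c blocks, as in Section 4.1.1.
X_blocks = [np.hsplit(row, r) for row in np.vsplit(X, d)]  # X_blocks[i][k] = X_ik
W_blocks = [np.hsplit(row, c) for row in np.vsplit(W, r)]  # W_blocks[k][j] = W_kj

# Device (i, k, j) holds X_ik and W_kj and computes a local partial product;
# the AllReduce along the r axis sums these partial results (Eq. 4).
Y_blocks = [[sum(X_blocks[i][k] @ W_blocks[k][j] for k in range(r))
             for j in range(c)] for i in range(d)]
Y = np.block(Y_blocks)

assert np.allclose(Y, X @ W)  # the reduced blocks reassemble the full product
```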
Suppose we use a bandwidth-optimal Ring-AllReduce algorithm [15]; then the communication volume of a MatMul operator for accumulating the results of \(Y\) on each device (i.e., the volume of Row-MP) is:
\[V_{AR}^{Y}=\frac{2(device\_num-1)}{device\_num}data\_size=\frac{2(r-1)b\ out}{drc}. \tag{5}\]
Similarly, we give the communication volumes of DP and Column-MP in a MatMul operator by computing the communication volumes of accumulating the results of \(\delta W\) and \(E_{x}\), respectively, which are:
\[V_{AR}^{\delta W}=\frac{2(d-1)in\ out}{drc}, \tag{6}\]
\[V_{AR}^{Ex}=\frac{2(c-1)b\ in}{drc}. \tag{7}\]
Finally, the overall communication volume of a MatMul operator is:
\[Volume=\frac{2((d-1)in\ out+(r-1)b\ out+(c-1)b\ in)}{drc} \tag{8}\]
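As a sanity check on Eqs. 5–8, the helper below (a sketch of ours, not TAPS's API) evaluates the three Ring-AllReduce volumes and their total for a given MatMul strategy \((d,r,c)\).

```python
# Our sketch of the volume-based cost model of a MatMul operator (Eqs. 5-8).
def matmul_allreduce_volumes(d, r, c, b, cin, cout):
    """Ring-AllReduce communication volumes (in elements) of a MatMul with
    strategy degrees (d, r, c) and unpartitioned dimensions (b, cin, cout)."""
    p = d * r * c                          # total device number
    v_y = 2 * (r - 1) * b * cout / p       # Eq. 5: reduce partial sums of Y (Row-MP)
    v_dw = 2 * (d - 1) * cin * cout / p    # Eq. 6: reduce gradients dW (DP)
    v_ex = 2 * (c - 1) * b * cin / p       # Eq. 7: reduce E_x (Column-MP)
    return v_y, v_dw, v_ex, v_y + v_dw + v_ex  # Eq. 8: overall volume

# Example: strategy (d, r, c) = (8, 2, 2) on a (1024, 4096, 4096) MatMul.
print(matmul_allreduce_volumes(8, 2, 2, 1024, 4096, 4096))
```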
#### 4.1.2 Inter-operator communication
Inter-operator communication happens when there are tensor redistributions between two operators. A tensor redistribution is a sequence that consists of several redistribution operators such as AllGather, Slice, and AllToAll. In this subsection, we give our solution for generating proper redistribution operator sequences.
Let \(O_{out},O_{in}\) denote two operators and \(T\) denote the \(n\)-dimensional output tensor of \(O_{out}\), which is also the input tensor of \(O_{in}\). \(S_{T}=[s_{0},s_{1},\ldots,s_{n-1}]\) is the shape of \(T\) before partition. Suppose the depths of the device matrices of \(O_{out}\) and \(O_{in}\) are \(h_{out}\) and \(h_{in}\). The device matrices in \(O_{out}\) and \(O_{in}\) are \(\mathcal{D}_{out}=[d_{out,h_{out}-1},d_{out,h_{out}-2},\ldots,d_{out,0}]\) and \(\mathcal{D}_{in}=[d_{in,h_{in}-1},d_{in,h_{in}-2},\ldots,d_{in,0}]\), respectively. The tensor maps of \(T\) in \(O_{out}\) and \(O_{in}\) are \(M_{out}=[m_{out,0},m_{out,1},\ldots,m_{out,n-1}]\) and \(M_{in}=[m_{in,0},m_{in,1},\ldots,m_{in,n-1}]\), respectively. To perform the tensor redistribution, the device matrices and tensor shapes of \(O_{in}\) and \(O_{out}\) must be the same. We unify them in two steps. In step 1, we unify the device matrices by factorizing some dimensions in the two device matrices, which may result in a shape inconsistency of \(T\) between the two operators. Thus, in step 2, we additionally unify the tensor shape under the unified device matrix. Note that the two-step unification does not change the physical distribution of the tensor. Table 1 shows an example of unifying a 2-dimensional tensor between \(O_{out}\) and \(O_{in}\). In step 1, we factorize the "8" in the two device matrices and replace it by the factorizations \([4,2]\) and \([2,4]\) for \(O_{out}\) and \(O_{in}\), respectively. Meanwhile, we must change the tensor maps and shapes as we modify the device matrices. Since the tensor shapes change in step 1, we need to unify them again before we infer the tensor redistribution operators. In step 2, we reshape the tensor in \(O_{in}\) and \(O_{out}\) so that they have the same shape and modify the tensor maps simultaneously.
After unifying the device matrix and tensor shape, we can infer the redistribution operators. A naive way to do the redistribution is to AllGather along all the workers and then partition along the axes that are not repetitive. To reduce the communication cost, we use the heuristic Algorithm 1 to generate tensor redistribution operators. Our algorithm contains three optimizations. First, we only AllGather along the necessary axes of the tensor, i.e., those partitioned in \(O_{out}\) and replicated in \(O_{in}\). Second, we rearrange the redistribution sequence, putting dependent Slices before AllGathers to reduce the communication volume that AllGather produces. Third, we replace the implicit permutations (i.e., an AllGather and a Slice along the same axis of the device matrix) with AllToAll operators, further reducing the communication volume. In Algorithm 1, \(InferSlice\) finds all necessary SliceOps and appends them to the operator sequence \(S\). If there is no more SliceOp, \(InferSlice\) sets \(S\_flag\) to \(False\). Similarly, \(InferAllToAll\) and \(InferAllGather\) do the same for AllToAllOp and AllGatherOp. Table 2 shows an example of using the three optimizations mentioned above to fine-tune the redistribution sequence.
Finally, we obtain the inter-operator communication volume of such a tensor redistribution by accumulating the communication volumes of the redistribution operators within the sequence \(S\). Suppose we are using a bandwidth-optimal Ring-AllGather
| **Step** | **Operator** | **Device Matrix** | **Tensor Map** | **Tensor Shape** |
| --- | --- | --- | --- | --- |
| 0: Initial | \(O_{out}\) | \([2,8]\) | \([1,0]\) | \([s_{0},s_{1}]\) |
| | \(O_{in}\) | \([8,2]\) | \([1,0]\) | \([s_{0},s_{1}]\) |
| 1: Unifying device matrix | \(O_{out}\) | \([2,4,2]\) | \([2,1,0]\) | \([s_{0},4,s_{1}/4]\) |
| | \(O_{in}\) | \([2,4,2]\) | \([2,1,0]\) | \([2,s_{0}/2,s_{1}]\) |
| 2: Unifying tensor shape | \(O_{out}\) | \([2,4,2]\) | \([2,-1,1,0]\) | \([2,s_{0}/2,4,s_{1}/4]\) |
| | \(O_{in}\) | \([2,4,2]\) | \([2,1,0,-1]\) | \([2,s_{0}/2,4,s_{1}/4]\) |

Table 1: Unifying Device Matrix and Tensor Shape of Tensor \(T\)
Table 2: Tensor Redistribution Between \(M_{from}=[-1,-1,2,-1,3]\) and \(M_{to}=[1,-1,-1,0,3]\), listing the redistribution operator sequence and its communication volume after each optimization step (0: initial naive sequence; 1: AllGather only along the necessary axes; 2: Slices rearranged before AllGathers; 3: implicit permutations replaced by AllToAll), with the communication volume decreasing at each step.
Figure 1: Multi-dimensional intra-operator parallelism of a MatMul operator on 8 devices, where \(d=r=c=2\)
algorithm; the communication volume of AllGather is:
\[V_{AG}=(device\_num-1)data\_size; \tag{9}\]
For AllToAll operators, each device only needs to send \(\frac{1}{device\_num}\) different data to each other in the communication group. Thus, the communication volume of AllToAll is:
\[V_{A2A}=\frac{(device\_num-1)data\_size}{device\_num}. \tag{10}\]
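Eqs. 9 and 10 translate directly into code; a minimal sketch (ours) is:

```python
# Our sketch of Eqs. 9-10: per-device volumes of the redistribution operators,
# assuming bandwidth-optimal ring algorithms.
def allgather_volume(device_num, data_size):
    return (device_num - 1) * data_size                 # Eq. 9

def alltoall_volume(device_num, data_size):
    return (device_num - 1) * data_size / device_num    # Eq. 10
```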
As we can see in Table 2, the communication volume of the operator sequence that Algorithm 1 generates is much smaller.
We then blend the bandwidth difference into the volume-based cost model to form our topology-aware cost model.
```
Data: D = [d_{h-1}, d_{h-2}, ..., d_0];
      M_from = [m_{from,0}, m_{from,1}, ..., m_{from,n-1}];
      M_to = [m_{to,0}, m_{to,1}, ..., m_{to,n-1}]
Result: Redistribution operator sequence S
while M_from != M_to do
    S_flag <- True;
    while S_flag do
        S_flag <- InferSlice(M_from, M_to, S);
    end while
    A2A_flag <- True;
    while A2A_flag do
        A2A_flag <- InferAllToAll(M_from, M_to, S);
    end while
    AG_flag <- InferAllGather(M_from, M_to, S);
    if AG_flag == False then
        AllGatherFirstUndoneDim(M_from, M_to, S);
    end if
end while
```
**Algorithm 1** Optimized Tensor Redistribution
### Topology-aware cost model
Based on the volume-based cost model, we develop a topology-aware cost model that additionally considers the bandwidth difference when calculating communication costs. TAPS uses this topology-aware cost model to generate more fine-grained strategies.
Our observation is that in a multi-node environment, multiple intra-node communications of different communication groups can proceed simultaneously, each fully utilizing the bandwidth; but inter-node communications must share the links between nodes, lowering the effective bandwidth of each communication group. Figure 2(a) shows an environment of two DGX-V100 nodes, where intra-node communication uses high-bandwidth NVLink and inter-node communication uses 100 Gbps InfiniBand. In Figure 2(b), we communicate along axis 0. There are 8 communication groups: (GPU0, GPU1), (GPU2, GPU3), and so on. Each of them has an individual NVLink to use, so the effective bandwidth equals the bandwidth of NVLink. The case in Figure 2(c) also has 8 communication groups: (GPU0, GPU8), (GPU1, GPU9), and so on. However, all of them must transport data via the only inter-node link (i.e., the red line in the figure). Since they communicate simultaneously, we must divide the bandwidth by 8; thus, the effective bandwidth is \(12.5/8=1.5625\) GB/s in this case.
Based on this observation, TAPS computes the number of inter-node communication groups within a node for AllReduce, AllGather, and AllToAll operators to obtain the effective bandwidth of every communication group. TAPS then computes the communication costs by dividing the communication volumes by the effective bandwidths.
```
Data: Device matrix D = (d_{h-1}, d_{h-2}, ..., d_0); tensor map
      M = (m_0, m_1, ..., m_{s-1}); device number in a node local_device_num
Result: Number of inter-node communication groups ct
remain_devices <- local_device_num;
Total_devices <- product(D);
parallel_degree <- Total_devices;
device_in <- 1;
for k <- 0 to s-1 do
    parallel_degree <- parallel_degree / d_{m_k};
end for
for k <- 0 to h-1 do
    if not M.find(k) and remain_devices > 1 then
        if remain_devices > d_k then
            device_in <- device_in * d_k;
        else
            device_in <- device_in * remain_devices;
        end if
        remain_devices <- remain_devices / d_k;
    end if
end for
if device_in >= parallel_degree then
    ct <- 0;
else if device_in > 1 then
    ct <- local_device_num / device_in;
else
    ct <- local_device_num;
end if
```
**Algorithm 2** Infer the number of inter-node communication groups within a node for AllReduce
#### 4.2.1 AllReduce
For an arbitrary AllReduce operator, we first compute the inter-communication times of its input tensor using Algorithm 2. Algorithm 2 takes the device matrix, the tensor map of the communicated tensor, and the number of devices in a node as inputs, and then infers the number of communication groups that need to perform inter-node communication.
The inter-communication times indicate how many communication groups do inter-node communications for a tensor simultaneously. For example, suppose we are executing a MatMul operator with strategy \((d,r,c)=(8,2,2)\) with different device maps as shown in Figure 3. The same-color cubes are devices within a node. In Figure 3(a), there are 4 different \(\delta W\)
partitions within a node, and they all need to communicate with other nodes. Thus, the inter-communication times \(ct_{\delta W}\) is 4 in this case. The tensor \(\delta W\)'s inter-communication times \(ct_{\delta W}\) are 4, 0, and 2 in Figures 3(a), (b), and (c), respectively.
Using the result of \(ct\), we can compute the effective bandwidth \(B_{e}\):
\[B_{e}=\left\{\begin{array}{ll}B_{intra}&ct_{T}=0,\\ B_{inter}/ct_{T}&ct_{T}>0,\end{array}\right. \tag{11}\]
where \(B_{intra}\) is the intra-node communication bandwidth, and \(B_{inter}\) is the inter-node communication bandwidth.
Finally, the communication cost of an AllReduce operator is:
\[C_{AR}=V_{AR}/B_{e}. \tag{12}\]
#### 4.2.2 AllGather
Different from AllReduce, AllGather uses Algorithm 3 to compute its inter-communication times. Algorithm 3 computes the repetition degree \(r\) of an AllGather within a device node and infers how many devices of a communication group lie within a node. Using this information, it then outputs the \(ct\) value of the corresponding AllGather. TAPS again uses Eq. 11 to compute \(B_{e}\) for AllGather. The communication cost of an AllGather operator is:
\[C_{AG}=V_{AG}/B_{e}. \tag{13}\]
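A minimal sketch (ours; the function names and default bandwidths are illustrative, matching the 60 GB/s and 6 GB/s values used in Section 7) of Eqs. 11–13, which also reproduces the effective-bandwidth example of Figure 2(c):

```python
# Our sketch of Eqs. 11-13: effective bandwidth and topology-aware costs.
def effective_bandwidth(ct, b_intra, b_inter):
    """Eq. 11: ct inter-node groups share the inter-node link;
    ct == 0 means the whole group communicates within one node."""
    return b_intra if ct == 0 else b_inter / ct

def allreduce_cost(v_ar, ct, b_intra=60.0, b_inter=6.0):   # Eq. 12
    return v_ar / effective_bandwidth(ct, b_intra, b_inter)

def allgather_cost(v_ag, ct, b_intra=60.0, b_inter=6.0):   # Eq. 13
    return v_ag / effective_bandwidth(ct, b_intra, b_inter)

# Figure 2(c): 8 groups share one 12.5 GB/s inter-node link -> 1.5625 GB/s each
# (the 300 GB/s intra-node NVLink figure here is illustrative).
print(effective_bandwidth(ct=8, b_intra=300.0, b_inter=12.5))
```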
#### 4.2.3 AllToAll
Unlike AllReduce and AllGather, which utilize ring topology to communicate, AllToAll uses peer-to-peer (P2P) communication to exchange data within a communication group. While each node in AllReduce and AllGather has only one send link and receive link, each node in AllToAll establishes \(p-1\) send and receive links that connect other nodes in the communication group, where \(p\) is the device number of the AllToAll communication group. This may influence the communication volume we use to compute communication costs. Therefore, we need to recompute the communication
Figure 3: Device Distributions of Different Device Maps and Device Matrices
Figure 2: Intra-communication and Inter-communication on 2 DGX-V100 nodes
volume for AllToAll. Suppose that among \(p\) devices, \(k\) devices are within a node, and the tensor size is \(T_{S}\). Then for any device in a node, there are \(k-1\) intra-node communications with volume \(T_{S}/p\) each, and \(p-k\) inter-node communications with volume \(T_{S}/p\) each. These \(k\) devices establish \((p-k)k\) connections in total to other nodes, each transporting a volume of \(T_{S}/p\). Then, for this AllToAll communication group, the communication volume over the inter-node link is \(k(p-k)T_{S}/p\). Suppose there are \(l\) devices in a node; then there are \(l/k\) different AllToAll communication groups within the node. We additionally suppose their repetition degree is \(r\). The repetitive tensor slices can share the same communication results by synchronizing within a node over the high-bandwidth links. Then, within a node, \(ct=l/(kr)\) groups simultaneously use the inter-node bandwidth to communicate, unless \(p\) equals \(k\). The values of \(k\) and \(r\) can also be inferred using Algorithm 3. The effective bandwidth \(B_{e}\) is computed using Eq. 11. Thus, the communication cost of AllToAll is:
\[C_{A2A}=\left\{\begin{array}{ll}\frac{V_{A2A}}{B_{e}}&p=k,\\ \frac{k(p-k)}{p-1}\frac{V_{A2A}}{B_{e}}&p>k,\end{array}\right. \tag{14}\]
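Eq. 14 can likewise be sketched in a few lines (our illustration; `v_a2a` is the volume from Eq. 10 and `b_e` the effective bandwidth from Eq. 11):

```python
# Our sketch of Eq. 14: topology-aware cost of an AllToAll over a group of
# p devices, k of which lie in the same node.
def alltoall_cost(v_a2a, p, k, b_e):
    if p == k:  # the whole communication group fits inside one node
        return v_a2a / b_e
    return (k * (p - k)) / (p - 1) * v_a2a / b_e
```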
```
Data: Computation graph G = (V, E), device graph D = (V_D, E_D)
Result: Auxiliary graph G_A = (V_A, E_A)
V_A <- {}; E_A <- {};
device_num <- |V_D|;
for (u, w) in E do
    U_A <- GenerateStrategySet(u, device_num);
    W_A <- GenerateStrategySet(w, device_num);
    V_A <- V_A ∪ U_A ∪ W_A;
    for u_a in U_A do
        for w_a in W_A do
            E_A <- E_A ∪ {(u_a, w_a)};
        end for
    end for
end for
```
**Algorithm 4** GenerateAuxiliaryGraph
## 5 Auxiliary Graph
Auxiliary graph \(G_{A}=(V_{A},E_{A})\) is an extension of the computation graph \(G\) in which each node \(v_{a}\in V_{A}\) indicates a unique strategy of its original vertex \(v\in V\). We use Algorithm 4 to generate the auxiliary graph, as Figures 4(a) and 4(b) show. For each \(v_{a}\in V_{A}\), we label it with the original operator and a unique strategy. The function GenerateStrategySet in Algorithm 4 enumerates all possible strategies of the input operator and creates corresponding auxiliary nodes for them. More specifically, GenerateStrategySet generates \(\sum_{i=1}^{\min(p,n)}i!\binom{p}{i}\binom{n-1}{i-1}\) different parallelism strategies when there are \(p\) different partitionable axes in the operator and the operator is held by \(N=2^{n}\) devices. A parallelism strategy of an operator consists of the parallelism degree and mapping of each axis. We can use the parallelism degrees to determine the partitions of each involved tensor of the operator and place them on the corresponding devices according to the mappings. For example, suppose a matrix multiplication (MatMul) operator that computes \(Y=XW\) can be partitioned along three axes: the \(b\) axis, the \(in\) axis, and the \(out\) axis; the unpartitioned shapes of \(X\), \(W\), and \(Y\) are \((b,in)\), \((in,out)\), and \((b,out)\), respectively. Table 3 shows the strategy set that GenerateStrategySet generates when it takes the above MatMul operator and a device number of 4 as inputs. Taking \(u_{2}\) in Table 3 for illustration, the number 2 on the \(out\) axis indicates that tensors \(W\) and \(Y\) are sliced along the \(out\) axis into 2 parts; the device map \((-1,1,0)\) indicates that the mapping values of the \(b\), \(in\), and \(out\) axes are -1, 1, and 0, respectively. A value of -1 means the tensors are replicated along the \(b\) axis, while 0 indicates that the \(out\) axis is partitioned along the innermost device dimension of the cluster, which typically offers high bandwidth for communication. The device matrix \((1,2,2)\) is calculated from the parallelism degree of each axis and the device map, and it represents a hierarchical logical topology of the devices.
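For reference, the strategy-set size given above can be evaluated with a few lines of Python (our sketch; the function name is ours):

```python
from math import comb, factorial

# Our sketch of the strategy-set size from Section 5: an operator with p
# partitionable axes on N = 2^n devices admits
# sum_{i=1}^{min(p, n)} i! * C(p, i) * C(n-1, i-1) parallelism strategies.
def num_strategies(p, n):
    return sum(factorial(i) * comb(p, i) * comb(n - 1, i - 1)
               for i in range(1, min(p, n) + 1))

print(num_strategies(p=3, n=2))  # MatMul with axes (b, in, out) on 4 devices
```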
After creating the vertices and edges of the auxiliary graph, we then compute the communication cost \(C_{u_{a}w_{a}}\) of all \(e_{a}=(u_{a},w_{a})\in E_{A}\) using our topology-aware cost model. The weight of \(e_{a}\) equals the intra-operator cost of \(w_{a}\) plus inter-operator cost between \(u_{a}\) and \(w_{a}\).
Then, we can search strategies by selecting vertices and
edges in the auxiliary graph. Figure 4 shows an example of the search result, where blue vertices are selected strategies.
## 6 Searching Strategies by ILP
We formalize the strategy searching problem as the following ILP problem:
\[\min\ \sum_{(i,j)\in E_{A}}B_{ij}C_{ij} \tag{15}\]
\[\text{s.t.}\quad\sum_{v_{a}\in V_{A}}X_{v_{a}}=1,\quad\forall v\in V \tag{16}\]
\[\sum_{(i,v_{a})\in E_{A}}B_{iv_{a}}=X_{v_{a}}\times in\_degree(v),\quad\sum_{(v_{a},k)\in E_{A}}B_{v_{a}k}=X_{v_{a}}\times out\_degree(v),\quad\forall v_{a}\in V_{A} \tag{17}\]
\[\sum_{(i,j)\in E_{A}}B_{ij}M_{ij}<Device\_Memory \tag{18}\]
\[B_{ij},X_{k}\in\{0,1\},\quad\forall(i,j)\in E_{A},\ \forall k\in V_{A} \tag{19}\]
where \(X_{v_{a}}\) and \(B_{ij}\) are to-be-solved Boolean variables that indicate the selection of vertex \(v_{a}\in V_{A}\) and edge \((i,j)\in E_{A}\), respectively. \(C_{ij}\) and \(M_{ij}\) are the communication and memory costs of edge \((i,j)\in E_{A}\). Equation 16 informs the solver that exactly one strategy is selected for each \(v\in V\). Equation 17 constrains every \(v_{a}\in V_{A}\) to have the same indegree and outdegree as its original vertex \(v\in V\). To avoid selecting multiple strategies for \(v\in V\), we set the indegree and outdegree of \(v_{a}\in V_{A}\) to zero if it is not selected. Equation 18 restricts the solver to overall strategies that do not exceed the device memory.
Instead of dynamic programming, we use integer linear programming for two reasons. First, dynamic programming methods like [7, 21] cannot capture the overall memory cost during processing, which might generate strategies that exceed memory constraints. Although methods like [4] maintain a communication-memory-cost bound to avoid this drawback, their computational complexity is unacceptable when generating strategies for large-scale models. Second, we can directly use a high-performance third-party solver for the ILP problem, which saves us the effort of implementing and optimizing a solver ourselves.
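To make the formulation concrete, here is a toy sketch of Eqs. 15–19 using the open-source PuLP solver (our illustration only; it encodes a two-operator chain where each original vertex has in/out degree at most 1, and it is not TAPS's actual solver interface):

```python
import pulp

# Toy auxiliary graph: original operators u -> w, each with two strategies.
strategies = {"u": ["u0", "u1"], "w": ["w0", "w1"]}
edges = [(ua, wa) for ua in strategies["u"] for wa in strategies["w"]]
cost = {("u0", "w0"): 4.0, ("u0", "w1"): 7.0,
        ("u1", "w0"): 2.0, ("u1", "w1"): 9.0}   # C_ij (illustrative numbers)
mem = {e: 1.0 for e in edges}                    # M_ij (illustrative numbers)
MEM_LIMIT = 8.0

prob = pulp.LpProblem("taps_toy", pulp.LpMinimize)
X = {va: pulp.LpVariable(f"X_{va}", cat="Binary")
     for vas in strategies.values() for va in vas}
B = {e: pulp.LpVariable(f"B_{e[0]}_{e[1]}", cat="Binary") for e in edges}

prob += pulp.lpSum(B[e] * cost[e] for e in edges)              # Eq. 15
for v, vas in strategies.items():
    prob += pulp.lpSum(X[va] for va in vas) == 1               # Eq. 16
for ua in strategies["u"]:                                     # Eq. 17 (out-degree 1)
    prob += pulp.lpSum(B[(ua, wa)] for wa in strategies["w"]) == X[ua]
for wa in strategies["w"]:                                     # Eq. 17 (in-degree 1)
    prob += pulp.lpSum(B[(ua, wa)] for ua in strategies["u"]) == X[wa]
prob += pulp.lpSum(B[e] * mem[e] for e in edges) <= MEM_LIMIT  # Eq. 18

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([va for va, x in X.items() if x.value() == 1])  # expect ['u1', 'w0']
```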
## 7 Evaluation
We evaluate TAPS by comparing the communication costs of strategies generated by volume-based searching and by topology-aware searching. In our evaluation, we assume the intra-node bandwidth equals 60 GB/s and the inter-node bandwidth equals 6 GB/s; these are the peak bandwidths we measured on two 8-V100 nodes using nccl-tests [1]. Additionally, we assume that all communication can fully utilize the bandwidth and that we are running in a homogeneous environment.
### Searching Runtime
We test the runtime of searching strategies for AlexNet [8] and Megatron-LM [14, 17]. Note that the main body of a transformer-based network consists of several layers with the same structure. Given that identical structures always receive identical strategies when the devices they use are homogeneous, we only search strategies for one transformer layer of these networks. The applied solver can solve the strategies within a few seconds on a 16-core 3.2 GHz Intel i9-12900K CPU. In our searching runtime experiments, we assume each node has 8 devices.
Table 4 shows examples of the running time for solving strategies, where \(|V_{D}|\) is the total number of devices and \(|E_{A}|\) is the number of auxiliary edges. The search time is independent of the number of a model's parameters; instead, it depends on the number of the model's operators and the total device number. For example, the transformer layers of Megatron-LM 1.7B and 3.6B have the same structure but different parameter counts. We follow the configurations in [14], searching intra-operator strategies for both 1.7B and 3.6B on 32 devices overall. As Table 4 shows, their \(|E_{A}|\) values and solving times remain in the same order of magnitude.
Figure 4: Auxiliary Computation Graph
### Comparison with Volume-Based Searching
We compare the communication costs of the strategies found by volume-based searching and by topology-aware searching. In our experiments, we search strategies for the convolutional network AlexNet and the transformer-based Megatron-LM networks. We perform volume-based searching by replacing the communication costs of the auxiliary edges with their corresponding communication volumes, after which we use the same solver to search for the strategies. We then compute the communication costs of the resulting volume-based strategies using our topology-aware cost model. The comparison results are
| **Model** | \(|V_{D}|\) | \(|E_{A}|\) | **Time** |
| --- | --- | --- | --- |
| AlexNet | 8 | \(>3\times 10^{3}\) | \(<0.1\) s |
| AlexNet | 16 | \(>10^{4}\) | \(<0.2\) s |
| AlexNet | 64 | \(>5\times 10^{4}\) | \(<0.8\) s |
| Megatron-LM 1.7B | 32 | \(>3\times 10^{4}\) | \(<0.4\) s |
| Megatron-LM 3.6B | 32 | \(>3\times 10^{4}\) | \(<0.4\) s |
| Megatron-LM 1T | 8 | \(>4\times 10^{3}\) | \(<0.1\) s |

Table 4: Strategy Searching Time of the Solver
Figure 5: Comparison of Topology-Aware and Volume-Based Searching on Different Models
Figure 6: Ratios of the Topology-Aware and Volume-Based Searching
shown in Figure 5, where the blue bars are the communication costs of the volume-based searching results and the orange bars are the communication costs of the topology-aware searching results. We take AlexNet, Megatron-LM 1.7B, and 3.6B as examples. As we can see in Figure 5(a), when there is only one node, the communication costs of the two search results are the same. This is intuitive, since no inter-node communication exists in this case. Our experiments show that TAPS always finds strategies that outperform those found by volume-based searching. In the case of searching strategies for AlexNet on two 8-device nodes, it reduces the communication cost by as much as 85%. Additionally, we merge all the experiments we ran into Figure 6, where each point represents a search for a model under a specific device topology. The \(x\)-axis represents the ratio of the communication volumes of the topology-aware and volume-based strategies; the \(y\)-axis represents the ratio of their communication costs. As we can see, all points in the graph lie on or below the line \(y=1\), which means that topology-aware searching always finds strategies with communication costs no larger than those of volume-based searching. Moreover, topology-aware searching reduces the communication cost by more than 20% in most cases.
## 8 Related Work and Discussion
**Pipeline Parallelism.** Auto-parallelism methods like Alpa, Chimera [12], and PipeDream [13] can generate pipeline parallelism strategies that balance the stages across devices. The search space of TAPS is orthogonal to pipeline parallelism; thus, TAPS can be used to search intra-operator parallelism strategies for each stage of the pipeline.
**Multi-dimensional Tensor Parallelism.** 2D-TP [22] and 3D-TP [2] from Colossal-AI [11] generate intra-operator strategies heuristically. TAPS currently does not support 2D-TP, because 2D-TP uses Broadcast and Reduce to finish the communication, while we use AllGather, AllToAll, and Slice instead. TAPS naturally includes the strategies of 3D-TP, because its partitioning schemes already lie within the multi-dimensional strategy space that TAPS enumerates.
**Overlapping Communication and Computation.** In our implementation, we assume that communication cannot overlap with computation; thus, we can ignore the computation costs. However, in actual training, researchers [6] seek to overlap computation and communication, and it is hard for TAPS to know the overlap degree in advance. A trade-off solution is to manually set the overlap degree for communications of different dimensions. For example, in some cases the communication of data parallelism can be fully overlapped, and then we can set its overlap degree to 1.
**Estimating the Costs Using Regression Models.** Although we assume the bandwidth can be fully utilized, we notice that the effective bandwidth is very low when the size of the transferred data is small. This is because communication incurs overheads such as creating connections and computing average values. Using regression models to capture the variation of effective bandwidth is a promising way to improve TAPS further.
## 9 Conclusion
We present TAPS, a topology-aware intra-operator parallelism strategy searching algorithm that generates fine-grained intra-operator strategies for multi-node environments. TAPS heuristically generates tensor redistribution operations with lower communication costs. TAPS calculates the communication cost of each strategy according to the communication volume and the effective bandwidth, thus producing more reasonable strategies than methods that consider only the communication volume. Based on these communication costs, TAPS formalizes the searching problem as an integer linear programming problem by creating and utilizing an auxiliary graph, and solves it within a few seconds. Compared to volume-based searching algorithms, TAPS can generate strategies with up to 85% lower communication costs in multi-node environments. The source code of TAPS will be publicly available.
|
2303.03393 | Interpretable Architecture Neural Networks for Function Visualization | In many scientific research fields, understanding and visualizing a black-box
function in terms of the effects of all the input variables is of great
importance. Existing visualization tools do not allow one to visualize the
effects of all the input variables simultaneously. Although one can select one
or two of the input variables to visualize via a 2D or 3D plot while holding
other variables fixed, this presents an oversimplified and incomplete picture
of the model. To overcome this shortcoming, we present a new visualization
approach using an interpretable architecture neural network (IANN) to visualize
the effects of all the input variables directly and simultaneously. We propose
two interpretable structures, each of which can be conveniently represented by
a specific IANN, and we discuss a number of possible extensions. We also
provide a Python package to implement our proposed method. The supplemental
materials are available online. | Shengtong Zhang, Daniel W. Apley | 2023-03-03T21:09:30Z | http://arxiv.org/abs/2303.03393v1 | # Interpretable Architecture Neural Networks for Function Visualization
###### Abstract
In many scientific research fields, understanding and visualizing a black-box function in terms of the effects of all the input variables is of great importance. Existing visualization tools do not allow one to visualize the effects of all the input variables simultaneously. Although one can select one or two of the input variables to visualize via a 2D or 3D plot while holding other variables fixed, this presents an oversimplified and incomplete picture of the model. To overcome this shortcoming, we present a new visualization approach using an interpretable architecture neural network (IANN) to visualize the effects of all the input variables directly and simultaneously. We propose two interpretable structures, each of which can be conveniently represented by a specific IANN, and we discuss a number of possible extensions. We also provide a Python package to implement our proposed method. The supplemental materials are available online.
_Keywords:_ Function Visualization; Interpretable Machine Learning; Neural Network
Introduction
Scientists and engineers often want to visualize the output of a black box function in order to understand and interpret the effects of input variables on the output (aka response) variable. For example, most surrogate models of complex computer simulations are notorious for their black box property and lack interpretability. Visualizing a black box model helps to understand how the inputs affect the predicted response variable, whether the predictive relationships seem intuitively reasonable, and how decisions can be made based on the model.
We denote the black box function by \(f(\mathbf{x})\), where \(\mathbf{x}=(x_{1},x_{2},\cdots,x_{d})\) is a \(d\)-dimensional vector of input variables, and \(f(\mathbf{x})\) is the scalar response. \(f(\mathbf{x})\), as a function of \(\mathbf{x}\), is often referred to as a response surface. In reality, many functions estimated from a large data set or a complex computer simulation model have many input variables, which makes visualization of the joint effects of all the input variables challenging.
Of the existing works that aim to interpret black box models, perhaps the most common and the most closely related works are "partial dependence" (PD) plots (Friedman, 2001) and "individual conditional expectation" (ICE) plots (Goldstein et al., 2015). Each PD plot shows the marginal effect of one or two selected input variables by integrating the response variable over the marginal distribution of the omitted input variables, whereas each ICE plot displays a collection of curves that are functions of a single selected input variable of interest, one curve for each fixed combination of the \(d-1\) omitted input variables. Accumulated local effects (ALE) plots (Apley and Zhu, 2020) improve upon PD plots by offering much faster computations and more accurate results when the input variables are highly correlated. Closely related to ICE plots, trellis plots (Becker et al., 1996, 1994) are a series of plots of the response variable as a function of a pair of selected input variables with the omitted variable(s) held fixed, with a separate plot for each fixed combination of the omitted variables. Although trellis plots typically present a clear and fairly complete picture
of \(f(\mathbf{x})\) for the case of \(d=3\), for \(d>3\) they become cumbersome, since there are too many fixed combinations of the \(d-2\) omitted variables to consider. Whereas PD, ICE, trellis and ALE plots focus on visualizing the effects of one or two variables with each plot, and do not present a clear picture of the interactions between the selected and omitted input variables for large \(d\), our approach aims to visualize the joint effects of all input variables. Moreover, Agarwal et al. (2021) combined deep neural networks with generalized additive models to increase interpretablity. The primary distinction is that to enable easy visualization, the neural network architecture in (Agarwal et al., 2021) is restricted to additive functional relationships of the form \(f(\mathbf{x})=f_{1}(x_{1})+f_{2}(x_{2})+\cdots+f_{d}(x_{d})\) and, therefore, cannot be used to represent and visualize interactions between the input variables.
Other less closely related works focus on calculating a variable importance measure for each input variable. Breiman (2001) introduced the idea of permutation-based feature importance in random forest models, and it was later extended to general black box models (Fisher et al., 2019; Konig et al., 2021). Based on game theory concepts, Lundberg and Lee (2017) used Shapley values to compute feature importance. However, the feature importance measures only provide a scalar numerical score of the importance of each input variable, without revealing how the variables affect the response variable.
Instead of singling out the effects of the original variables/features, some approaches aim to visualize the topological structure of \(f(\mathbf{x})\). These methods largely focus on identifying the number and locations of local minima and maxima of \(f\) and, subsequently, on identifying paths in the input space to traverse between the minima and maxima. Gerber et al. (2010) used the Morse-Smale complex to segment the \(d\)-dimensional input variable space into several non-overlapping regions over which \(f\) is monotonic and located local minima and maxima of \(f\). To visualize the topology of \(f\) over each segmented region, they constructed certain regression surfaces and embedded them in 2D as a simplified representation of \(f\) in each region. This approach was applied to nuclear reactor safety analysis
and visualization (Maljovec et al., 2013). Harvey et al. (2012) further used the Reeb graph to shatter the loops created by the Morse-Smale complex and provide a topologically simpler visualization. Although the local effects of input variables can be interpreted from the embedded regression curve for each segment in the Morse-Smale complex, the global effects are difficult to interpret, especially as the number of segments increases. Moreover, the method is not well suited for visualizing global interactions between a set of inputs.
Our visualization approach is based on the following approximate representation of \(f\):
\[f(x_{1},x_{2},\cdots,x_{d})\approx g(x_{j},h(\boldsymbol{x}_{\backslash j})) \quad\text{for some }j\in\{1,2,\cdots,d\}. \tag{1}\]
The notation "\(\boldsymbol{x}_{\backslash j}\)" represents all the input variables excluding \(x_{j}\). For simplicity, we focus on functions \(f\) defined on the Cartesian product \(\prod\limits_{j=1}^{d}I_{j}\), where each \(I_{j}\in\mathbb{R}\) is a closed interval. In Section 2 we show that for each \(j\in\{1,2,\cdots,d\}\) any continuous \(f\) on \([0,1]^{d}\) can be arbitrarily closely approximated by the structure (1) for some continuous functions \(g\) and \(h\) (although for some \(j\) the resulting functions \(g\) and \(h\) will be better behaved than for other \(j\)). We also describe an approach for estimating \(g\) and \(h\) and for selecting the index \(j\) that leads to a good approximation.
For visualizing \(f\) via (1) with a particular input variable \(x_{j}\) singled out, one can construct a 3D plot of \(f\) vs \(x_{j}\) and \(h\). To illustrate, consider the harmonic wave function from physics
\[f(\boldsymbol{x})=x_{1}*\sin[(2\pi/x_{2})*x_{3}+x_{4}]. \tag{2}\]
Here \(f\) is the displacement of a given point on the wave, \(x_{1}\in[0.5,2]\) is the amplitude, \(x_{2}\in[0.5,2]\) is the wavelength, \(x_{3}\in[0,1]\) is the position of that point (e.g., the distance from the source of the wave), and \(x_{4}\in[0,\pi]\) is the phase of the wave. \(f\) can be represented as (1) with \(x_{1}\) singled out, \(f(\boldsymbol{x})=x_{1}*\sin[h(\boldsymbol{x}_{\backslash 1})]\), and \(h(\boldsymbol{x}_{\backslash 1})=(2\pi/x_{2})*x_{3}+x_{4}\). The corresponding 3D visualization plot (\(f\) vs \(x_{1}\) and \(h\)) is shown in Figure 1(a). Such a plot allows one to visualize the manner in which \(f\) depends on the single variable \(x_{j}\), including how \(x_{j}\) interacts with some latent function (\(h\)) that captures the full extent of the \(x_{j}\) interactions with the other \(d-1\) input variables.
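For reference, this decomposition can be written out explicitly; the NumPy sketch below (our illustration) evaluates the harmonic wave through the singled-out form \(g(x_{1},h(\boldsymbol{x}_{\backslash 1}))\) and checks it against Eq. (2).

```python
import numpy as np

# Our sketch of the harmonic wave function (Eq. 2) and its exact
# structure-(1) decomposition with the amplitude x1 singled out.
def f(x1, x2, x3, x4):
    return x1 * np.sin((2 * np.pi / x2) * x3 + x4)

def h(x2, x3, x4):        # latent function h of the omitted inputs
    return (2 * np.pi / x2) * x3 + x4

def g(x1, h_val):         # g(x_j, h)
    return x1 * np.sin(h_val)

rng = np.random.default_rng(0)
x1 = rng.uniform(0.5, 2.0, 1000)    # amplitude
x2 = rng.uniform(0.5, 2.0, 1000)    # wavelength
x3 = rng.uniform(0.0, 1.0, 1000)    # position
x4 = rng.uniform(0.0, np.pi, 1000)  # phase
assert np.allclose(f(x1, x2, x3, x4), g(x1, h(x2, x3, x4)))
```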
In this paper, we visualize the estimated functions mainly using static 3D plots. However, considering that static 3D plots may be easily misinterpreted due to visual misperception (Cleveland and McGill, 1984), we recommend that users plot the functions using whatever visual rendering they prefer, e.g., using software that allows one to interactively rotate the 3D plots to better avoid misinterpreting or missing features of the plot, supplementing the 3D plot with a 2D heat map with contour lines added, and/or using advanced shading and perspectives. In Figure 1, we also include a 2D heatmap of the same function and a shaded perspective plot using the software _rayshader_(Morgan-Wall, 2022).
In order to understand how \(x_{j}\) interacts with the other individual input variables and, more generally, to understand the joint effect of all \(d\) input variables on \(f\), our approach proceeds hierarchically and approximates \(h\) with a structure analogous to (1) with a second input variable singled out, and so on. We describe this approach in Section 3.1 and refer to it as the "original variable hierarchical" (OVH) structure. Moreover, we also consider a more general structure that assumes the hierarchical \(h\) functions are functions of certain linear combinations of the original inputs that can further enhance visualization. We refer to this as the "disjoint active subspace hierarchical" (DASH) structure and describe it in Section 3.2.
Figure 1: An example of a visualization per structure (1) of the harmonic wave function \(f\) with (a) a 3D plot, (b) a 2D heatmap of the same function and (c) a shaded perspective plot. See the online version for color figures.
The remainder of the paper is organized as follows. In Section 2 we present a theorem to show that any continuous function \(f\) can be arbitrarily closely approximated by a model with the more interpretable structure (1), and we describe our _interpretable architecture neural network (IANN)_ model to estimate the functions \(g\) and \(h\) in (1). In Section 3, we develop the OVH and DASH structures to visualize and interpret the joint effects of all the variables by hierarchically decomposing the latent function \(h\), both of which can be conveniently represented with a specific IANN model. We note that for visualizing complex black box simulation response surfaces that are expensive to evaluate, one should first fit a surrogate model (Sacks et al., 1989; Kennedy and O'Hagan, 2000, 2001) to the simulation data and then use the surrogate model as \(f\). In Section 4 we present algorithms for finding the appropriate ordering of the input variables in the hierarchical decompositions. In Section 5, we provide additional numerical examples to illustrate function visualization using our IANN approach. In Section 6, we discuss a number of potential extensions of the IANN approach.
## 2 IANN Structure for the First Hierarchical Level
This section describes how to use the IANN model to approximate \(f\) by structure (1) and how it facilitates the visualization of \(f\). In Section 2.1, we present Theorem 2.1 to show that the proposed structure (1) can approximate any continuous function on \([0,1]^{d}\) for some continuous functions \(g\) and \(h\). To estimate \(g\) and \(h\), we introduce the IANN architecture to approximate \(f\) for each singled out variable \(x_{j}\) in Section 2.2. In Section 2.3, we compare and draw a connection between PD plots, ICE plots, and the IANN visualization plots to show that the latter has advantages that lead to clearer interpretation of \(f\). This decomposition can be used as a stand-alone approach to visualize the effects of each input (as illustrated in Figure 1), if it is repeated in a nonhierarchical manner for each input \(x_{j}\) for \(j=1,2,\cdots,d\). The main information that is missing in this visualization is the
disentanglement of the joint effects of all inputs, which is the subject of Section 3. For this, the decomposition in this section provides the basic building block.
### IANN approximation theorem
The following theorem (see Appendix D for a proof) guarantees that the interpretable structure we proposed in (1) can approximate any continuous function \(f\) on \([0,1]^{d}\).
**Theorem 2.1**.: _Let \(f:[0,1]^{d}\to\mathbb{R}\) be a continuous function. For any \(\epsilon>0\) and \(j\in\{1,2,\cdots,d\}\), there exist continuous functions \(g\) and \(h\) satisfying:_
\[\big{|}f(\mathbf{x})-g(x_{j},h(\mathbf{x}_{\setminus j}))\big{|}<\epsilon,\quad\forall \mathbf{x}=(x_{j},\mathbf{x}_{\setminus j})\in[0,1]^{d}. \tag{3}\]
More generally, the theorem above holds if \(f\) is defined on the Cartesian product of intervals in each \(x_{j}\).
The significance of the theorem is that the structure (1) can be used to approximate arbitrarily closely a continuous \(f\), and this approximation can then be used to visualize \(f\) as in Figure 1. This visualization is related to ICE plots but provides more clarity in the sense that we discuss in Section 2.3. Moreover, since \(h\) is continuous, one can apply Theorem 2.1 again to show that \(h\) can be closely approximated by the same structure (1), and this process can be repeated to yield the hierarchical decomposition described in Section 3.
Notice that Theorem 2.1 applies for each \(j\in\{1,2,\cdots,d\}\), although there is a caveat that should be pointed out. For some \(j\), the corresponding \(h\) may be so complex that it cannot realistically be estimated and visualized in the subsequent levels. However, for all the real functions \(f\) we have considered, for at least one of the input variables \(x_{j}\) we were able to accurately approximate \(f\) with an \(h\) that was well behaved and could subsequently be approximated via the same structure (1) in the next level. Moreover, our algorithm (in Section 4) automatically finds a good hierarchical ordering of the input variables to yield an
accurate approximation of \(f\), and the accuracy of the approximation can be easily quantified as a check to verify whether the function \(f\) in question can indeed be approximated by an IANN structure.
### IANN architecture
To estimate the functions \(g\) and \(h\), we use the customized neural network architecture depicted in Figure 2, which we refer to as an IANN.
The input layer has \(d\) nodes that represent the \(d\) input variables, and the single node in the output layer is the approximation of the original function \(f(\mathbf{x})\). Due to Theorem 2.1 and the universal approximation theorem of neural networks (Hornik et al., 1989; Cybenko, 1989; Hornik, 1991), with enough layers and nodes the IANN in Figure 2 can approximate any continuous function \(f\) by the structure (1) for some functions \(g\) and \(h\) estimated in the process of fitting the IANN model.
The bottle-neck layer in the middle has only two nodes, representing \(x_{j}\) and \(\hat{h}\). \(x_{j}\) is
Figure 2: IANN architecture representing Eq. (1) at the top hierarchical level. The bottleneck layer in the middle consists of only two nodes, which represents \(x_{j}\) and \(\hat{h}(\mathbf{x}_{\setminus j})\).
directly connected to the first input node by an identity map, and \(\hat{h}\) is the estimation of the latent function \(h(\mathbf{x}_{\backslash j})\) in Eq. (1). In the layers to the left of the bottle-neck layer, we connect the \(d-1\) nodes (which represent the remaining input variables \(\mathbf{x}_{\backslash j}\)) to node \(\hat{h}\) by a fully-connected neural network denoted by \(\hat{h}(\mathbf{x}_{\backslash j})\). In the layers to the right of the bottle-neck layer, we use another fully-connected network (denoted by \(\hat{g}\), with inputs \(x_{j}\) and \(\hat{h}\)) to represent the function \(g\) as an approximation of \(f(\mathbf{x})\).
Denote the training data input/response observations by \(\{(x_{1}^{i},x_{2}^{i},\cdots,x_{d}^{i},f^{i}),\ 1\leq i\leq N\}\) and the IANN response prediction by \(\hat{f}^{i}=\hat{g}(x_{j}^{i},\hat{h}(\mathbf{x}_{\backslash j}^{i}))\), where \(N\) is the number of observations in the training set. The training data were generated using the customized Latin Hypercube sampling described in the Appendix B. Note that even when \(f\) represents a simulation response surface that is expensive to evaluate, \(N\) can be chosen quite large, because \(f\) is first replaced by a surrogate model fit to the simulation data, and generating response observations from the surrogate model is inexpensive. To fit the IANN (i.e., estimate the weights and biases) to the training data, we use standard squared error loss:
\[Loss=\frac{1}{N}\sum_{i=1}^{N}\left|f^{i}-\hat{f}^{i}\right|^{2}. \tag{4}\]
After fitting the IANN, the resulting approximation of \(f\) can be visualized by plotting \(\hat{g}\) as a function of \(x_{j}\) and \(\hat{h}\), as in Figure 1. The fitted IANN serves as an approximation of \(f\). To assess whether the approximation is adequate, we generate a large random sample in the input space to serve as a test set and compute the test \(r^{2}\) by comparing the IANN test predictions \(\hat{f}\) with the actual function values \(f\) at the test inputs.
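The authors provide a Python package for IANN; purely as an illustration (not their released code), a bottleneck network in the spirit of Figure 2 could be sketched in PyTorch as follows, with \(x_{j}\) carried through the bottleneck by an identity connection and trained with the squared-error loss of Eq. (4).

```python
import torch
import torch.nn as nn

class IANN(nn.Module):
    """Our sketch of the Figure 2 architecture: f_hat = g(x_j, h(x_rest))."""
    def __init__(self, d, j, width=64):
        super().__init__()
        self.j = j
        # h: fully connected net mapping the d-1 remaining inputs to one node.
        self.h = nn.Sequential(nn.Linear(d - 1, width), nn.ReLU(),
                               nn.Linear(width, width), nn.ReLU(),
                               nn.Linear(width, 1))
        # g: fully connected net from the 2-node bottleneck (x_j, h) to f_hat.
        self.g = nn.Sequential(nn.Linear(2, width), nn.ReLU(),
                               nn.Linear(width, width), nn.ReLU(),
                               nn.Linear(width, 1))

    def forward(self, x):
        xj = x[:, self.j:self.j + 1]                          # identity path
        x_rest = torch.cat([x[:, :self.j], x[:, self.j + 1:]], dim=1)
        bottleneck = torch.cat([xj, self.h(x_rest)], dim=1)   # (x_j, h_hat)
        return self.g(bottleneck)

model = IANN(d=4, j=0)
loss_fn = nn.MSELoss()  # squared-error loss of Eq. (4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Training then iterates over (x, f) minibatches, minimizing
# loss_fn(model(x).squeeze(-1), f) with optimizer steps.
```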
Returning to the harmonic wave function defined in Eq. (2), we (nonhierarchically) single out each of the four input variables one-by-one to serve as \(x_{j}\) and fit the IANN architecture in Figure 2. The resulting four 3D plots (each of \(g(x_{j},h)\) as a function of \(x_{j}\) and \(h\) for the four different \(x_{j}\)) are displayed in Figure 3. The top left visualization plot has the highest test \(r^{2}\) (99.8%), which suggests that the harmonic wave function can be well approximated by the structure (1) after singling out the amplitude variable \(x_{1}\)
This is consistent with the structure of \(f\) in (2) (which in practice would typically be a more complex and less directly interpretable function, e.g., a surrogate model fit to some complex computer simulation output), since (2) can be written as \(f(\boldsymbol{x})=x_{1}*\sin\left[h(\boldsymbol{x}_{\setminus 1})\right]\) with \(h(\boldsymbol{x}_{\setminus 1})=(2\pi/x_{2})*x_{3}+x_{4}\). From the top-left plot it is clear that the effect of \(x_{1}\) is linear for any fixed \(\boldsymbol{x}_{\setminus 1}\), although the effect of \(x_{1}\) depends strongly on the value of \(\boldsymbol{x}_{\setminus 1}\). For example, for some \(h(\boldsymbol{x}_{\setminus 1})\) the effect of \(x_{1}\) is positive (positive slope for the linear trend), and for other values the effect is negative. Moreover, the slope varies periodically as a sinusoidal function of \(h(\boldsymbol{x}_{\setminus 1})\). Evidently, the IANN representation of \(h(\boldsymbol{x}_{\setminus 1})\) captures the actual function \((2\pi/x_{2})*x_{3}+x_{4}\) quite well, which is more clear from the hierarchical IANN visualization presented later in Section 5.2. The top-left plot in Figure 3 also illustrates how the approach can be used to understand interactions between inputs, as \(x_{1}\) and \(h(\boldsymbol{x}_{\setminus 1})\) clearly have a strong interaction, by which the effect of \(x_{1}\) changes from positive to negative and vice-versa as \(h(\boldsymbol{x}_{\setminus 1})\) varies.
For the other three plots, the relatively low test \(r^{2}\) values suggest that singling out those variables in the IANN cannot approximate the structure (1) well. Therefore, those plots are less reliable and should be used with caution when interpreting the effects of inputs. In this situation, we recommend using only the top-left plot of \(g(x_{1},h(\boldsymbol{x}_{\setminus 1}))\) (with test \(r^{2}=99.8\%\)) at the top level and then using the hierarchical decomposition in Section 3 to visualize the effects of \(\{x_{2},x_{3},x_{4}\}\). We do this in Section 5.2 in a continuation of this example. As pointed out after Theorem 2.1, for a particular \(x_{j}\) the latent function \(h\) may be too complex to estimate via a neural network with limited number of layers and nodes (and also too complex to easily visualize in subsequent levels), which would be revealed by a low associated test \(r^{2}\). But for all real examples that we have considered, the 3D visualization plot of \(g(x_{j},h(\boldsymbol{x}_{\setminus j}))\) for at least one \(x_{j}\) has sufficiently high test \(r^{2}\) to render the visualization reliable. If multiple plots have high test \(r^{2}\), users may rely on any or all of them to interpret the effects of inputs.
### A connection to PD and ICE plots
The most popular method for visualizing the effects of the input variables is partial dependence (PD) plots, which were first introduced in Friedman (2001). Later, Goldstein et al. (2015) proposed ICE plots, which enhance PD plots by displaying a collection of curves for each fixed combination of the omitted input variables. Here we compare and draw a connection between IANN, PD, and ICE plots using the harmonic wave function to illustrate.
Figure 4(a) shows both the PD plot and the ICE plot for the variable \(x_{1}\) using the open-source Python package _scikit-learn_ (Pedregosa et al., 2011; Buitinck et al., 2013). Notice that the PD plot, represented by the dashed line, is the average of all the curves in the ICE plots. There is no tight, direct connection between our IANN plots and accumulated local effects (ALE) plots. Their only connection is via the connection between IANN plots and PD plots, and recognizing that ALE plots are intended to produce something similar to PD plots, albeit in a manner that is much more computationally efficient and less prone to problems when the inputs are highly correlated. Since the inputs have little correlation in the harmonic wave example, the ALE plot (omitted for brevity) is very similar to the PD plot in Figure 4(a). The ICE plots for this example are all straight lines with different slopes, which correctly suggests that the effect of \(x_{1}\) is linear for fixed \(\mathbf{x}_{\backslash 1}\) and that \(x_{1}\) interacts with the other variables (since the slopes vary). In comparison, Figure 4(b) shows the IANN plot of \(g(x_{1},h(\mathbf{x}_{\backslash 1}))\) for the same example. Note that each individual curve in the ICE plot represents \(f\) as a function of \(x_{1}\) for a fixed \(\mathbf{x}_{\backslash 1}\). The IANN plot can be viewed as smoothly piecing together all of the ICE plot curves to create a 3D surface that serves as a more structured and interpretable visualization of the effect of \(x_{1}\) on \(f\) and of how \(x_{1}\) interacts with some function (\(h\)) of the remaining variables \(\mathbf{x}_{\backslash 1}\). Three individual curves for three fixed \(h(\mathbf{x}_{\backslash 1})\) values are shown as dashed curves in Figure 4(b), and the three corresponding ICE plot curves for the same three \(\mathbf{x}_{\backslash 1}\) values are indicated in Figure 4(a) by the connecting arrows. Compared to the 2D ICE plot visualization, the 3D IANN visualization provides a clearer picture of the function \(f\) in this case.
Figure 4: Connection between (a) existing methods (ICE and PD plots) and (b) our IANN for visualizing the effects of the amplitude variable \(x_{1}\) in the harmonic wave function. The IANN visualization can be viewed as smoothly piecing together the individual curves in the ICE plots to create a more interpretable 3D surface. The three dashed curves in (b) correspond to the three ICE plot curves indicated by connecting arrows in (a).
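A plot in the style of Figure 4(a) can be reproduced along the following lines; this is only a sketch, in which the surrogate model, its hyperparameters, and the shift that keeps the wavelength input away from zero (the exact input ranges for Eq. (2) are not restated here) are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 4))          # samples of (x1, x2, x3, x4) in [0, 1]^4
# harmonic wave f = x1 * sin((2*pi/x2)*x3 + x4), with x2 shifted away from zero
y = X[:, 0] * np.sin(2 * np.pi / (X[:, 1] + 0.5) * X[:, 2] + X[:, 3])

est = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

# kind="both" overlays the ICE curves with their average (the PD curve)
PartialDependenceDisplay.from_estimator(est, X, features=[0], kind="both")
```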
## 3 Hierarchical IANN for Visualizing the Joint Effects of All Inputs
In addition to being used as a stand-alone visualization of the effect of a selected input \(x_{j}\), the IANN structure described in Section 2 can be used in a hierarchical manner to similarly decompose and visualize \(h\), and so on. We present two versions of hierarchical decomposition, each of which is intended to visualize the joint effects of all \(d\) inputs. In Section 3.1, we introduce the original variable hierarchical (OVH) structure, and in Section 3.2 we introduce the disjoint active subspace hierarchical (DASH) structure, which is in some sense a generalization of the OVH structure that can simplify visualization when the various functions involved are functions of certain linear combinations of the inputs.
### Original variable hierarchical (OVH) structure
In the structure (1), \(h\) is a function of all the inputs in \(\boldsymbol{x}_{\backslash j}\). To visualize how \(h\) depends on \(\boldsymbol{x}_{\backslash j}\), we can decompose the functions hierarchically (adding subscripts to \(g\) and \(h\) to indicate the hierarchical level) via
\[f(\boldsymbol{x}) \approx g_{1}(x_{j_{1}},h_{1}(\boldsymbol{x}_{\backslash j_{1}}))\] \[h_{i-1}(\boldsymbol{x}_{\backslash(j_{1},\cdots,j_{i-1})}) \approx g_{i}(x_{j_{i}},h_{i}(\boldsymbol{x}_{\backslash(j_{1}, \cdots,j_{i})})),\quad i=2,\cdots,d-1, \tag{5}\]
with \(h_{d-1}(x_{j_{d}})\stackrel{{\text{\tiny def}}}{{=}}x_{j_{d}}\). Here, \(\{j_{1},j_{2},\cdots,j_{d}\}\) represent a permutation ordering of the input indices \(\{1,2,\cdots,d\}\) that will be determined as a preprocessing step prior to fitting the hierarchical IANN. The algorithm for determining the ordering of the inputs is described in Section 4. Similar to the notation \(\boldsymbol{x}_{\backslash j}\), the notation \(\boldsymbol{x}_{\backslash(j_{1},\cdots,j_{i})}\) represents the input variables excluding \(\{x_{j_{1}},\cdots,x_{j_{i}}\}\). Each function \(g_{i}\) is the approximation of the function \(h_{i-1}\) with two inputs: \(x_{j_{i}}\) and \(h_{i}(\boldsymbol{x}_{\backslash(j_{1},\cdots,j_{i})})\). Each function \(h_{i}\) depends on the order \(\{j_{1},j_{2},\cdots,j_{i}\}\) of the inputs selected in the prior iterations, although we omit this in the notation for
simplicity. Since the function \(h\) in Theorem 2.1 is also continuous, we can repeatedly apply Theorem 2.1 to the function \(h_{i-1}\) at each level to justify the hierarchical decomposition (5).
We refer to the first equation in (5) as the _Level 1 representation_ and the corresponding 3D plot or 2D heatmap (\(f(\mathbf{x})\) as a function of \(x_{j_{1}}\) and \(h_{1}(\mathbf{x}_{\setminus j_{1}})\)) as the _Level 1 plot_. Similarly, we refer to the \(i^{th}\) equation in (5) as the _Level \(i\) representation_ and its corresponding 3D plot or 2D heatmap (\(h_{i-1}\) as a function of \(x_{j_{i}}\) and \(h_{i}\)) as the _Level \(i\) plot_, for \(i=2,\cdots,d-1\). This approach produces a total of \(d-1\) hierarchical visualization plots that can collectively be used to understand the joint effects of all \(d\) inputs.
To estimate all the \(h_{i}\) functions simultaneously, we use an IANN architecture with multiple bottle-neck layers as illustrated in Figure 5 for the case of \(d=5\) inputs. The dashed arrows represent the identity map. To simplify the notation, we omit the hat symbols on the \(h_{i}\)'s in the IANN architecture. In Figure 5, the IANN architecture consists of 5 input variables and 4 levels in total. The fourth level is just a fully connected neural network with two input variables and an output that represents the (estimated) function \(h_{3}(x_{j_{4}},x_{j_{5}})\). Likewise, the third level has a bottleneck input layer with two inputs (\(x_{j_{3}}\) and \(h_{3}\)) and \(h_{2}\) as its output; the second level has a bottleneck input layer with two inputs (\(x_{j_{2}}\) and \(h_{2}\)) and \(h_{1}\) as its output; and the first level has \(x_{j_{1}}\) and \(h_{1}\) as its two bottleneck inputs and the estimated/approximated \(f\) as its output.
Similar to the loss function defined in (4), we use \(L_{2}\) loss to minimize the mean squared error between the IANN output \(\hat{f}\) and the original function \(f\). By estimating the original function \(f(\mathbf{x})\) and all the latent functions \(h_{i}\) simultaneously, the error is less prone to accumulate through the different levels when \(d\) is larger.
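The following PyTorch sketch illustrates one way to realize the OVH IANN of Figure 5 for a given input ordering; the layer widths and the `mlp` helper are illustrative assumptions, not the exact architecture we used.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, width=64):
    return nn.Sequential(nn.Linear(d_in, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, d_out))

class OVHIANN(nn.Module):
    """OVH IANN of Eq. (5): f ~ g_1(x_{j_1}, h_1), h_{i-1} ~ g_i(x_{j_i}, h_i),
    with h_{d-1}(x_{j_d}) = x_{j_d}."""
    def __init__(self, order, width=64):
        super().__init__()
        self.order = order                          # permutation (j_1, ..., j_d), 0-indexed
        self.g = nn.ModuleList(mlp(2, 1, width) for _ in range(len(order) - 1))

    def forward(self, x):                           # x: (batch, d)
        cols = [x[:, j:j + 1] for j in self.order]
        h = cols[-1]                                # deepest latent: h_{d-1} = x_{j_d}
        levels = []
        for i in reversed(range(len(self.g))):      # evaluate g_{d-1}, ..., g_1 in turn
            h = self.g[i](torch.cat([cols[i], h], dim=1))
            levels.append(h)
        return h, levels                            # h is f_hat; levels ends with f_hat
```

Fitting then minimizes the mean squared error between the final output and \(f\), which estimates all the latent \(h_{i}\)'s simultaneously as intermediate activations.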
To illustrate the OVH IANN approach, consider the following example:
\[f(\mathbf{x})=(5x_{1}+x_{2}+x_{3}+x_{4}+x_{5}-4.5)^{2},\qquad\mathbf{x}\in[0,1]^{5}. \tag{6}\]
In this case, the input ordering \(\{j_{1},j_{2},\cdots,j_{5}\}\) that satisfies the structure (5) is not unique. In fact, any permutation of the input variables satisfies (5). For example, if we choose
\(j_{1}=2\), the response can be represented as:
\[f(\mathbf{x})=g_{1}(x_{2},h_{1}(\mathbf{x}_{\backslash 2}))=(x_{2}+h_{1}(\mathbf{x}_{ \backslash 2}))^{2}, \tag{7}\]
where \(h_{1}(\mathbf{x}_{\backslash 2})=5x_{1}+x_{3}+x_{4}+x_{5}-4.5\). Following the same logic, \(h_{1}\) can be decomposed similarly regardless of which remaining input is chosen as \(x_{j_{2}}\), and so on. Note that the above arguments would hold if any of the five inputs were selected as \(x_{j_{1}}\). In the following, we use the ordering \((j_{1},j_{2},j_{3},j_{4},j_{5})=(1,5,4,2,3)\), which was determined by the algorithm described in Section 4. The 3D visualization plots are shown in Figure 6. The test \(r^{2}\) was 99.97%, indicating that the OVH structure (5) provides a good approximation of \(f\).
Figure 5: IANN architecture for the OVH structure for the case of \(d=5\) input variables.
The following are salient points taken from the IANN visualization plots in Figure 6. From the Level 1 plot the function \(f(\mathbf{x})\) appears quadratic in \(x_{1}\) when we hold \(h_{1}\) fixed, which is consistent with the true function \(f(\mathbf{x})=(5x_{1}-h_{1})^{2}\) if we take \(h_{1}=-(x_{2}+x_{3}+x_{4}+x_{5}-4.5)\). The functions \(g_{1}\) and \(h_{1}\) in (7) are not unique, of course, since we can incorporate an additive and/or multiplicative constant into \(h_{1}\) and modify \(g_{1}\) accordingly without changing the function \(g_{1}(x_{1},h_{1}(\mathbf{x}_{\backslash 1}))\). This, however, does not change the interpretation of the joint effects of the input variables. Further regarding the interpretation of the Level 1 plot, the function \(f\) reaches its minimum at a value of \(x_{1}\) that increases linearly as \(h_{1}\) increases.
In order to understand the effects of the other inputs \(\mathbf{x}_{\backslash 1}\) on \(f\), we must discern two things from the plots in Figure 6: (i) the effect of \(h_{1}\) on \(f\), and (ii) the effect of the variables in \(\mathbf{x}_{\backslash 1}\) on \(h_{1}\). Regarding the former, from the Level 1 plot, for fixed \(x_{1}\) the function \(f\) appears roughly quadratic in \(h_{1}(\mathbf{x}_{\backslash 1})\). Regarding the latter, one must view the subsequent level plots. From the Level 2, 3, and 4 plots in Figure 6, each function \(h_{i}\) is approximately linear in its two arguments \(x_{j_{i+1}}\) and \(h_{i+1}\), which means that \(h_{1}\) is linear in \(\boldsymbol{x}_{\backslash 1}\).

Figure 6: IANN visualization plots for the example \(f\) in Eq. (6). The plots are to be read in numerical order of the levels, from the top left to the bottom right.
To further illustrate how the IANN level plots can help understand the behavior of \(f\), suppose one wanted to select the values of \(\boldsymbol{x}\) that maximize \(f\). From the Level 1 plot, this occurs at the two corner points \((x_{1},h_{1})=(0,1.5)\) or \((1,-1)\). If we focus on the former, from the Level 2 plot, \(h_{1}=1.5\) occurs for \((x_{5},h_{2})=(0,2)\). In turn, from the Level 3 plot, \(h_{2}=2\) occurs for \((x_{4},h_{3})=(0,-0.5)\). Finally, from the Level 4 plot, \(h_{3}=-0.5\) occurs for \((x_{2},x_{3})=(0,0)\). Therefore, the maximum of \(f\) at \((x_{1},h_{1})=(0,1.5)\) corresponds to \(x_{1}=x_{2}=\cdots=x_{5}=0\). Following the same procedure for the other maximum at \((x_{1},h_{1})=(1,-1)\), we find that this corresponds to \(x_{1}=x_{2}=\cdots=x_{5}=1\). Both of these cases correspond to the true maxima of Eq. (6).
Moreover, since \(h_{1}(\boldsymbol{x}_{\backslash 1})\) is just a linear function of the remaining variables \(x_{2},x_{3},x_{4},x_{5}\) from the plots for Levels 2, 3, and 4, and since \(f(\boldsymbol{x})\) is quadratic in \(h_{1}(\boldsymbol{x}_{\backslash 1})\) and in \(x_{1}\), we can conclude that \(f\) is quadratic in each variable.
### Disjoint active subspace hierarchical (DASH) structure
In this section, we propose an alternative hierarchical structure that can be viewed as a generalization of the OVH structure. The DASH structure assumes the hierarchical functions \(h_{i}\)'s are functions of certain disjoint linear combinations of the input variables:
\[f(\boldsymbol{x}) \approx g_{1}(v_{1},h_{1}(\boldsymbol{v}_{\backslash 1})),\] \[h_{i-1}(\boldsymbol{v}_{\backslash(1,\cdots,i-1)}) \approx g_{i}(v_{i},h_{i}(\boldsymbol{v}_{\backslash(1,\cdots,i)})), \qquad i=2,\cdots,p-1, \tag{8}\] \[\text{where}\quad v_{i} =\boldsymbol{\beta}_{i}^{T}\boldsymbol{x}_{J_{i}},\quad i=1,2, \cdots,p\leq d.\]
Here \(\{J_{1},\cdots,J_{p}\}\) are disjoint index sets, i.e., \(J_{i}\bigcap J_{j}=\emptyset,\ \forall 1\leq i\neq j\leq p\), and \(\bigcup\limits_{i=1}^{p}J_{i}=\{1,2,\cdots,d\}\), and \(\boldsymbol{x}_{J_{i}}\) denotes the input variables with indices in \(J_{i}\). Note that the DASH
structure (8) reduces to the OVH structure when \(p=d\).
Theorem 2.1 also applies to structure (8) if we substitute the \(x\)'s with the \(v\)'s since the disjoint linear combinations of inputs (the \(v\)'s) take values in closed intervals. Similar to the proof for the OVH structure, one can repeatedly apply the theorem to \(f\) and the \(h_{i}\)'s to show that any continuous function \(f\) on \([0,1]^{d}\) can be arbitrarily closely approximated by the structure (8) for some continuous functions \(g_{i}\) and \(h_{i}\). The algorithms we describe in Appendix A automatically find the number of disjoint linear combinations \(p\), the \(v\)'s in (8), and the order of the \(p\) disjoint linear combinations \((v_{1},v_{2},\cdots,v_{p})\) that gives a good approximation of \(f\) by (8).
Similar to what was done for the OVH structure, we can make a series of 3D plots to visualize \(f\) in (8) as a function of the \(v\)'s, noting that the \(v\)'s are easy to understand as functions of the inputs, since they are disjoint linear combinations of the inputs. More specifically, in the Level 1 plot we visualize \(f\) as a function of \(v_{j_{1}}\) and \(h_{1}\), and then further visualize \(h_{i-1}\) as a function of \(v_{j_{i}}\) and \(h_{i}\) at the \(i^{th}\) level for \(i=2,3,\cdots,p-1\). For many real examples that we have considered, Eq. (8) with \(p\ll d\) provides a close approximation of \(f\). The advantage of this is that we reduce the number of hierarchical 3D visualization plots from \(d-1\) to \(p-1\), which substantially simplifies the interpretation when \(p\ll d\).
To represent (8) with an IANN architecture, we add one linear layer in front of the first layer in the OVH structure to learn the \(p\) disjoint linear combinations \((v_{1},v_{2},\cdots,v_{p})\) and the underlying disjoint input groups, as illustrated in Figure 7 for the case of \(p=5\). Each linear combination \(v_{i}=\boldsymbol{\beta}_{i}^{T}\boldsymbol{x}_{J_{i}}\) is represented by a single node in this layer with no bias and a linear activation function. The coefficients \(\boldsymbol{\beta}_{i}\) in (8) are the estimated weights in this linear layer. The remainder of the architecture is the same as in the IANN for the OVH structure shown in Figure 5, except that the inputs to the subsequent layers are the disjoint linear combinations. For visualization, we show the coefficients \(\boldsymbol{\beta}_{i}\) below the 3D visualization plot of \(h_{i-1}\) as a function of \(v_{j_{i}}\) and \(h_{i}\).
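A minimal sketch of this extra linear layer is given below, assuming the disjoint groups \(J_{i}\) have already been determined (e.g., by the Appendix A algorithm); its output \((v_{1},\ldots,v_{p})\) would then feed the OVH-style levels of the architecture.

```python
import torch
import torch.nn as nn

class DisjointLinearCombos(nn.Module):
    """First DASH layer: v_i = beta_i^T x_{J_i} for disjoint index sets J_i,
    each v_i a single node with no bias and a linear activation."""
    def __init__(self, groups):                 # e.g. groups = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
        super().__init__()
        self.groups = [torch.as_tensor(J) for J in groups]
        self.betas = nn.ParameterList(nn.Parameter(torch.randn(len(J)))
                                      for J in groups)

    def forward(self, x):                       # x: (batch, d)
        vs = [x[:, J] @ beta for J, beta in zip(self.groups, self.betas)]
        return torch.stack(vs, dim=1)           # (batch, p), input to the OVH-style levels
```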
To illustrate, consider the function:
\[f(\mathbf{x})=\left[7\exp\left(-4(1.5x_{1}+x_{2}-2x_{3})^{2}\right)+2x_{4}-1.5x_{5}+0.7x_{6}-1.5\right](x_{7}-1.5x_{8}+0.7x_{9}-0.3)^{2}, \tag{9}\]
for \(\mathbf{x}\in[0,1]^{9}\), which can be represented as (8) with \(p=3\) and
\[\begin{array}{l}v_{1}=x_{7}-1.5x_{8}+0.7x_{9},\\ v_{2}=2x_{4}-1.5x_{5}+0.7x_{6},\\ v_{3}=1.5x_{1}+x_{2}-2x_{3},\quad\text{and}\\ f(\mathbf{x})=(v_{1}-0.3)^{2}*h_{1}(v_{2},v_{3}),\end{array} \tag{10}\]
where \(h_{1}(v_{2},v_{3})=\left[7\exp\left(-4v_{3}^{2}\right)+v_{2}-1.5\right]\). We fit the DASH structure IANN to data from this function (the three disjoint linear combinations were automatically determined using the algorithm described in Appendix A and were not prespecified), the results of which are shown in Figure 8. The test \(r^{2}\) was 99.98%, indicating that the DASH structure (8) provides a good approximation of \(f\).
Figure 7: Illustration of the IANN architecture for the DASH structure with \(p=5\).

Rather than using eight 3D plots to visualize and interpret the function, as the OVH structure would require, we only need two 3D plots to visualize the effects of all the input variables with the DASH structure in Figure 8. We show the coefficients \(\beta_{i,j}\) of the inputs \(x_{j}\) in each linear combination \(v_{i}\) below each IANN visualization plot. From Figure 8, we see that the estimated coefficients in each \(\boldsymbol{\beta}_{i}\) have ratios that are similar to the true ratios defined in Eq. (10). For example, the true \(v_{1}=x_{7}-1.5x_{8}+0.7x_{9}\), and the estimated \(v_{1}\) from fitting the IANN is \(v_{1}=0.667x_{7}-1.001x_{8}+0.467x_{9}\), which is virtually identical except for a constant multiplicative factor.
Figure 8: DASH structure IANN visualization of \(f\) in Eq. (9). The solid curves in the left figure are \(f\) vs \(v_{1}\) for fixed \(h_{1}=-1.5,-1.0\), and \(-0.5\).

Regarding interpreting the plots, first, from the top plot in Figure 8, the function \(f(\boldsymbol{x})\) is non-monotonic (roughly quadratic) in \(v_{1}\) when \(h_{1}\) is fixed. Since \(v_{1}\) is a linear combination of the three input variables \(x_{7},x_{8},x_{9}\), we can conclude that each of these input variables has a non-monotonic effect on \(f(\boldsymbol{x})\), at least for some fixed values of the other two variables in this linear combination. From the same plot, when \(v_{1}\) is fixed, the function \(f(\boldsymbol{x})\) increases monotonically and nearly linearly as \(h_{1}\) increases. To visualize \(h_{1}\), the Level 2 plot suggests that \(h_{1}\) is monotonic and nearly linear in \(v_{2}\) for each fixed \(v_{3}\) but non-monotonic in \(v_{3}\) for each fixed \(v_{2}\). One conclusion from this is that \(f\) is nearly linear in \(x_{4}\), \(x_{5}\), and \(x_{6}\) (the inputs involved in \(v_{2}\)) with fixed \(v_{1}\) but nonlinear in the other input variables. Notice that the slight concave curvature in \(f\) with respect to \(h_{1}\) tends to cancel the slight convex curvature in \(h_{1}\) with respect to \(v_{2}\), so that \(f\) is nearly linear in \(v_{2}\) for \(v_{1}\) fixed.
Moreover, interactions between input variables can also be discerned from the plots. Understanding interactions (as opposed to additivity) between inputs is often important when interpreting models. We can tell whether an input variable in \(v_{1}\) interacts with the remaining variables from the Level 1 IANN plot. Since the remaining variables reside either in \(v_{1}\) or \(h_{1}\), we consider two types of interactions: (i) interaction between variables in \(v_{1}\) and in \(h_{1}\), and (ii) interaction for variables within \(v_{1}\). If neither interaction exists for that input variable, we can conclude that it has no interaction with all the remaining variables.
Regarding the interaction between \(v_{1}\) and \(h_{1}\), if the effect of \(v_{1}\) on \(f\) (e.g., the solid curves in the top left plot in Figure 8) only changes by an additive constant as \(h_{1}\) changes, then by definition, there is no interaction between \(v_{1}\) and \(h_{1}\) and thus no interaction between the inputs \(\mathbf{x}_{J_{1}}\) and the inputs \(\mathbf{x}_{\setminus J_{1}}\). Regarding the interactions within \(v_{1}\), if \(v_{1}\) has only a linear effect on \(f\) for each value of \(h_{1}\), the input variables within \(v_{1}\) do not interact with each other. Conversely, if \(v_{1}\) has a nonlinear effect on \(f\) for at least some \(h_{1}\) values, then there are interactions within \(v_{1}\) based on the following argument. In this case, for the \(h_{1}\) values for which the effect of \(v_{1}\) is nonlinear, \(\frac{\partial f(v_{1},h_{1})}{\partial v_{1}}=\tilde{g}(v_{1},h_{1})\) for some function \(\tilde{g}\) that varies as \(v_{1}\) varies, in which case the input variables \(\mathbf{x}_{J_{1}}\) having nonzero \(\beta\) coefficients all interact with each other.
From the IANN visualization in Figure 8, we see from the three solid curves in the Level 1 plot that there is a strong interaction between \(v_{1}\) and \(h_{1}\). Since both \(v_{2}\) and \(v_{3}\) have an effect on \(h_{1}\) from the second level plot, we conclude that the inputs in \(\mathbf{x}_{J_{1}}\) interact with those in \(\mathbf{x}_{J_{2}}\) and \(\mathbf{x}_{J_{3}}\). Moreover, since the solid curves in the Level 1 plot are nonlinear, the inputs in \(\mathbf{x}_{J_{1}}\) interact with each other.
As another example of how the IANN plots can be used, consider the so-called robust parameter design (RPD) problem (Taguchi, 1986; Robinson et al., 2004), in which some of the input variables are controllable system/product design variables and others are noise variables that vary uncontrollably during system operation or product usage. The RPD goal is to find design values for the controllable inputs such that the function \(f\) is least sensitive to variation in the uncontrollable inputs. To illustrate, suppose the inputs \(\mathbf{x}_{J_{1}}=\{x_{7},x_{8},x_{9}\}\) comprising \(v_{1}\) are noise variables, and the other inputs are controllable design variables. From the Level 1 plot in Figure 8, we see that \(f\) depends least strongly on \(v_{1}\) when \(h_{1}\approx-1.0\). Consequently, if the inputs \(\mathbf{x}_{\setminus J_{1}}\) on which \(h_{1}\) depends are selected so that \(h_{1}\approx-1.0\) (e.g., by making use of the Level 2 plot in Figure 8), then \(f\) will be least sensitive to variation in the noise variables \(\mathbf{x}_{J_{1}}\).
## 4 IANN Algorithm Details
This section describes how to determine the order of the input variables in the hierarchical IANN decomposition (i.e., which input variable appears at each level) and other details of the algorithm. Here, we mainly present algorithms for this purpose for the OVH IANN structures. The algorithms for the DASH IANN structures, the details of which can be found in Appendix A, are similar.
To best approximate \(f\) with the OVH structure and make the visualization plots more reliable for interpretation, we determine the order of input variables sequentially by selecting the variable that results in the best approximation accuracy at each level in the OVH structure (5). To see the intuition behind our approach for this, notice that if \(f(\mathbf{x})=g_{1}(x_{j_{1}},h_{1}(\mathbf{x}_{\setminus j_{1}}))\) in the first level of (5), then by the chain rule,
\[\frac{\partial f}{\partial\mathbf{x}_{\setminus j_{1}}}(\mathbf{x})=\frac{\partial g_ {1}}{\partial\mathbf{x}_{\setminus j_{1}}}(\mathbf{x})=\frac{\partial g_{1}}{\partial h _{1}}(\mathbf{x})\nabla h_{1}(\mathbf{x}_{\setminus j_{1}}), \tag{11}\]
where \(\nabla\) denotes the gradient of a function with respect to its input arguments. Since
\(\frac{\partial g_{1}}{\partial h_{1}}(\mathbf{x})\) is a scalar, if we keep \(\mathbf{x}_{\setminus j_{1}}\) fixed and consider \(\frac{\partial f}{\partial\mathbf{x}_{\setminus j_{1}}}(\mathbf{x})\in\mathbb{R}^{d-1}\) for many different \(x_{j_{1}}\) values, they are all approximately collinear and differ only in their magnitude and sign. Therefore, as candidates for \(x_{j_{1}}\), we consider the inputs whose gradients are the most collinear, using principal component analysis (PCA) to measure the extent of collinearity.
We note that although the functions \(g\) and \(h\) from the IANN structure appear in Eq. (11) (and in Eq. (16) below), they are not used in the algorithm for determining the ordering of the inputs in the IANN, which is a pre-processing step that uses the gradient of \(f\) directly and requires no IANN fitting. Eqs. (11) and (16) are only used to justify the rationale behind our input ordering approach. We also note that finding the ordering of inputs \(\mathbf{x}\) requires the function \(f\) to be differentiable since we need to compute the gradient of \(f(\mathbf{x})\) with respect to \(\mathbf{x}\).
The following describes the details of this gradient projection algorithm for Level 1. **Gradient projection algorithm:** (repeat steps 1-3 for each input variable \(x_{j},\ j=1,2,\cdots,d\))
1. For input variable \(x_{j}\), draw \(N_{\setminus j}\) Latin hypercube design (LHD) samples in the \(\mathbf{x}_{\setminus j}\) space, denoted by \(\{\mathbf{x}_{\setminus j}^{l};\ l=1,\cdots,N_{\setminus j}\}\). Then draw \(N_{j}\) evenly spaced points spanning the range of \(x_{j}\) in the \(x_{j}\) space, denoted by \(\{x_{j}^{m};\ m=1,\cdots,N_{j}\}\).
2. For \(l=1,2,\cdots,N_{\setminus j}\), calculate the gradient vectors \(\mathbf{a}_{j}^{m,l}=\nabla_{\mathbf{x}_{\setminus j}}f(x_{j}^{m},\mathbf{x}_{\setminus j }^{l})\in\mathbb{R}^{d-1},\ m=1,2,\cdots,N_{j}\), and stack them into the \(N_{j}\times(d-1)\) matrix: \[A_{j}^{l}=\begin{bmatrix}\left(\mathbf{a}_{j}^{1,l}\right)^{T}\\ \vdots\\ \left(\mathbf{a}_{j}^{N_{j},l}\right)^{T}\end{bmatrix},\quad\text{for }l=1,2, \cdots,N_{\setminus j}.\] (12)
3. Use PCA to find the eigenvector \((\mathbf{z}_{j}^{l})\) corresponding to the largest eigenvalue of \(\left(A_{j}^{l}\right)^{T}A_{j}^{l}\) for each \(l=1,2,\cdots,N_{\setminus j}\), and normalize the eigenvector such that \(\left|\left|\mathbf{z}_{j}^{l}\right|\right|=1\). Compute the error between the gradient vectors \(\mathbf{a}_{j}^{m,l}\) and their projections onto
\(\mathbf{z}_{j}^{l}\), defined as \[E_{j}^{l}=\sum_{m=1}^{N_{j}}\left|\left|\mathbf{a}_{j}^{m,l}-\left[\mathbf{z}_{j}^{l} \left(\mathbf{z}_{j}^{l}\right)^{T}\right]\mathbf{a}_{j}^{m,l}\right|\right|^{2},\quad \text{for }l=1,2,\cdots,N_{\setminus j}.\] (13) Then, compute the gradient projection error \((E_{j})\) for \(x_{j}\) via \[E_{j}=\frac{1}{N_{\setminus j}}\sum_{l=1}^{N_{\setminus j}}\frac{E_{j}^{l}}{ \Omega_{j}^{l}},\] (14) where \[\Omega_{j}^{l}=\frac{1}{N_{j}}\sum_{m=1}^{N_{j}}\left|\left|\mathbf{a}_{j}^{m,l} \right|\right|^{2}\qquad\text{for }l=1,2,\cdots,N_{\setminus j}.\] (15) Note that we use the normalization factor \(\Omega_{j}^{l}\) in (14) to express the projection error \(E_{j}^{l}\) relative to the average squared length of the gradient vectors for that \(l\).
4. Finally, we choose the input variable \(x_{j_{1}}\) as the one having the smallest projection error, i.e., \(j_{1}=\underset{j}{\arg\min}\{E_{j}:j=1,2,\cdots,d\}\).
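The following NumPy/SciPy sketch implements steps 1-4 for Level 1, assuming a differentiable \(f\) with a supplied gradient function `f_grad`; it uses a standard Latin hypercube rather than the maximin variant discussed below, and the sample sizes are illustrative.

```python
import numpy as np
from scipy.stats import qmc

def projection_error(f_grad, j, d, n_lhd=50, n_grid=20, seed=0):
    """Gradient projection error E_j of Eq. (14) for candidate input x_j."""
    others = [k for k in range(d) if k != j]
    lhd = qmc.LatinHypercube(d=d - 1, seed=seed).random(n_lhd)  # x_{\j} samples
    grid = np.linspace(0.0, 1.0, n_grid)                        # evenly spaced x_j values
    total = 0.0
    for xl in lhd:
        A = np.empty((n_grid, d - 1))                # stacked gradient vectors, Eq. (12)
        for m, xj in enumerate(grid):
            x = np.empty(d)
            x[others], x[j] = xl, xj
            A[m] = f_grad(x)[others]                 # gradient w.r.t. x_{\j}
        z = np.linalg.svd(A, full_matrices=False)[2][0]  # leading eigenvector of A^T A
        E_l = np.sum((A - np.outer(A @ z, z)) ** 2)      # projection error, Eq. (13)
        Omega_l = np.mean(np.sum(A ** 2, axis=1))        # normalization, Eq. (15)
        total += E_l / Omega_l
    return total / n_lhd

def select_first_input(f_grad, d):
    """Step 4: pick j_1 as the input with the smallest projection error."""
    return int(np.argmin([projection_error(f_grad, j, d) for j in range(d)]))
```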
We use the maximin criterion (Johnson et al., 1990; McKay et al., 2000) to draw the LHD samples throughout the paper, which aims to maximize the minimum distance between any two samples in the LHD to enhance the space-filling property. Also note that the entire IANN modeling procedure uses LHDs for two different purposes: First, in the gradient projection algorithm above for determining the input ordering, and then again to generate the training data for the IANN model fitting (see Appendix B). For the former, we have found that a standard LHD works fine. For the latter, we have found that the modified LHD in Appendix B consistently works better than a standard LHD.
The algorithm for selecting the input for each subsequent level is similar: Suppose we have determined the order of input variables in the previous levels, \(\{x_{j_{1}},\cdots,x_{j_{i-1}}\}\) for some \(i>1\). Similar to the procedure for finding \(x_{j_{1}}\), for each candidate input index \(j_{i}\in\{1,2,\cdots,d\}\setminus\{j_{1},\cdots,j_{i-1}\}\), we consider the gradient of \(f(\mathbf{x})\) with respect to the remaining inputs \(\mathbf{x}_{\setminus(j_{1},\cdots,j_{i})}\), which by the chain rule is (if structure (5) holds with an equality),
\[\frac{\partial f}{\partial\mathbf{x}_{\setminus(j_{1},\cdots,j_{i})}}(\mathbf{x})=\frac{ \partial g_{1}}{\partial h_{1}}(\mathbf{x})\frac{\partial h_{1}}{\partial h_{2}}( \mathbf{x}_{\setminus j_{1}})\cdots\frac{\partial h_{i-1}}{\partial h_{i}}(\mathbf{x} _{\setminus(j_{1},\cdots,j_{i-1})})\nabla h_{i}(\mathbf{x}_{\setminus(j_{1}, \cdots,j_{i})}). \tag{16}\]
Then, if we fix \(\mathbf{x}_{\setminus(j_{1},\cdots,j_{i})}\) and vary \((x_{j_{1}},\cdots,x_{j_{i}})\) by taking some LHD samples in the \(\mathbf{x}_{(j_{1},\cdots,j_{i})}\) space, the vector \(\nabla h_{i}(\mathbf{x}_{\setminus(j_{1},\cdots,j_{i})})\) in (16) is a constant vector, so that the gradient vectors \(\frac{\partial f}{\partial\mathbf{x}_{\setminus(j_{1},\cdots,j_{i})}}(\mathbf{x})\) are all collinear and differ only in their magnitude and sign. Consequently, we apply the same gradient projection algorithm described above to the subsequent levels by substituting \(\mathbf{x}_{\setminus j_{1}}\) with \(\mathbf{x}_{\setminus(j_{1},\cdots,j_{i})}\) and selecting \(x_{j_{i}}\) to be the remaining input whose gradient vectors in Eq. (16) are the most collinear according to the gradient projection error measure analogous to (14). To avoid generating LHD samples repeatedly in each level, we only take LHD samples once in the \(d\)-dimensional input space and then project them into the lower dimensional spaces where needed.
## 5 Numerical Experiments
This section provides two examples of visualizing functions for which we have a closed-form "ground truth" expression to compare to our IANN visualizations. The Supplementary Materials section provides an example in which the function is an actual surrogate model of a complex numerical simulation of the potential energy of a strain-loaded material sample.
### IANN for the borehole function
To illustrate how IANN plots help with the visualization and interpretation of a function, we use the borehole function
\[f(\mathbf{x})=2\pi x_{1}\left(x_{4}-x_{6}\right)\left[\log\left(\frac{x_{2}}{x_{3 }}\right)\left(1+2\frac{x_{7}x_{1}}{\log(\frac{x_{2}}{x_{3}})x_{3}^{2}x_{8}} +\frac{x_{1}}{x_{5}}\right)\right]^{-1}, \tag{17}\]
which models the water flow between two aquifers. Although the functional form in (17) is known, it is commonly treated as a black-box function for evaluating surrogate modeling methods (An and Owen, 2001; Harper and Gupta, 1983; Morris et al., 1993; Surjanovic and
Bingham, 2013). The response \(f\) is the water flow rate between the aquifers. Here, \(d=8\), \(x_{1}\in[0.05,0.15]\) and \(x_{2}\in[100,50\,000]\) represent the radius of a borehole and its influence respectively, \(x_{3}\in[63\,070,115\,600]\) and \(x_{5}\in[63.1,116]\) represent transmissivity of the upper and lower aquifer, \(x_{4}\in[990,1110]\) and \(x_{6}\in[700,820]\) represent the potentiometric head of the upper and lower aquifer, \(x_{7}\in[1120,1680]\) represents the length of the borehole, and \(x_{8}\in[9855,12\,045]\) represents the hydraulic conductivity of the borehole.
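For reference, the following sketch transcribes Eq. (17) as printed, together with the min-max normalization of the stated input ranges; it is a direct transcription for illustration, not our fitting code.

```python
import numpy as np

# stated input ranges for (x1, ..., x8)
BOUNDS = np.array([[0.05, 0.15], [100, 50_000], [63_070, 115_600], [990, 1110],
                   [63.1, 116], [700, 820], [1120, 1680], [9855, 12_045]])

def borehole(x):
    """Direct transcription of Eq. (17); x has shape (..., 8) on the original scales."""
    x1, x2, x3, x4, x5, x6, x7, x8 = np.moveaxis(np.asarray(x), -1, 0)
    log_ratio = np.log(x2 / x3)
    bracket = log_ratio * (1 + 2 * x7 * x1 / (log_ratio * x3**2 * x8) + x1 / x5)
    return 2 * np.pi * x1 * (x4 - x6) / bracket

def normalize(x):
    """Min-max rescaling of each input to [0, 1] before fitting the IANN."""
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    return (x - lo) / (hi - lo)
```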
Since the borehole function has eight inputs, each with different scales, we first use min-max normalization to rescale the range of inputs to \([0,1]\) when fitting the IANN model. The OVH structure would hierarchically generate seven 3D visualization plots. Instead, we use the DASH structure to interpret the effects of all the input variables through four linear combinations and their hierarchical ordering, which were automatically determined using the algorithm described in Appendix A (similar to the one in Section 4).
Regarding computational expense, the algorithm took 96 seconds to determine the input ordering for the borehole example. The main computational expense is in fitting the single IANN after determining the input ordering, which took 245 seconds for the borehole example. The computational expense of fitting the single IANN is comparable to that of fitting standard neural networks with comparable numbers of parameters. One can apply cross-validation to choose the optimal hyperparameters in our IANN model, the computational expense of which is proportional to the time for fitting a single IANN.
In the event that users want to understand and interpret the effect of one input in particular, they can manually choose the ordering of the linear combination groups so that the first group contains the input variable of interest. Table 1 shows the ordered groups produced by the algorithms for the DASH IANN structure with the constraint that the first group is the one containing the input of particular interest. The eight rows show the resulting orderings with each of the eight inputs specified as being of particular interest. This way, one can more directly visualize its effect on \(f\) from the Level 1 plot. The order
of the remaining linear combinations is shown in the second column in Table 1. For each of the orderings in Table 1, we also show the resulting test \(r^{2}\). Figure 9 shows the IANN visualization plot with the highest test \(r^{2}\).
For comparison, we use global sensitivity analysis (Herman and Usher, 2017) to compute the total sensitivity indices for \(\{x_{1},x_{2},\cdots,x_{8}\}\), which are \(\{0.0,0.0,0.174,0.26,0.0,0.261,0.258,0.063\}\). This suggests that \(x_{1},x_{2}\) and \(x_{5}\) have little effect on the borehole function, which is also reflected in the IANN plot in Figure 9.
\begin{table}
\begin{tabular}{|c|c|c|} \hline index \(j\) & input groups and ordering & test \(r^{2}\) \\ \hline
1 & [[1, 2, 3, 5], [4, 6], [7], [8]] & 99.97\% \\ \hline
2 & [[1, 2, 3, 5], [4, 6], [7], [8]] & 99.97\% \\ \hline
3 & [[1, 2, 3, 5], [4, 6], [7], [8]] & 99.97\% \\ \hline
4 & [[4, 6], [1, 2, 3, 5], [7], [8]] & 99.99\% \\ \hline
5 & [[1, 2, 3, 5], [4, 6], [7], [8]] & 99.97\% \\ \hline
6 & [[4, 6], [1, 2, 3, 5], [7], [8]] & 99.99\% \\ \hline
7 & [[7], [4, 6], [1, 2, 3, 5], [8]] & 99.94\% \\ \hline
8 & [[8], [4, 6], [1, 2, 3, 5], [7]] & 99.99\% \\ \hline \end{tabular}
\end{table}
Table 1: For the borehole function with the DASH structure assumed, a listing of the \(p=4\) disjoint linear combination groups and their orderings, which were automatically determined by the algorithms for the DASH IANN structure under the constraint that the first group contains the specified input variable of particular interest, shown in Column 1.

We used 34,000 training samples for fitting our IANN and \(10^{6}\) randomly generated test input samples to compute the test \(r^{2}\), which was \(99.99\%\) for the borehole function. The coefficients of the (normalized) inputs in each linear combination are listed below the plots in Figure 9. Since the inputs have been normalized to \([0,1]\), comparing the magnitudes of the coefficients suggests the relative importance of the inputs involved in each linear combination. For example, from the Level 1 plot in Figure 9, \(v_{1}\) consists of two input variables, \(x_{4}\) and \(x_{6}\), and the corresponding coefficients have the same magnitude with different signs. This agrees with the formula for the borehole function in Eq. (17), which can be written as \(f(\mathbf{x})=(x_{4}-x_{6})h_{1}(\mathbf{x}_{\setminus(4,6)})\) for \(h_{1}=2\pi x_{1}\left[\log\left(\frac{x_{2}}{x_{3}}\right)\left(1+2\frac{x_{7}x_{1}}{\log(\frac{x_{2}}{x_{3}})x_{3}^{2}x_{8}}+\frac{x_{1}}{x_{5}}\right)\right]^{-1}.\) From the Level 2 plot, the coefficients are almost zero except for the input variable \(x_{3}\). This also agrees with the global sensitivity analysis, which shows that \(x_{1},x_{2},x_{5}\) have little impact on the response \(f\).
As a reference for comparison, the test \(r^{2}\) for a linear model fitted to the borehole function is \(94.68\%\), which suggests that the borehole function is almost linear over the specified input domain. The examples in the subsequent sections demonstrate the interpretability of the IANN visualization for functions having a higher level of nonlinearity.
Figure 9: DASH structure IANN visualization for the borehole function \(f\) in Eq. (17). The axes of the plots and the \(\mathbf{\beta}\) coefficients listed below the plots correspond to the normalized inputs.
### IANN for the harmonic wave function
Reconsider the harmonic wave function \(f(\mathbf{x})\) in Eq. (2), for which the IANN visualizations for only Level 1 were shown in Figures 1 and 3. We first considered the DASH structure to approximate \(f\), but it found no relevant disjoint linear combinations other than the trivial one with \(p=d\) and each \(v_{i}\) a single input variable. Consequently, we use the OVH structure. More generally, we recommend that users first try the DASH structure, and if it does not produce any relevant linear combinations that result in an acceptably high test \(r^{2}\), then users should use the OVH structure.
Figure 10: OVH structure IANN visualization of the harmonic wave function in Eq. (2). The test \(r^{2}=99.95\%\).

From the Level 1 plot in Figure 10 (which is similar to Figure 1), we see that \(x_{1}\) has a linear effect on \(f(\mathbf{x})\) when the other variables are fixed, and \(f(\mathbf{x})\) is sinusoidal for each fixed value of \(x_{1}\), which agrees with Eq. (2). From the Level 2 plot, \(h_{1}(\mathbf{x}_{\setminus 1})\) is linear in \(x_{4}\) and \(h_{2}\), and \(x_{4}\) and \(h_{2}\) have little interaction, which suggests the additive structure \(h_{1}(\mathbf{x}_{\setminus 1})=\alpha_{1}h_{2}(\mathbf{x}_{\setminus(1,4)})+\alpha_{2}x_{4}\) for some constants \(\alpha_{1}\) and \(\alpha_{2}\). Therefore, \(x_{4}\) reflects the phase angle in the harmonic function. From the Level 3 plot, we can see that \(h_{2}\) is linear in \(x_{3}\) for each fixed \(x_{2}\) and nonlinear in \(x_{2}\) for some fixed \(x_{3}\), especially when \(x_{3}\) is larger. Moreover, it is clear from the Level 3 plot that \(x_{2}\) has zero effect on \(h_{2}\) when \(x_{3}=0\), versus a large and nonlinear effect on \(h_{2}\) when \(x_{3}=1\), which also agrees with the "true" expression for \(h_{2}\) (\(=2\pi\frac{x_{3}}{x_{2}}\) up to some constants) from Eq. (2). Likewise from the Level 3 plot, as we decrease \(x_{2}\), \(x_{3}\) has a far larger effect on \(h_{2}\), and thus a far larger effect on \(h_{1}\) (from the Level 2 plot) and on \(f\) (from the Level 1 plot). From this, for smaller \(x_{2}\), \(f\) undergoes more sinusoidal cycles as \(x_{3}\) varies over its full range, which agrees with the observation from Eq. (2) that \(x_{2}\) represents the wavelength of the sinusoidal function. In this sense, the IANN plot truly reflects the physical interpretation of the input variables.
One can find more examples in Appendix C that use our IANN approach to visualize actual black-box functions, each a surrogate model fit to the output of a computer simulation of a physical system.
## 6 Conclusions and Potential Extensions
This paper introduced the IANN approach to visualize and interpret black box functions. We have shown that, theoretically, any continuous function on \([0,1]^{d}\) can be arbitrarily closely approximated by our more interpretable structure, which can be conveniently represented with the proposed IANN architecture. To visualize the effects of all the input variables, we developed two hierarchical structures (OVH and DASH), each of which can be represented with a particular IANN architecture. We have also developed algorithms to automatically determine the ordering of input variables with the goal of providing the best approximation to the original function for each hierarchical structure. We have used a number of examples to demonstrate the interpretability advantages of our IANN method.
We envision several potential extensions of our IANN approach to either enhance the interpretation of the original function \(f\) and/or to expand the class of \(f\) to which our IANN provides a good approximation. One way to enhance interpretability with the IANN visualization plots is to develop customized graphical user interfaces (GUIs), with which users can change the value of each input variable \(\{x_{1},x_{2},\cdots,x_{d}\}\) via slide-bar controls and interactively visualize the corresponding points \(\{f,h_{1},\cdots,h_{d-2}\}\) on the IANN visualization
surfaces plotted in each level. Moreover, to facilitate the creation of 3D trellis plots of \(f\) vs \(\{x_{j},x_{l}\}\) for some collection of fixed values of \(\mathbf{x}_{\setminus(j,l)}\) (which normally requires careful selection of many fixed \(\mathbf{x}_{\setminus(j,l)}\) values when \(d\) is larger), one could modify the IANN architecture to fit a model of the form
\[f(\mathbf{x})=g(x_{j},x_{l},h(\mathbf{x}_{\setminus(j,l)})), \tag{18}\]
in which case users need only select a few fixed values of the _scalar_ \(h(\mathbf{x}_{\setminus(j,l)})\), instead of selecting many fixed values of the higher-dimensional \(\mathbf{x}_{\setminus(j,l)}\).
One might also consider removing the disjoint restriction on the linear combinations in the DASH structure, which would allow each input to appear in multiple combinations. The main reason we did not pursue this is that we are prioritizing interpretability of the effects of individual inputs over generality of the IANN approach. Having overlapping linear combinations would mean that the same input variable is present in multiple linear combinations, which makes the interpretation of that individual input variable less clear. Considering that the disjoint set restriction in the DASH structure becomes less restrictive and more general as the number of linear combinations increases (in the limiting case, each linear combination is a single input, in which case the DASH structure reduces to the OVH structure), we thought the disjoint restriction constitutes a reasonable tradeoff between generality and interpretability. As future work, we plan to explore this issue further, but as yet we are unsure how to preserve interpretability with overlapping linear combinations.
A related extension is to use the modified architecture (18) and simply plot \(f\) vs \(\{x_{j},x_{l}\}\) with \(h(\mathbf{x}_{\setminus(j,l)})\) controlled by a slide bar, which would be interpreted similarly to 3D trellis plots but would completely avoid having to select any fixed values related to the omitted variables \(\mathbf{x}_{\setminus(j,l)}\). Analogous to the OVH structure in Section 3.1, one could further decompose \(h\) in Eq. (18) as a function of two additional inputs and a second latent function that is likewise controlled by a slide bar in a Level 2 plot of \(h\) versus the two additional inputs, and so on in subsequent levels. This extension would reduce the number of required levels by roughly
half in the IANN visualizations and would also expand the applicability of the approach from functions that can be represented by (1) to functions that can be represented by (18).
Another potential extension is to use a dichotomous tree IANN structure, for which the Level 1 plot is of \(f(\mathbf{x})=g_{1}(h_{1,1}(\mathbf{x}_{J_{1,1}}),h_{1,2}(\mathbf{x}_{J_{1,2}}))\) vs \(h_{1,1}(\mathbf{x}_{J_{1,1}})\) and \(h_{1,2}(\mathbf{x}_{J_{1,2}})\), where \(J_{1,1}\) and \(J_{1,2}\) are a disjoint partition of \(\{1,2,\cdots,d\}\). Each latent \(h\) function can then be decomposed similarly in the subsequent levels to produce a tree-like nested set of 3D plots. To find the indices group partition at each level, we anticipate that something similar to the IANN algorithm in Section 4 can be developed. The main benefit of this extension is that it expands the class of \(f\) to which our IANN provides a good approximation (the IANN structure in (5) is a special case of the tree IANN structure with each group of inputs partitioned into a single input and the remaining inputs) and potentially reduces the depth of the tree in the IANN visualization.
Another potential extension is from the current setting of visualizing black box simulation functions to visualizing general supervised learning models fit to observational data, in order to interpret the effects of the input variables. The main challenge is that, unlike black box simulation functions for which the entire (rectangular) input space is meaningful, the input training data for supervised learning models are often highly correlated. Visualizing black box supervised learning models using our current IANN approach would require extrapolation to regions of the input space where data are scarce, which would render the interpretations unreliable.
## Supplementary Materials
**Supplemental Materials**: Several topics will be covered here, including the algorithms for the DASH IANN structure, customized LHD sampling techniques, additional numerical examples, and the proof of Theorem 2.1. (IANN-supplementary.pdf)
**Python package for IANN**: Python package "IANN" containing code to perform the
IANN method described in the article and the related datasets. (iann-codes.zip, zipped codes for IANN)
## Disclosure Statement
The authors report there are no competing interests to declare.
## Acknowledgements
This work was funded in part by the Air Force Office of Scientific Research Grant # FA9550-18-1-0381, which we gratefully acknowledge.
|
2304.07152 | Combining Stochastic Explainers and Subgraph Neural Networks can
Increase Expressivity and Interpretability | Subgraph-enhanced graph neural networks (SGNN) can increase the expressive
power of the standard message-passing framework. This model family represents
each graph as a collection of subgraphs, generally extracted by random sampling
or with hand-crafted heuristics. Our key observation is that by selecting
"meaningful" subgraphs, besides improving the expressivity of a GNN, it is also
possible to obtain interpretable results. For this purpose, we introduce a
novel framework that jointly predicts the class of the graph and a set of
explanatory sparse subgraphs, which can be analyzed to understand the decision
process of the classifier. We compare the performance of our framework against
standard subgraph extraction policies, like random node/edge deletion
strategies. The subgraphs produced by our framework allow to achieve comparable
performance in terms of accuracy, with the additional benefit of providing
explanations. | Indro Spinelli, Michele Guerra, Filippo Maria Bianchi, Simone Scardapane | 2023-04-14T14:21:20Z | http://arxiv.org/abs/2304.07152v1 | Combining Stochastic Explainers and Subgraph Neural Networks can Increase Expressivity and Interpretability
###### Abstract
Subgraph-enhanced graph neural networks (SGNN) can increase the expressive power of the standard message-passing framework. This model family represents each graph as a collection of subgraphs, generally extracted by random sampling or with hand-crafted heuristics. Our key observation is that by selecting "meaningful" subgraphs, besides improving the expressivity of a GNN, it is also possible to obtain interpretable results. For this purpose, we introduce a novel framework that jointly predicts the class of the graph and a set of explanatory sparse subgraphs, which can be analyzed to understand the decision process of the classifier. We compare the performance of our framework against standard subgraph extraction policies, like random node/edge deletion strategies. The subgraphs produced by our framework allow to achieve comparable performance in terms of accuracy, with the additional benefit of providing explanations.
## 1 Introduction
Graph neural networks (GNNs) are neural network models designed to adapt to and perform inference on graph domains [1]. While a few models were already proposed between 2005 and 2009 [2, 3], the interest in GNNs has increased dramatically over the last few years, thanks to the broader availability of data, processing power, and automatic differentiation frameworks. GNNs are now the state-of-the-art solution in a wide range of scenarios. Nevertheless, regulations require not only high task performance but also a transparent decision process [4].
For this reason, several researchers have investigated techniques to explain GNNs' predictions, primarily identifying the most critical portions of the graph that contributed to producing a particular inference. The vast majority of these techniques [5, 6] provide a post-hoc explanation, thus inferring the reasons that led a trained model to a specific outcome. However, recent efforts toward "explainable-by-design" GNNs rather than post-hoc explainers are opening up new, interesting approaches. For example, in [7], the authors introduce an explainability term to let the network converge to an "interpretable" local minimum that facilitates the work of post-hoc explanation algorithms. Solutions like [8, 9] discard post-hoc algorithms entirely, providing explanations directly in the main model output.
On a separate line of research, recent studies [10, 11, 12] demonstrated that by providing the GNN with subgraphs that give different views of the same graph, it is possible to increase the expressive power of the standard message-passing framework.
We propose to connect these two topics and build an explainable by-design subgraph-enhanced GNN. We use a data-driven approach to learn small and representative subgraphs that increase the expressive power for the downstream task and that can be used as explanations.
## 2 Related Works
### Explainability in GNNs
GNNs are generally seen as a black-box since they form decisions in high-dimensional and nonlinear spaces. However, explainability techniques enable humans to interpret the decision process of GNNs, discover potential sources of error, find biases and limitations in the model and learn more about the data and the task at hand. Most approaches in the literature are post-hoc explainers differentiated according to the techniques used to explain a trained GNN. In particular, there are gradient-based approaches [13], perturbation-based approaches [14, 5, 6], decomposition methods [15], and counterfactual explainers like [16] and GEM [17].
Of particular interest for this work is **PGExplainer**[6], which uses a small network to parametrize the probability \(\omega_{ij}\) of each edge being part of the explanatory subgraph, and samples from this distribution to obtain the final explanation subgraph, characterized by edges \(e_{ij}\). The optimization objective maximizes the mutual information between the masked prediction and the prediction obtained from the original graph. In addition, an element-wise entropy term encourages sparsity on \(\omega\), and an \(l_{1}\)-norm forces small explanation subgraphs.
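The following sketch conveys the flavor of this parameterization; the two-layer MLP over concatenated endpoint embeddings, the logistic (binary concrete) relaxation, and the cross-entropy proxy for the mutual-information objective are common choices assumed here for illustration, and they are not guaranteed to match the reference implementation of [6]. Here `z` holds node embeddings from a trained GNN and `edge_index` is a (2, E) tensor of edge endpoints.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeMaskNet(nn.Module):
    """Small network producing a probability omega_ij for every edge."""
    def __init__(self, emb_dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, z, edge_index, tau=1.0):
        # logits from the concatenated embeddings of each edge's endpoints
        pair = torch.cat([z[edge_index[0]], z[edge_index[1]]], dim=1)
        logits = self.mlp(pair).squeeze(-1)
        if self.training:                          # logistic (binary concrete) noise
            u = torch.rand_like(logits)
            logits = logits + torch.log(u) - torch.log1p(-u)
        return torch.sigmoid(logits / tau)         # soft edge weights

def explainer_loss(masked_logits, orig_pred, e, l1_coef=0.01, ent_coef=0.1):
    """Cross-entropy to the original prediction (a proxy for maximizing MI),
    plus the l1 and element-wise entropy regularizers on the mask e."""
    ce = F.cross_entropy(masked_logits, orig_pred)
    ent = -(e * torch.log(e + 1e-8) + (1 - e) * torch.log(1 - e + 1e-8)).mean()
    return ce + l1_coef * e.sum() + ent_coef * ent
```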
Fewer research efforts have been devoted to explainable-by-design GNNs. ProtGNN [8] first learns some subgraph prototypes from the dataset. Then, they compare the inputs to these prototypes in the latent space. Furthermore, they introduce a subgraph sampling algorithm to highlight which component in the input is most similar to each learned prototype. GIB [9], instead, uses a bi-level optimization scheme to find the IB-subgraph, which is the most informative yet compressed subgraph. This subgraph is the one that maximizes the mutual information (MI) or shares the same properties with the original input. GSAT [18], which builds upon GIB, injects stochasticity to the attention weights to block the information from task-irrelevant graph components while learning task-relevant subgraphs for interpretation.
### Subgraph-enhanced graph neural networks
The study of the expressive power of GNNs has always been of central interest to the community. Most GNNs operating via local neighbourhood aggregation are at most as powerful as the Weisfeiler-Leman (1-WL) graph-isomorphism test [19]. It has recently been shown that it is possible to create more expressive GNNs using standard architectures that process several subgraphs of the input graph [10, 11]. In particular, **ESAN**[10] represents each graph as a bag of subgraphs \(\{\mathcal{G}_{1},\ldots,\mathcal{G}_{m}\}\) chosen according to some predefined policy, e.g., all graphs obtainable by removing one edge (edge deleted strategy) or one node (node deleted strategy) from the original graph. The encoder implements a module \(L_{1}\), consisting of several message-passing layers that process each subgraph independently, and a second message-passing module \(L_{2}\) that processes the aggregation of the subgraphs, working as an information-sharing module for the subgraphs. For example, to compute the embedding \(H_{i}\) for the subgraph \(\mathcal{G}_{i}\), they use the following procedure:
\[\mathrm{H}_{i}=L_{1}\left(\mathcal{G}_{i}\right)+L_{2}\left(\sum_{j}\mathcal{ G}_{j}\right)\,. \tag{1}\]
Then, in the last layer of the encoder, a pooling operation aggregates the node embeddings into subgraph embeddings with a global pooling operation. Finally, a set learning module [20] aggregates the obtained subgraph representations into a single one used in downstream tasks. The authors of [10] name this general configuration DSS-GNN.
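A schematic rendering of the DSS-GNN layer in Eq. (1) with dense adjacencies is sketched below; the toy message-passing block and the summation used to aggregate the bag are illustrative assumptions, not the ESAN implementation.

```python
import torch
import torch.nn as nn

class DenseMP(nn.Module):
    """Toy message-passing block: H = ReLU((A + I) X W)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, A, X):
        eye = torch.eye(A.size(-1), device=A.device)
        return torch.relu(self.lin((A + eye) @ X))

class DSSLayer(nn.Module):
    """Eq. (1): H_i = L1(G_i) + L2(sum_j G_j) over a bag of m subgraphs."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.L1 = DenseMP(d_in, d_out)   # processes each subgraph independently
        self.L2 = DenseMP(d_in, d_out)   # information sharing across the bag

    def forward(self, As, X):            # As: (m, n, n) subgraph adjacencies, X: (n, d_in)
        A_sum = As.sum(dim=0)            # aggregation of the bag of subgraphs
        return torch.stack([self.L1(A, X) + self.L2(A_sum, X) for A in As])
```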
Recently k-OSANs [12] developed a data-driven policy for the subgraph selection that improves the predictive performances compared to the simple policies used in ESAN.
## 3 Proposed Framework
In this work, we consider an undirected and unweighted graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{1,\ldots,n\}\) is the set of node indexes, and \(\mathcal{E}\subseteq\{(i,j)\mid i,j\in\mathcal{V}\}\) is the set of edges connecting pairs of nodes. The
entities and relationships represented by nodes and edges depend on the data and the application. The graph topology can be represented by the adjacency matrix \(\mathbf{A}\in\left\{0,1\right\}^{n\times n}\). Other operators matching the sparsity pattern of \(\mathbf{A}\), such as the graph Laplacian, can be used to define the weighted connectivity of the graph. Letting \(\mathbf{D}\) be one of these operators encoding the topological information, a GNN layer is defined by (ignoring biases for simplicity, but w.l.o.g.):
\[\mathbf{H}=\phi\left(\mathbf{D}\mathbf{X}\mathbf{W}\right)\,, \tag{2}\]
where \(\mathbf{X}\in\mathbb{R}^{n\times d}\) is a matrix collecting all vertex features row-wise, \(\mathbf{W}\in\mathbb{R}^{d\times q}\) is a matrix of trainable coefficients, and \(\phi\) is an element-wise non-linear function (e.g., for ReLU \(\phi(s)=\max\left(0,s\right)\)). A GNN can stack multiple layers in the form of (2) to learn highly nonlinear node representations that account for larger neighbourhoods on the graph. In this work, we are interested in graph classification tasks. This task requires a global pooling operation, such as mean, max, or min pooling, to build a graph representation from the node representations produced by stacking the layers described in Equation (2).
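As a minimal sketch, a graph classifier built from layers of the form (2) with a mean-pooling readout might look as follows; the depth, width, and choice of mean pooling are illustrative assumptions, and the operator \(\mathbf{D}\) is assumed precomputed (e.g., a normalized adjacency).

```python
import torch
import torch.nn as nn

class GraphClassifier(nn.Module):
    """Stack of Eq. (2) layers followed by a global mean-pooling readout."""
    def __init__(self, d_in, hidden, n_classes, n_layers=4):
        super().__init__()
        dims = [d_in] + [hidden] * n_layers
        self.W = nn.ModuleList(nn.Linear(a, b, bias=False)     # biases ignored as in Eq. (2)
                               for a, b in zip(dims[:-1], dims[1:]))
        self.readout = nn.Linear(hidden, n_classes)

    def forward(self, D, X):             # D: (n, n) topology operator, X: (n, d_in)
        H = X
        for lin in self.W:
            H = torch.relu(lin(D @ H))   # H = phi(D X W), Eq. (2)
        return self.readout(H.mean(dim=0))  # global mean pooling over nodes
```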
Our goal is to develop a framework that jointly predicts the graph class and the explanation masks, highlighting the parts of the graph that contribute the most to the prediction. The key to our framework is the role played by the subgraphs, which are learned end-to-end by optimizing the classification loss. Firstly, they must improve the expressive power of the base GNN classifier. Secondly, they must serve as explanation masks by selecting the parts of the input that contribute most to determining the correct class. We expect such subgraphs to be more informative than post-hoc explanation masks since they are directly generated by the model to maximize classification performance.
We summarize our framework, which consists of three main steps highlighted in Fig. 1. In the first step, we train the backbone of the SGNN, which is a classifier that processes the original graph. In the second step, we apply PGExplainer with a minor modification to circumvent the "introduced evidence" issue [21] caused by soft masks. Specifically, we binarize the soft weights to create "hard" explanation masks. Since "hard" explanations are not differentiable, we apply a straight-through estimator [22] to let gradients flow in the backward pass. These modifications lead to design choices for the regularization terms that differ from those originally used in PGExplainer. In particular, we apply the regularization directly on the "hard" explanation mask \(e\) instead of \(\omega\). Therefore, we remove the element-wise entropy term, and the \(l_{1}\)-norm in this case is equivalent to an \(l_{0}\)-norm minimizing the number of edges appearing in the explanation.
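A minimal sketch of the hard masking with a straight-through estimator is given below; the threshold value is an illustrative assumption.

```python
import torch

def hard_mask(omega, threshold=0.5):
    """Binarize soft edge weights omega with a straight-through estimator:
    the forward pass uses the hard 0/1 mask, the backward pass the identity."""
    hard = (omega > threshold).float()
    return hard + omega - omega.detach()   # value equals hard; gradient flows to omega
```

With this trick, the forward pass uses the binary mask \(e\) while gradients are propagated as if the mask equaled \(\omega\), so the \(l_{0}\)-style penalty on the number of selected edges remains trainable.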
While PGExplainer generates a unique explanation subgraph, we are interested in collecting
Fig. 1: In the first step, we train the backbone of the SGNN, a GIN classifier, using the original graphs. In the second step, we train the explainer with the original graph and the backbone’s predictions. We then create the new representation consisting of bags of explanation subgraphs. Finally, in the third step, we train the whole SGNN framework, fine-tuning the backbone using the explanatory subgraphs. Our model outputs the predicted label and an explanation obtained by combining all the subgraphs used during training.
several subgraphs to train the subgraph-based classifier. Therefore, we devise two strategies to obtain a heterogeneous yet meaningful set of subgraphs. The first is to use the reparametrization trick in the inference phase as well, allowing the noise, which can be tuned as a hyperparameter, to inject diversity and produce multiple different subgraphs. In the second approach, we introduce a budget on the explanation: we adapt the binarization threshold to obtain a set of explanations with \(K\) edges, where \(K\) ranges from \(5\%\) to \(75\%\) of the initial number of edges in the original graph. An example of these two strategies is presented in Figure 2 on the synthetic dataset BA-2Motifs, where the five-node cycle is responsible for the prediction.
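A possible sketch of the budget-based strategy is shown below; the fraction grid mirrors the 5%-75% range above, while the function name and the use of `topk` are our own illustrative choices.

```python
import torch

def budget_masks(soft_weights: torch.Tensor, fractions=(0.05, 0.25, 0.50, 0.75)):
    """Adapt the binarization threshold so that each hard mask keeps
    exactly the top-K scoring edges, for several budgets K."""
    n_edges = soft_weights.numel()
    bag = []
    for f in fractions:
        k = max(1, int(f * n_edges))
        idx = torch.topk(soft_weights, k).indices
        mask = torch.zeros(n_edges)
        mask[idx] = 1.0
        bag.append(mask)                  # one explanation subgraph per budget
    return bag

weights = torch.rand(40)                  # soft edge weights from the explainer
subgraphs = budget_masks(weights)         # heterogeneous bag of subgraphs
```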
In the third step, we insert the backbone GNN into the DSS-GNN model introduced in ESAN [10]. The backbone works as the encoder that processes each explanation subgraph independently. A new message-passing module preprocesses the aggregation of the explanations, working as an information-sharing module for the subgraphs. Finally, a new set-learning module [20] aggregates the subgraph representations obtained after a global pooling operation into a single one used in downstream tasks. Besides being used by the information-sharing module, the subgraph obtained by aggregating all the explanations is used to explain the model's prediction. We stress that this explanation, rather than being computed by a post-hoc explainer alone, is used to train the model.
## 4 Experimental Evaluation
We want to show that our framework retains the classification performance of known SGNN architectures, with the added benefit of re-using the subgraphs involved in the computations as plausible explanations for the model's prediction. We used the same evaluation proposed in ESAN [10] on the TUD repository datasets. This consists of computing the average accuracy obtained by running a 10-fold cross-validation. We selected GIN [19] and DSS-GNN with the GIN backbone as baselines. We used the most compact model, consisting of 4 GIN layers with two linear layers of hidden size 32. The batch size is also 32. We selected the top-performing policies from [10]: node deleted (ND) and edge deleted (ED). Furthermore, we consider both their deterministic versions, which use the entire bag of subgraphs (referred to as "1.0"), and the sampled versions, which keep only 10% of the bag (referred to as "0.1"). Finally, we select our hyperparameters using grid search. We report the results in Table 1. We can observe that subgraph-enhanced GNNs achieve slightly better accuracies. We did not notice any empirical difference when using a subset of the bag of subgraphs instead of the complete one. Furthermore, as noted in [10], despite the model no longer being invariant, the sampled bag of subgraphs still increases the expressive power of the base encoder. Finally, the results obtained using explanatory subgraphs instead of randomly generated ones are comparable. However, we
Figure 2: The collection of subgraphs is represented as a heatmap. On the left are the subgraphs obtained with the noise-based strategy; on the right, those obtained with the progressive top-K selection. The motif responsible for the prediction is clearly visible in both cases; however, the top-K approach selects fewer spurious edges.
have the advantage that our bag of subgraphs forms an explanation, helping human experts extract new knowledge from the model.
## 5 Conclusion
We introduced a framework to bridge subgraph-enhanced GNNs with explainability techniques. First, we pre-train a base GNN so that we can train a post-hoc explainer, modified to satisfy the need for multiple, discrete explanation masks. Next, the explainer generates a bag of explanation subgraphs. Then, we resume the training of the base model, including equivariant layers, to process the new representation in the SGNN fashion. Finally, we combine the subgraphs into a unique heatmap highlighting the parts most responsible for the prediction, and output the prediction itself. This additional information allows human experts to interpret and extract knowledge from the model. The experimental evaluation showed that our model achieves classification performance comparable to state-of-the-art models while providing, at the same time, an explanation.
|
2307.14456 | AutoSourceID-Classifier. Star-Galaxy Classification using a
Convolutional Neural Network with Spatial Information | Aims. Traditional star-galaxy classification techniques often rely on feature
estimation from catalogues, a process susceptible to introducing inaccuracies,
thereby potentially jeopardizing the classification's reliability. Certain
galaxies, especially those not manifesting as extended sources, can be
misclassified when their shape parameters and flux solely drive the inference.
We aim to create a robust and accurate classification network for identifying
stars and galaxies directly from astronomical images. By leveraging
convolutional neural networks (CNN) and additional information about the source
position, we aim to accurately classify all stars and galaxies within a survey,
particularly those with a signal-to-noise ratio (S/N) near the detection limit.
Methods. The AutoSourceID-Classifier (ASID-C) algorithm developed here uses
32x32 pixel single filter band source cutouts generated by the previously
developed ASID-L code. ASID-C utilizes CNNs to distinguish these cutouts into
stars or galaxies, leveraging their strong feature-learning capabilities.
Subsequently, we employ a modified Platt Scaling calibration for the output of
the CNN. This technique ensures that the derived probabilities are effectively
calibrated, delivering precise and reliable results. Results. We show that
ASID-C, trained on MeerLICHT telescope images and using the Dark Energy Camera
Legacy Survey (DECaLS) morphological classification, outperforms similar codes
like SourceExtractor. ASID-C opens up new possibilities for accurate celestial
object classification, especially for sources with a S/N near the detection
limit. Potential applications of ASID-C, like real-time star-galaxy
classification and transient's host identification, promise significant
contributions to astronomical research. | F. Stoppa, S. Bhattacharyya, R. Ruiz de Austri, P. Vreeswijk, S. Caron, G. Zaharijas, S. Bloemen, G. Principe, D. Malyshev, V. Vodeb, P. J. Groot, E. Cator, G. Nelemans | 2023-07-26T18:55:36Z | http://arxiv.org/abs/2307.14456v1 | # AutoSourceID-Classifier
###### Abstract
Aims:Traditional star-galaxy classification techniques often rely on feature estimation from catalogues, a process susceptible to introducing inaccuracies, thereby potentially jeopardizing the classification's reliability. Certain galaxies, especially those not manifesting as extended sources, can be misclassified when their shape parameters and flux solely drive the inference. We aim to create a robust and accurate classification network for identifying stars and galaxies directly from astronomical images. By leveraging convolutional neural networks (CNN) and additional information about the source position, we aim to accurately classify all stars and galaxies within a survey, particularly those with a signal-to-noise ratio (S/N) near the detection limit.
Methods: The AutoSourceID-Classifier (ASID-C) algorithm developed here uses 32x32 pixel single filter band source cutouts generated by the previously developed ASID-L code. ASID-C utilizes CNNs to distinguish these cutouts into stars or galaxies, leveraging their strong feature-learning capabilities. Subsequently, we employ a modified Platt Scaling calibration for the output of the CNN. This technique ensures that the derived probabilities are effectively calibrated, delivering precise and reliable results.
Results: We show that ASID-C, trained on MeerLICHT telescope images and using the Dark Energy Camera Legacy Survey (DECALS) morphological classification, outperforms similar codes like SourceExtractor. ASID-C opens up new possibilities for accurate celestial object classification, especially for sources with a S/N near the detection limit. Potential applications of ASID-C, like real-time star-galaxy classification and transient's host identification, promise significant contributions to astronomical research.
## I Introduction
Current and future large-scale astronomical photometric surveys are amassing, and will continue to amass, vast quantities of photometric data, including millions of images containing billions of stars and galaxies. This data influx necessitates a series of processing steps, such as source localization, anomaly detection, feature extraction, and star-galaxy classification, that must be further optimized for efficiency in terms of both time and computational resources. With the advent of CMOS detectors capable of capturing large-format images of the night sky at cadences exceeding 1 Hz, traditional methods and software for data processing will be inadequate to keep pace with the data acquisition rate.
To address this challenge, our research aims to enhance and streamline existing methods using machine learning tools. In AutoSourceID-Light (ASID-L; Stoppa _et al._ 2022), we demonstrated the potential for rapid and accurate source identification in images, achieving this in a fraction of the time required by currently used methods. Building on this foundation, we now focus on star-galaxy classification, a fundamental data-processing task and often the initial step in the scientific exploitation of survey data. Despite advances in source localization and feature extraction algorithms, there remains significant potential for improvement in star-galaxy classification methods, which are often applied to catalogues rather than directly to source images (Weir _et al._ 1995, Ball _et al._ 2006, Vasconcellos _et al._ 2011, Sevilla-Noarbe _et al._ 2018). Current classification methods, including classification tree methods based on the morphological features of the sources and Bayesian approaches that integrate available source information with prior knowledge about nearby star and galaxy
2304.04315 | Microseismic source imaging using physics-informed neural networks with
hard constraints | Microseismic source imaging plays a significant role in passive seismic
monitoring. However, such a process is prone to failure due to aliasing when
dealing with sparsely measured data. Thus, we propose a direct microseismic
imaging framework based on physics-informed neural networks (PINNs), which can
generate focused source images, even with very sparse recordings. We use the
PINNs to represent a multi-frequency wavefield and then apply inverse Fourier
transform to extract the source image. To be more specific, we modify the
representation of the frequency-domain wavefield to inherently satisfy the
boundary conditions (the measured data on the surface) by means of a hard
constraint, which helps to avoid the difficulty in balancing the data and PDE
losses in PINNs. Furthermore, we propose the causality loss implementation with
respect to depth to enhance the convergence of PINNs. The numerical experiments
on the Overthrust model show that the method can admit reliable and accurate
source imaging for single- or multiple- sources and even in passive monitoring
settings. Compared with the time-reversal method, the results of the proposed
method are consistent with numerical methods but less noisy. Then, we further
apply our method to hydraulic fracturing monitoring field data, and demonstrate
that our method can correctly image the source with fewer artifacts. | Xinquan Huang, Tariq Alkhalifah | 2023-04-09T21:10:39Z | http://arxiv.org/abs/2304.04315v2 | # Microseismic source imaging using physics-informed neural networks with hard constraints
###### Abstract
Microseismic source imaging plays a significant role in passive seismic monitoring. However, such a process is prone to failure due to the aliasing problem when dealing with sparsely measured data. Thus, we propose a direct microseismic imaging framework based on physics-informed neural networks (PINNs), which can generate focused source images even with very sparse recordings. We use the PINNs to represent a multi-frequency wavefield and then apply the inverse Fourier transform to extract the source image. Specifically, we modify the representation of the frequency-domain wavefield to inherently satisfy the boundary conditions (the measured data on the surface) by means of a hard constraint, which helps to avoid the difficulty of balancing the data and PDE losses in PINNs. Furthermore, we propose the causality loss implementation with respect to depth to enhance the convergence of PINNs. The numerical experiments on the Overthrust model show that the method can admit reliable and accurate source imaging for single or multiple sources and even in passive monitoring settings. Then, we further apply our method to the hydraulic fracturing field data and demonstrate that our method can correctly image the source.
## Plain Language Summary
Source imaging plays an important role in seismology and exploration geophysics as a reliable tool to locate earthquake events and monitor human-induced microseismic events. The conventional process numerically propagates the waveform recorded, for example, on the Earth surface backward in time so that energy focuses at the source location. However, when dealing with sparse recordings, this numerical implementation fails to focus the energy at the source, as it encounters considerable numerical artifacts and energy leakage. In addition, its computational cost is high. Neural networks have the potential to solve these problems because of their efficient execution and great interpolation abilities. Here, we propose to train a neural network (NN) to learn the observed data from imperfect recordings and then incorporate the prediction of this small NN into another NN, which learns the wavefield solutions, by means of a hard constraint. The functional form of the NN allows for source imaging in irregular geometry and the proposed method could provide reliable and robust source imaging for sparse recordings like those we deal with in global seismology.
## 1 Introduction
Microseismic event location is the essential foundation of passive seismic monitoring. The common way to locate seismic events is based on travel-time inversion (Lienert et al., 1986), which requires time-consuming first-arrival travel-time picking. The process of picking arrivals becomes a challenge for low signal-to-noise data and in multiple-event cases. Moreover, the resolution and accuracy of the source locations from this type of method are relatively low. Directly making use
of the full waveform information in the seismograms can help us avoid such issues and allows for higher resolution and accuracy in locating the microseismic events. Kao and Shan (2004) proposed an imaging approach based on summing the amplitudes of the measured data at the corresponding estimated arrival times without the need for phase picking, called the source scanning algorithm. It is widely used in seismology, e.g., for detailed earthquake rupture imaging (Ishii et al., 2007). Another type of method based on full waveforms is time-reversal imaging (McMechan, 1982). Its key idea is to backpropagate the waveform recorded on the surface until the energy is focused. Gajewski and Tessmer (2005) utilized time-reversal imaging (TRI) to reverse the waveform in the time domain for source location. For this category of methods, the accuracy of the location results is largely determined by the imaging condition. In the past decade, many types of imaging conditions, including the maximum energy imaging condition (or the zero-lag cross-correlation imaging condition) (Artman et al., 2010; Yang and Zhu, 2019), the interferometric imaging condition for sparsely sampled data (Sava, 2011), the deconvolution imaging condition (Douma and Snieder, 2014) and the geometric mean imaging condition (Nakata and Beroza, 2016) for higher resolution, the maximum variance imaging condition (Wang and Alkhalifah, 2017), and the grouped seismic imaging condition (Lin et al., 2020; Chen et al., 2021) for noise-robust source location, have been proposed to enhance waveform-based location results. However, the accuracy of TRI is still an issue in the case of sparse and irregular observations. The other type of waveform-based method is to invert the observed data for a more accurate velocity model (Kamei and Lumley, 2014; Kaderli et al., 2018; Song et al., 2019; Wang and Alkhalifah, 2021). These methods require massive amounts of repeated wavefield computation, and thus, they are costly.
Thanks to recent advances in computing and algorithms, machine learning has achieved great success in many fields, e.g., natural language processing, computer vision, and science. Many computer-vision-based methods have been adapted to microseismic source location tasks (Perol et al., 2018; Kriegerowski et al., 2019; Zhang et al., 2020; Wang and Alkhalifah, 2021; Zhang et al., 2022; Chen et al., 2022). Most of these methods deal with source location in a supervised manner; that is, they train a neural network on simulated data with labels (source locations) available and then minimize the loss against these labels. However, the generalization of these methods to field data is a challenge due to the lack of a sufficiently diverse dataset. Labels for the field data are often obtained using conventional methods and human intervention, and thus, they are prone to errors. On the other hand, purely data-driven methods, which do not require labels, are an attractive alternative to the supervised approach. One way to devise such an approach is by incorporating physics priors, specifically by training the neural networks with physics constraints.
Based on the universal function approximation theorem (Hornik et al., 1989), neural networks can be utilized to represent functions like the seismic wavefield, which is a key component of the source location process using time-reversal imaging. Thus, we represent the seismic wavefield as a function of the spatial coordinates and frequency, yielding significant flexibility for simulation in irregular domains. The governing equations for seismic modeling are used as a loss function to optimize the neural network, following the physics-informed neural network (PINN) framework (Raissi et al., 2019). By means of PINNs for source imaging, the subsurface velocity information is embedded into the neural network, and it can image the source with irregular and sparse receivers.
In this paper, we propose a novel direct microseismic source imaging method by means of PINNs, in which the source image is a snapshot of the time-domain wavefield. We represent the wavefield as a function of the spatial coordinates and frequency via a neural network (NN) and then train the NN with the Helmholtz equation as a loss function. Specifically, we incorporate the observed data on the surface (the boundary condition for the Helmholtz equation) into the wavefield via a hard constraint and use the modified Helmholtz equation as the loss function. Besides, inspired by the fact that the information guiding the optimization of the NN parameters comes from the recorded data on the surface, we propose to impose causality on the loss calculation along the depth axis, yielding an ideal from-the-surface-down reconstruction of the wavefield, which can improve the convergence speed and accuracy. We first demonstrate the effectiveness of the proposed method in single- and multiple-source passive scenarios on the Overthrust model using sparsely sampled data. We further test our proposed method on the Oklahoma Arkoma Basin hydraulic fracturing data to highlight the benefits and potential of the proposed method.
In summary, our main contributions are three-fold:
* We develop a direct source imaging framework based on PINNs with hard constraints.
* To ensure a stable and reliable training process, we use a reference frequency loss function incorporated in the hard constraint implementation and impose causality on the loss function with respect to depth.
* We evaluate the proposed method on the synthetic data for two different synthetic cases and field data, and achieve reliable source location results, showing the potential for even global event location.
We first introduce the proposed source imaging framework and key concepts including PINNs with hard constraints, reference frequency loss function, causality implementation, and data fitting. To demonstrate the effectiveness of the proposed method, Sections 3 and 4 present the numerical examples and field tests. Finally, we discuss the potential of the approach and its limitations in Section 5 and conclude in Section 6.
## 2 Theory
### Source imaging in the form of wavefield modeling
The source location imaging problem can be formulated using frequency-domain wavefield modeling, which offers a reduction of dimensionality compared to time-domain wavefield modeling, but more importantly, it will allow us to implement a causal loss function in the depth direction. The wave equation in the frequency domain is described by the Helmholtz equation and for a 2D acoustic isotropic medium, it is, along with the data boundary condition, given by
\[\frac{\omega^{2}}{v^{2}(x,z)}u(x,z,\omega)+\nabla^{2}u(x,z,\omega )=0, \tag{1}\] \[u(x,z=0,\omega)=D(x,\omega), \tag{2}\]
where \(u(x,z,\omega)\) is the wavefield corresponding to the frequency \(\omega\) and location \((x,z)\), \(v(x,z)\) is the velocity, and \(D(x,\omega)\) is the recorded data on the surface, which can be transformed to the frequency domain using the fast Fourier transform (FFT). We use Equations 1 and 2 to solve for the wavefield and then transform it to the time domain, in which the source image corresponds to the slice of the time-domain wavefield where the energy focusing is highest.
However, frequency-domain modeling for irregular geometries and high frequencies, where fine grids are needed, is memory- and compute-intensive. It also requires solving for every frequency. On the other hand, function learning, by utilizing an NN, is a potential solution in view of its flexibility with irregular meshes and its fast inference speed. Thus, we use PINNs to represent the frequency-domain wavefield.
### Physics-informed neural networks with hard constraints
In seismic wavefield modeling, physics-informed neural networks (PINNs) can be used to find \(\Phi(x,z,\omega,\theta)\), with parameters \(\theta\), that maps the input spatial coordinates and frequency to the value of the complex wavefield satisfying the Helmholtz equation. A vanilla PINN typically includes two loss terms, one for each of Equations 1 and 2. Balancing the contributions of these two terms using a weight affects the convergence and accuracy of the solution. The hard-constraint concept (Berg and Nystrom, 2018; Schiassi et al., 2021) can alleviate this issue by modifying the representation of the wavefield \(u(x,z,\omega)\) to a new form in which the boundary condition is inherently satisfied. Specifically, in this paper, we propose to use the following form:
\[u(x,z,\omega)=D(x,\omega)+z\Phi(x,z,\omega,\theta). \tag{3}\]
Note that the zero-depth (\(z=0\)) frequency-domain wavefield in this case is equal to the recorded surface data \(D(x,\omega)\). By means of the hard constraint, the governing equations reduce to the single equation obtained by inserting Equation 3 into Equation 1, and thus, the loss function of the PINN reduces to the mean squared error of this single equation. Thus, the loss function with hard
constraints can be written as follows:
\[\begin{split}\mathcal{L}&=\frac{1}{N}\sum_{i=1}^{N} \left|\mathbf{z}^{i}\frac{(\omega^{i})^{2}}{(v^{i})^{2}}\Phi(\mathbf{x}^{i}, \mathbf{z}^{i},\omega^{i},\theta)+\mathbf{z}^{i}\nabla^{2}\Phi(\mathbf{x}^{i}, \mathbf{z}^{i},\omega^{i},\theta)+\right.\\ &\left.2\nabla_{z}\Phi(\mathbf{x}^{i},\mathbf{z}^{i},\omega^{i}, \theta)+\frac{(\omega^{i})^{2}}{(v^{i})^{2}}D(\mathbf{x}^{i},\omega^{i})+ \nabla_{x}^{2}D(\mathbf{x}^{i},\omega^{i})\right|_{2}^{2},\end{split} \tag{4}\]
where \(N\) is the number of samples used for training, which includes the samples with variable spatial coordinates \((x^{i},z^{i})\) and angular frequencies \(\omega^{i}\).
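To illustrate how the hard constraint of Equation 3 turns into the residual loss of Equation 4, the PyTorch sketch below evaluates the residual with automatic differentiation for a single real-valued wavefield component; the tiny network sizes, the constant velocity, and the single 5 Hz frequency are placeholders (the actual networks output real and imaginary parts).

```python
import torch
import torch.nn as nn

phi_net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))   # Phi(x, z, w)
data_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))  # D(x, w)

def helmholtz_residual(x, z, omega, v):
    """Residual of Eq. 1 under the ansatz u = D(x,w) + z*Phi(x,z,w) (Eq. 3);
    the boundary condition u(z=0) = D is satisfied by construction."""
    x, z = x.requires_grad_(True), z.requires_grad_(True)
    u = data_net(torch.cat([x, omega], -1)) + z * phi_net(torch.cat([x, z, omega], -1))
    g = lambda f, s: torch.autograd.grad(f.sum(), s, create_graph=True)[0]
    u_xx, u_zz = g(g(u, x), x), g(g(u, z), z)
    return (omega**2 / v**2) * u + u_xx + u_zz   # drive to zero at collocation points

x, z = torch.rand(128, 1), torch.rand(128, 1)    # random collocation points
omega = torch.full((128, 1), 2 * torch.pi * 5.0) # a single 5 Hz frequency
loss = helmholtz_residual(x, z, omega, v=2.0).pow(2).mean()  # Eq. 4 as an MSE
```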
### A modified loss function
Recall that if we do the source imaging in the frequency domain, then the wavefield at multiple frequencies is needed for the transformation to the time-domain wavefield. As shown in Huang and Alkhalifah (2022), direct use of the PINN with the loss in Equation 4 would decrease the accuracy and convergence speed. Similar to their implementation, we modify the loss function to a single reference-frequency loss by replacing \(\omega\) with \(\alpha\omega_{ref}\) and \(u(x,z,\omega)\) with \(D(x,\omega)+\alpha z\Phi(x,z,\omega,\theta)\), yielding
\[\begin{split}\mathcal{L}&=\frac{1}{N}\sum_{i=1}^{N} \left|\alpha\mathbf{z}^{i}\frac{\omega_{ref}^{2}}{(v^{i})^{2}}\Phi(\mathbf{x} ^{i},\mathbf{z}^{i},\omega^{i},\theta)+\alpha\mathbf{z}^{i}\frac{\partial^{2} \Phi(\mathbf{x}^{i},\mathbf{z}^{i},\omega^{i},\theta)}{\partial(\alpha x)^{2}} \right.\\ &\left.+\alpha\mathbf{z}^{i}\frac{\partial^{2}\Phi(\mathbf{x}^{i},\mathbf{z}^{i},\omega^{i},\theta)}{\partial(\alpha z)^{2}}+2\frac{\partial \Phi(\mathbf{x}^{i},\mathbf{z}^{i},\omega^{i},\theta)}{\partial(\alpha z)}+ \right.\\ &\left.\frac{\omega_{ref}^{2}}{(v^{i})^{2}}D(\mathbf{x}^{i},\omega ^{i})+\frac{\partial^{2}D(\mathbf{x}^{i},\omega^{i})}{\partial(\alpha x)^{2} }\right|_{2}^{2},\end{split} \tag{5}\]
where \(\alpha\) is a scaling factor equal to the ratio of the frequency \(\omega\) to the reference frequency \(\omega_{ref}\). With this scaling factor, the frequency variation is compensated by the scaling of the spatial dimensions, so that the wavelength within the wavefield apparently (to the NN) does not change much with frequency. Accordingly, the spatial dimensions of the velocity \(v\) are also scaled by \(\alpha\), to \(v(\alpha x,\alpha z)\). For details, we refer the reader to Huang and Alkhalifah (2022).
### Causality loss implementation
To improve the convergence of the PINN, we replace the conventional loss function with a causality loss implementation. Since the information guiding the wavefield comes from the data on the surface, we impose causality on the loss function along the depth axis, so that the wavefield is reconstructed from the Earth's surface down. This has an equivalent in imaging referred to as downward continuation. Like the loss calculation based on temporal causality (Wang et al., 2022), where the causality is with respect to time, here we apply it with respect to depth. In this paper, we formulate the solution with respect to an initial condition \(D(x,\omega)\) (for a solution in depth instead of time). The corresponding modified loss function is given by,
\[\mathcal{L}_{c}=\frac{1}{N_{z}}\sum_{i=1}^{N_{z}}\exp\left(-\epsilon\sum_{k=1 }^{i-1}\mathcal{L}\left(z_{k},\mathbf{\theta}\right)\right)\mathcal{L}\left(z_{i},\mathbf{\theta}\right), \tag{6}\]
where \(\mathcal{L}_{c}\) is the PDE loss with the causality implementation, \(\mathcal{L}(z_{i},\theta)\) is the loss (equation 5) corresponding to the parameters \(\theta\) at a depth of \(z_{i}\), \(N_{z}\) is the number of samples along the depth axis, and \(\epsilon\) is the causality parameter. It could be a constant hyperparameter, but we instead let it vary with the number of epochs. We use \(\epsilon=\epsilon_{0}+\frac{\lambda}{epoch+1}\), where \(\epsilon_{0}\) is the initial value of \(\epsilon\) and \(\lambda\) is a factor adjusting the change in \(\epsilon\). The causality implies that, for example, the solution at depth 0.5 m depends on the solution at depth 0.3 m, but not the opposite, since the initial condition is at depth zero. With this modification, the convergence is faster and the NN admits better predictions, as we will see later.
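A compact sketch of the weighting in equation 6 is given below; following the temporal-causality recipe, we assume the exponential weights are treated as constants (detached) during differentiation.

```python
import torch

def causal_pde_loss(loss_per_depth: torch.Tensor, eps: float) -> torch.Tensor:
    """Depth-causal loss (equation 6): the loss at depth z_i is down-weighted
    until the shallower depths, closer to the surface data, are well fit.
    `loss_per_depth` holds L(z_i, theta) ordered from shallow to deep."""
    cum = torch.cumsum(loss_per_depth, dim=0) - loss_per_depth  # sum_{k<i} L(z_k)
    weights = torch.exp(-eps * cum).detach()                    # stop-gradient weights
    return (weights * loss_per_depth).mean()

eps0, lam, epoch = 1e-7, 1e-5, 0
eps = eps0 + lam / (epoch + 1)            # epsilon decaying with the epoch count
loss = causal_pde_loss(torch.rand(161), eps)
```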
### Fitting the observed data with NNs
As suggested in Schiassi et al. (2021), the hard-constraint boundary condition can be stored in a neural network to simplify the calculation of the partial derivatives for the hard-constraint PINN
loss later. As a result, we fit a small multi-layer perceptron (MLP) to the surface observed data in a supervised fashion, so that it becomes an NN function of the horizontal receiver position and frequency. Such training allows fitting the NN to an irregular acquisition (receiver) geometry, which is common in field applications (e.g., missing receivers or non-uniform receiver intervals). The size of this data NN function controls the smoothness involved in the interpolation (Llanas and Sainz, 2006). More importantly, it allows for robust derivative calculation for equation 5 using automatic differentiation (AD). Here, the data-fitting branch is trained in a supervised manner, where the inputs of the NN are \(x\) and \(\omega\), and the outputs are the real and imaginary parts of the frequency-domain data at that location and frequency. The supervised loss function for the data fitting is then defined as follows:
\[\mathcal{L}_{d}(\theta)=\lambda\frac{1}{N_{r}\times N_{f}}\sum_{j=1}^{N_{r} \times N_{f}}\left|D\left(x_{j},\omega_{j}\right)-D_{\theta}\left(x_{j},\omega _{j}\right)\right|_{2}^{2}, \tag{7}\]
where \(N_{r}\) and \(N_{f}\) are the numbers of samples along the receivers and frequencies, respectively. Using all samples is not always needed, because noise in the samples may decrease the accuracy; we usually only need a random subset of the data points to train the NN. Here, we show a simple demonstration to highlight the versatility of the approach for data fitting in handling irregularly sampled data. We generate data for a small layered model (the same as the one used in Huang et al. (2021)) and place 100 receivers on the surface. We randomly drop receivers to imitate irregular data to train the network. As shown in Figure 1, the prediction looks reasonable even when using only 20% of the receivers for training. The second-order derivative w.r.t. \(x\) of the NN data function calculated by AD, which is needed for equation 5, also has good accuracy compared with the second-order derivative calculated using a 5-point central-difference formulation or AD with the full data.
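For concreteness, a minimal version of this supervised data-fitting branch (equation 7, without positional encoding) might look as follows; the receiver count, epoch count, and placeholder data are illustrative only.

```python
import torch
import torch.nn as nn

# Small MLP mapping (x, omega) to the real/imaginary parts of D(x, omega).
net = nn.Sequential(nn.Linear(2, 128), nn.Tanh(), nn.Linear(128, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_rec = torch.rand(70, 1)                       # irregular surviving receiver positions
omega = torch.full((70, 1), 2 * torch.pi * 3.0)
d_obs = torch.randn(70, 2)                      # Re/Im of the 3 Hz data (placeholder)

for _ in range(2000):                           # supervised fit of equation 7
    opt.zero_grad()
    loss = (net(torch.cat([x_rec, omega], -1)) - d_obs).pow(2).mean()
    loss.backward()
    opt.step()
```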
### The pipeline and the backbone NN architecture
As shown in Figure 2, the preprocessed (e.g., non-local means filtering, NLM) recorded data on the surface are first transformed into the frequency domain via FFT and then used to train a small NN
Figure 1: The comparison between the NN function and the 3 Hz observed data. a) shows the prediction of the NN function trained with 20% to 90% of the dataset, and b) is the comparison between the corresponding second-order derivatives calculated by AD or FD for various coverage percentages.
for data fitting. This trained NN will be used for the evaluation of the data term and the calculation of the second-order derivative by AD in the partial differential equation (PDE) fitting. In the PDE fitting branch, the parameters of the data-fitting branch are frozen, and another, larger NN is used to fit the PDE. To accelerate the convergence, we use the causality implementation of the PDE loss. After training, we predict the multi-frequency wavefield using the NN and utilize the inverse FFT to get the time-domain snapshot. The locations where the energy focuses are the source locations.
As for the implementation of the neural networks used in this pipeline, we utilize positional encoding [Huang et al., 2021] to help the network deal with the complex wavefield variations and to help the convergence of the NN. The backbone of the NN is an MLP (multilayer perceptron) with sine as the activation function. The sizes of the networks and the training details will be shared in the examples.
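As a sketch of the positional encoding, one common sinusoidal variant is shown below; assuming the raw inputs are concatenated with the sin/cos features, L = 4 gives 18 input features for (x, ω) and 27 for (x, z, ω), consistent with the network sizes {18, 128, 2} and {27, 256, ...} quoted in the field example.

```python
import torch

def positional_encoding(coords: torch.Tensor, L: int = 4) -> torch.Tensor:
    """Encode each coordinate at L frequency levels,
    [sin(2^l * pi * c), cos(2^l * pi * c)], concatenated with the raw input."""
    feats = [coords]
    for l in range(L):
        feats += [torch.sin(2.0**l * torch.pi * coords),
                  torch.cos(2.0**l * torch.pi * coords)]
    return torch.cat(feats, dim=-1)

xzw = torch.rand(8, 3)            # (x, z, omega) samples
enc = positional_encoding(xzw)    # shape (8, 3 + 2*4*3) = (8, 27)
```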
## 3 Numerical examples
We first evaluate the proposed method on synthetic data, including single-source and multiple-source event cases. The synthetic data are generated for the Overthrust model (Figure 3a). The model size is 501\(\times\)161 with a spatial interval of 25 m. During the training for all synthetic examples, we use an MLP with four hidden layers, with 128, 64, 32, and 16 neurons in the layers from shallow to deep, respectively, to fit the data, while for the PDE fitting, we use a larger MLP with six hidden layers, with 256, 256, 128, 128, 64, and 64 neurons from shallow to deep, respectively. We also include the positional encoding for \(x\), \(z\), and \(\omega\) with a level of 4. The frequency range for both tasks is from 3 to 12 Hz, and we choose 12 Hz as the reference frequency.
### A single-source event case
In this subsection, we first test the proposed method on a toy problem that has only one source event. The source is placed at (5.0, 3.0) km. We first do the forward modeling on the Overthrust model for this source and record the data with 50 randomly placed receivers on the surface, shown in Figure 3a. We train the NN for data fitting with positional encoding for 20000 epochs using an Adam optimizer with a learning rate of 1e-3. Then we train another NN for PDE fitting with 80000 random samples for 6000 epochs, freezing the parameters of the data-fitting NN. We evaluate the NN on a uniform grid with 501\(\times\)161 grid points and an interval of 25 m to obtain the multi-frequency wavefields. As we know the source ignition time (zero) in this case, we directly obtain the
Figure 2: The pipeline of the proposed method. There are two branches: the data fitting branch (**top**) and the PDE fitting branch (**bottom**). The recorded data \(d(x,t)\) are first transformed to the frequency domain and then used to train the data NN. The data NN is then combined into the PDE fitting branch to train the PDE NN. The outputs of the PDE NN are then transformed to time-domain snapshots using the inverse FFT, where the red box denotes the source image where the energy focuses. The module PE denotes the positional encoding with sinusoidal functions [Huang et al., 2021].
source image by the summation of the wavefield over frequencies (equal to the time-domain snapshot at zero time). As shown in Figure 3b, the source location in the image is consistent with the ground truth (black star in Figure 3a), which demonstrates the accuracy of the proposed method. Next, we test more complex scenarios.
### A more complex case including a triple-source event
In a more realistic scenario, there might be multiple sources distributed at variable locations in the subsurface. Thus, we test the proposed method in this more complex situation. The black stars in Figure 4a show the source locations, and we use forward modeling from these sources to obtain the data, recorded by varying numbers of random receivers. With the same hyperparameters and training process mentioned in the last section, we tested the proposed method with data received at 50 (Figure 4a), 40 (Figure 4c), 30 (Figure 4e), and 20 (Figure 4g) random receivers. As shown in Figures 4b, d, f, and h, the source locations are accurately extracted by the proposed method. Even in the case of only 20 random receivers placed along the 12.5 km distance, we can still get focused source imaging results. This demonstrates the capability of the proposed method in handling sparse and irregular observations, showing good potential for application to field data.
Before that, however, we need to test the performance of the proposed method in the passive seismic monitoring scenario, as this is the more common case. The above experiments are built on an active seismic scenario in which the source is ignited at zero time. For passive seismic data, we often do not know the source ignition time. Thus, instead of the direct summation of the wavefields over frequencies, we inverse Fourier transform the wavefields to obtain time-domain snapshots. The snapshot that shows the largest focusing of energy is often regarded as marking the ignition time of the source [Artman et al., 2010]. Thus, with a similar setting to the previous experiments, we execute the forward modeling from the triple sources but ignite the source at 5 s, which we treat as unknown. We use the proposed pipeline to obtain the time-domain snapshots, and we show several snapshots from 4.75 to 5.5 s in Figure 5. We observe that the energy focuses best in the snapshot at 5.00 s, which is consistent with the ignition time. The locations where the energy focuses are also consistent with the ground truth of the source locations.
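The passive-imaging step just described can be sketched as follows: the predicted multi-frequency wavefields are transformed to time-domain snapshots with an explicit inverse Fourier sum over the available band, and the ignition time is picked as the snapshot with the strongest focusing (scored here, as one simple choice, by the maximum absolute amplitude).

```python
import numpy as np

def image_source(wavefield_f, freqs, times):
    """wavefield_f: complex array (n_freq, nz, nx) of predicted wavefields.
    Returns time-domain snapshots and the time of strongest energy focusing."""
    phase = np.exp(2j * np.pi * np.outer(times, freqs))        # (n_t, n_freq)
    snaps = np.real(np.tensordot(phase, wavefield_f, axes=1))  # (n_t, nz, nx)
    focus = np.abs(snaps).max(axis=(1, 2))                     # focusing score per time
    return snaps, times[focus.argmax()]

freqs = np.arange(3.0, 12.5, 0.5)                              # 3-12 Hz band
wf = np.random.randn(freqs.size, 161, 501) + 1j * np.random.randn(freqs.size, 161, 501)
snaps, t0 = image_source(wf, freqs, np.arange(4.75, 5.55, 0.05))
```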
## 4 The field test on the hydraulic fracturing data
The above synthetic examples have demonstrated the potential for a field data application. Thus, in this section, we further test the method on field vertical-component data acquired in the Arkoma Basin in North America. The data were recorded through passive seismic monitoring during a hydraulic fracturing stimulation for a shale gas reservoir [Anikiev et al., 2014, Stanek and Eisner, 2017, Wang et al., 2022a]. Figure 6a shows the whole region covering a square area of 5\(\times\)5 km\({}^{2}\) with a maximum depth of 2.45 km. There were 75 events (red dots) captured during the four days of monitoring, recorded by 911 receivers (blue dots) on the surface. The 3D P-wave velocity (Figure 6b) was obtained from an active seismic experiment in the region and adjusted using well-log information [Anikiev et al., 2014]. In this case, we do not have the exact zero-time information of the recordings. Here, to demonstrate the performance of the proposed method on the field data, we select two events based on the signal-to-noise ratio of the recorded event and the geometry of the line relative to the event, so that they comply with the 2D implementation of the proposed method. The first event is extracted from
Figure 3: a) shows the Overthrust velocity model, the source event location (denoted by the black star), and the random receivers (denoted by the red triangles); b) is the source image obtained with the proposed method.
the recordings along line 10, denoted by the black box, shown in Figure 7. The corresponding 2D velocity profile and recorded data for the event are shown in Figure 8. Prior to applying our PINN imaging, we apply several pre-processing steps to the data, including a non-local means filter [Buades et al., 2005] and a bandpass filter.
Figure 4: The source location results using various numbers of random receivers, e.g. 50 (a), 40 (c), 30 (e), and 20 (g) receivers denoted by the red triangles. The sources denoted by the black stars are distributed at different locations: \((5.0,2.5)\), \((7.5,3.0)\), and \((10.0,3.5)\) km. The source location results corresponding to various numbers of receivers by the proposed method are shown in b), d), f), and h).
Figure 5: Time-domain snapshots of wave propagation at various times, e.g., a) 5.50 s, b) 5.25 s, c) 5.00 s, and d) 4.75 s, where the energy of the wavefield focuses best at 5.0 s (c).
After preprocessing, we start the proposed pipeline (Figure 2). We transform the recorded data to the frequency domain and train an NN of size {18, 128, 2} with positional encoding, where L=4, to fit the 3-12 Hz recorded data on the surface. As noted earlier, the data-fitting network can cope with a large fraction of missing receivers. Here, we randomly pick 70 receivers out of 122 for the training. The benefit of this coarse fitting is to allow for a smooth representation of the data that captures the
Figure 8: The original recorded data for a single event a) and the 2D P-wave velocity profile b), where the source location provided by the data provider is denoted by a red star.
Figure 6: Passive seismic monitoring in Oklahoma, a) is the recording geometry in this region where the blue dots denote the receivers, and red dots denote the locations of the previously estimated events projected on the 2D plane; b) is the corresponding 3-D P-wave velocity.
Figure 7: The selected event 8 and the recording line. a) is the original geometry, where the black box denotes the selected recording line, and the black star denotes the location of the event; b) is the zoom of the corresponding geometry.
key moveout information. After 4000 epochs of supervised training with an Adam optimizer and a learning rate of 1e-3, the predicted data are shown in Figure 10. We can clearly observe that the main features of the original data are well reconstructed by the neural network, provided we ignore the sharp changes, which could be due to noise (wavefields are inherently smooth).
The next step is to perform the PDE fitting branch in Figure 2. Here, we use a larger NN of size {27, 256, 256, 128, 128, 64, 64, 2} with positional encoding, where L=4, to ensure the NN has enough capacity to represent this high-dimensional wavefield. The loss function here is the causality implementation of equation 5. After training with an Adam optimizer using a learning rate of 1e-3 for 1500 epochs, we obtain the frequency-domain wavefield. An inverse Fourier transform provides the time-domain snapshots (source imaging). Figure 11 shows these snapshots, in which the source image focuses at a location consistent with the provided one (red star) at an estimated time of around 0.7 s.
In the proposed method, we do not need label information for the source location during training, as the supervision comes from the governing equations and the data. For supervised approaches, on the other hand, errors in the labels would result in an error-prone trained model. For example, for event 62 on line 2 (Figures 12 and 13), the source image result is shown in Figure 14, where the image focuses at a location different from the provided source location (i.e., the label that would be used in
Figure 10: A comparison between the data fitting result and the 3 Hz recorded data for the first 100 receivers. The blue line denotes the recorded data, while the orange line is the data predicted by the NN.
Figure 9: The recording of event 8. a) is the original recording with recording times from 0.8 s to 1.6 s; b) is the corresponding processed data from 3 Hz to 12 Hz after non-local means filtering, where the patch size used for denoising is 15, the maximal distance in pixels over which to search for patches is 21, and the cut-off distance is 1.0.
supervised learning). We simulate the data from the provided source location label using the 2D P-wave velocity profile and a finite-difference wave-equation solver. We realize that the simulated records and the recorded data are very different in curvature and shape (Figure 15). Using the simulated data, we image the source with our PINN approach and obtain the images in Figure 16. The image focuses at the location of the source. This demonstrates that the provided source location is not reliable, whereas our method is consistent and forms an adjoint to the forward modeling, which is based on a finite-difference method.
Figure 11: The time-domain snapshots of the wavefield at various times. The wavefield focuses best at 0.7 s. The red box denotes the location where the energy is focused, which is consistent with the provided location (red star).
Figure 12: The selected event 62 and the recording line 2. a) is the original geometry, where the black box denotes the selected recording line, and the black star denotes the location of the event; b) is a zoom of the corresponding geometry.
## 5 Discussion
In this paper, we showed the results of our novel direct source imaging method using PINNs with hard constraints on both synthetic and field data. The functional form of NNs offers flexibility for irregular and sparse recordings. The synthetic demos and field examples demonstrate the effectiveness and potential for source imaging. In the following subsections, we will discuss the causality implementation, the drawbacks of the method, and further potential improvements.
### The performance with/without causality implementation
As claimed earlier, using the PDE loss with causality can accelerate the convergence and improve the final predictions. Here, we take the triple-source case with 20 random receivers to demonstrate the benefits of the causality implementation. We use the same velocity model to generate the data, the
Figure 14: The time-domain snapshots of the wavefield at various times. The red box denotes the location where the energy is focused but is inconsistent with the given label.
Figure 13: The original shot gathers of line 2 due to event 62, a) and the 2D P-wave velocity profile b), where the red star denotes the source location given by the data provider.
same network configuration but with 3000 epochs in the training, and the same workflow to obtain the source imaging results. For the causality implementation, we set \(\epsilon_{0}=1e-7\) and \(\lambda=1e-5\). Figure 17 shows the loss curves of the training process with and without the causality modification. The convergence with the causality implementation is much faster than with the vanilla implementation. The prediction after 3000 epochs (Figure 18) with the causality implementation is better than the one without it: the reconstructed source image (Figure 18b) has better energy focusing.
Figure 16: The time-domain snapshots of the wavefield at various times with the proposed method applied to the simulated data (Figure 15b).
Figure 15: a) is the observed data, while b) is the data simulated on the 2D velocity profile (Figure 13b) from the event at the location given by the data provider.
### Resolution
The proposed method still has room for further development in terms of resolution. This can be achieved by extending the frequency range used in the proposed method. High frequencies have been a thorn in the side of PINNs. However, with the reference frequency approach and the frequency extension strategy (Huang and Alkhalifah, 2022; Huang et al., 2022), we can gear the approach to work better. Increasing the frequency range used in the data fitting and PDE fitting would increase the resolution of the final source image.
Figure 17: The loss curves of training with and without the causality implementation. The purple line is the version without causality implementation while the black line is with causality implementation.
Figure 18: The source image a) with the conventional loss, and b) with the causal loss of equation 6.
### Efficiency
Although the workflow can deal with irregular and sparse observations and shows highly reliable performance, the main drawback of PINNs at present is efficiency. The cost is rather high, especially considering that the wavefield corresponds to a particular velocity and source location. However, PINNs are new compared to, for example, finite-difference methods for solving the wave equation, so speed-ups are a matter of time. Even with the slower implementation, the flexibility of the PINN's functional representation of the wavefield (no mesh is required), which allows it to handle irregular geometries, may make it worthwhile to explore and utilize.
## 6 Conclusion
We present a novel direct microseismic imaging framework utilizing physics-informed neural networks with hard constraints. It allows us to image the source thanks to the neural network's functional representation and its interpolation capabilities. Meanwhile, we propose a loss function with causality with respect to depth to accelerate the convergence and improve the predictions of the NN. With only 20 random receivers on a 12.5 km lateral stretch, the multiple-source event is stably and accurately located. We successfully applied the method to the Oklahoma Arkoma Basin hydraulic fracturing data, showing its effectiveness and its potential for large-scale source location.
## 7 Acknowledgments
The authors thank KAUST and the DeepWave Consortium sponsors for supporting this research. We thank Microseismic Inc. for the use of the Arkoma data, and Hanchen Wang and Fu Wang for discussing the field data preprocessing. We would also like to thank KAUST for its support and the SWAG group for the collaborative environment. This work utilized the resources of the Supercomputing Laboratory at King Abdullah University of Science and Technology (KAUST) in Thuwal, Saudi Arabia. The datasets in this paper are available at [https://doi.org/10.5281/zenodo.7543192](https://doi.org/10.5281/zenodo.7543192).
|
2301.08497 | Hierarchical Classification of Variable Stars Using Deep Convolutional
Neural Networks | The importance of using fast and automatic methods to classify variable stars
for large amounts of data is undeniable. There have been many attempts to
classify variable stars by traditional algorithms like Random Forest. In recent
years, neural networks as classifiers have come to notice because of their
lower computational cost compared to traditional algorithms. This paper uses
the Hierarchical Classification technique, which contains two main steps of
predicting class and then subclass of stars. All the models in both steps have
same network structure and we test both Convolutional Neural Networks (CNN) and
Recurrent Neural Networks (RNN). Our pre-processing method uses light curves
and period of stars as input data. We consider most of the classes and
subclasses of variable stars in OGLE-IV database and show that using
Hierarchical Classification technique and designing appropriate preprocessing
can increase accuracy of predicting smaller classes, ACep and T2Cep. We obtain
an accuracy of 98% for class classification and 93% for subclasses
classification. | Mahdi Abdollahi, Nooshin Torabi, Sadegh Raeisi, Sohrab Rahvar | 2023-01-20T09:57:08Z | http://arxiv.org/abs/2301.08497v1 | # Hierarchical Classification of Variable Stars Using
###### Abstract
The importance of using fast and automatic methods to classify variable stars for large amounts of data is undeniable. There have been many attempts to classify variable stars with traditional algorithms like Random Forest. In recent years, neural networks as classifiers have come to notice because of their lower computational cost compared to traditional algorithms. This paper uses the Hierarchical Classification technique, which contains two main steps: predicting the class and then the subclass of stars. All the models in both steps have the same network structure, and we test both Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). Our pre-processing method uses the light curves and periods of stars as input data. We consider most of the classes and subclasses of variable stars in the OGLE-IV database and show that using the Hierarchical Classification technique and designing appropriate pre-processing can increase the accuracy of predicting the smaller classes, ACep and T2Cep. We obtain an accuracy of 98% for class classification and 93% for subclass classification.
Keywords: Variable Stars, Hierarchical Method, Convolutional Neural Networks. DOI: 10.22128/jia.2018.106
## 1 Introduction
Variable stars are important objects in astrophysics and cosmology. Most importantly, there is a well-defined relationship between the period and absolute magnitude of Cepheid stars, which are relatively young, massive, radially pulsating stars. These stars are used as standard candles for measuring cosmological distances and calibrating Type Ia supernovae [1, 2].
Cepheids, as distance indicators, also provide essential information on the size of our galaxy [3]. Type II Cepheids are also useful for studying stellar evolution, as each subclass of this class is in a different stage of stellar evolution [4]. RR Lyrae stars can be used for studying the early history of galaxies. They are one of the oldest observable populations of stars, and the chemical and dynamical evolution of this type of star provides a better understanding of stellar evolution [5].
In order to classify variable stars, astronomers construct a set of features that describe the light curves and extract them using certain algorithms. Debosscher, J. et al. [6] used the Lomb-Scargle periodogram to find 28 properties of variable stars, such as the median and mean magnitude of the light curves. Kim, Dae-Won et al. [7] classified EROS-2 LMC variable stars by
using the Random Forest (RF) algorithm with multiple features extracted from the light curves. The features included the period, period S/N, color, and magnitude, among others, extracted using algorithms like those of Lomb [8] and Scargle [9]. They tested more than 30 features and selected 22 features to train their classifier based on the feature importance. They also computed the feature importance and found that the period of variable stars is the most important property for classification by RF.
Nun et al. [10] published the FATS package to extract features from photometry data automatically. Kim & Bailer-Jones [11] designed the UPSILON package, which extracts the features and then predicts the class of variable stars. This package works with an RF classifier that was trained with OGLE and EROS-2 data, and its performance has been tested on the MACHO, LINEAR, and ASAS databases. All these works use sophisticated pre-processing algorithms to extract the relevant information about the variable stars, which leads to high computational cost. This is not practical for large cosmological datasets such as those from the Large Synoptic Survey Telescope (LSST), which will gather approximately 20 TB of data every night [12]. For realistic applications of these models to large astronomical data, it is essential to use efficient and low-cost pre-processing techniques.
In light of recent advances in deep learning, neural networks make a good candidate for the classification of variable stars. Using neural networks can save time by using transformed data instead of extracted features as the networks' input. Transformed data here refers to data that is reshaped into an appropriate input format without any complex calculations like the algorithms used for feature extraction.
Aguirre et al. [13] and Becker et al. [14] proposed models for the classification of OGLE-III variable stars using Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), respectively. They did not use any feature extraction. Instead, they transformed the light curves into matrix representations with differences in time and magnitude as elements. Regarding the data, Aguirre et al. [13] did not consider some classes like T2Cep and ACep, and Becker et al. [14] counted the T2Cep class as the Cep class and ignored ACep variable stars.
Machine learning models tend to train poorly when the training data is imbalanced, i.e., when the distribution of stars across classes is biased. For such problems, machine learning models tend to overfit the more populated classes and ignore the smaller classes. For instance, in the case of variable stars, ACep makes up only about 0.83% of the samples. This means that if the model ignored ACep completely and classified all ACep stars as any other class, it would incur an error of less than 1% in terms of accuracy. In many cases, we are interested in the less populated classes, which correspond to rare events. For instance, as explained before, the detection/identification of ACep and T2Cep is of particular interest.
In recent years, hierarchical methods have come to notice for classification with machine learning tools. Sanchez-Saez et al. [15] used hierarchical classification to classify several types of objects in the sky, especially variable stars. They selected 152 features for each star, mostly defined in previous works like Kim, Dae-Won et al. [7] and Nun et al. [10]. Rimoldini et al. [16] also used hierarchical classification with 5 steps, using different features in each step to distinguish stars.
We use the hierarchical technique and train our neural networks on the OGLE-IV database [17]. We only use the I-band photometry of the stars, i.e., time series containing the magnitude at different observation times, and the period of each light curve [31]. For comparison, we have also trained an RNN-based model. We find that the CNN has a lower computational cost and better performance, especially for less populated classes.
Knowing what subclass a star belongs to can be of importance. Therefore, the structure of our model consists of two general steps: predicting the class and then the subclass of the stars.
Subclasses provide information about stellar evolution. For instance, each subclass of Type II Cepheids is in a very short-lived stage between two steps of stellar evolution, located in separate areas of the Hertzsprung-Russell diagram. Because of their small populations, some of these subclasses cannot be classified without using the Fourier transform or other period-related information [16]. So, in this work, we use the period of each star to obtain phase-folded light curves as input, in addition to using the period itself; a minimal folding sketch follows.
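As an illustration (our sketch, with an assumed bin count and NaN fill for empty bins), the folding and binning could be implemented as:

```python
import numpy as np

def fold_and_bin(t, mag, period, n_bins=50):
    """Fold a light curve on its catalogued period and average the magnitudes
    in fixed phase bins, so every star yields an input vector of equal length."""
    phase = (t % period) / period                       # phase in [0, 1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    return np.array([mag[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(n_bins)])

t = np.sort(np.random.uniform(0, 1000, 400))            # observation epochs (days)
period = 3.2
mag = 15.0 + 0.3 * np.sin(2 * np.pi * t / period) + 0.02 * np.random.randn(t.size)
x = fold_and_bin(t, mag, period)                        # fixed-length network input
```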
This paper is organized as follows. Section 2 presents the dataset used for training and testing. In section 3, we propose the method designed for pre-processing. Section 4 presents the hierarchical classification and the classifiers used in this technique. In section 5, we compare our results with some of the papers on this subject, and we also specifically consider the results for the less populated classes. Finally, in section 6, we present the conclusion and future work.
## 2 Data
We used the OGLE-IV [17] variable stars database for training and testing 1. The data contain 7 classes and 16 subclasses in total (see Table 1 and Table 2 for information on the classes and sub-classes used in this paper).
Footnote 1: Other references related to the OGLE database: [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]
For each star, the data include the observation time in Julian days, the magnitude, and the magnitude error bar. The initial data show fluctuations due to the photometric error, the periodic nature of variable stars, and some gaps due to the sampling of the light curves. In this database, the populations of the different classes are not balanced, and machine learning techniques tend to favor more populated classes; this means that the probability of misclassification increases for stars of less populated classes. To solve this problem we implement two solutions: the first is balancing the data, and the second is using the hierarchical technique presented in section 4. To balance the population of classes, a maximum of 10,000 random stars was chosen from each class. We also removed some subclasses with fewer than 15 stars, like the pWVir and BLHer1O subclasses of T2Cep. We used a total of 43,683 stars from the OGLE database to train and test the neural network model: 70% of the data were used for training, 10% for validation, and 20% for testing the performance.
\begin{table}
\begin{tabular}{l r r r} \hline Class (Acronym) & Num. Of Subclasses & Num. of stars & Frequency(\%) \\ \hline Eclipsing and Binary Stars (ECL) & 2 & 9945 & 22.76 \\ RR Lyrae (RRLYR) & 2 & 9519 & 21.79 \\ Long Period Variables (LPV) & 3 & 9914 & 22.69 \\ Delta Scuti (DSCT) & 1 & 2678 & 6.13 \\ Cepheid (Cep) & 3 & 9478 & 21.69 \\ Type II Cepheid (T2Cep) & 3 & 1783 & 4.08 \\ Anomalous Cepheid (ACep) & 2 & 366 & 0.83 \\ \hline Total & 16 & 43683 & \\ \hline \end{tabular}
\end{table}
Table 1: This table includes information on each class of the data used in this work. Frequency is the ratio of the number of stars in each class to the total population of stars.
\begin{table}
\begin{tabular}{l l r r} \hline Class & Subclass & Num. of stars & Frequency(\%) \\ \hline ECL & NC & 8209 & 20.30 \\ ECL & C & 1736 & 4.29 \\ RRLYR & RRab & 7095 & 17.54 \\ RRLYR & RRc & 2424 & 5.99 \\ LPV & Mira & 188 & 0.46 \\ LPV & OSARG & 8455 & 20.91 \\ LPV & SRV & 1271 & 3.14 \\ DSCT & SINGLE & 2678 & 6.62 \\ Cep & F & 5315 & 13.14 \\ Cep & 1 & 3469 & 8.58 \\ Cep & 12 & 694 & 1.72 \\ T2Cep & BLHer & 747 & 1.85 \\ T2Cep & RVTau & 346 & 0.86 \\ T2Cep & Wvir & 690 & 1.71 \\ ACep & F & 246 & 0.61 \\ ACep & 1 & 120 & 0.30 \\ \hline Total & & 43683 & \\ \hline \end{tabular}
\end{table}
Table 2: This table contains the number of stars in each subclass of the OGLE dataset used for training, validating, and testing the performance of the neural networks. Frequency is the ratio of the number of stars in each subclass to the total number of variable stars.
Figure 1: Schematic of the pre-processing steps. In the first step, we take the raw data from the OGLE-IV dataset. In the second step, we fold the raw data using its specific period provided by the OGLE catalogue. In the third step, we bin the data so that all light curves have the same length. Then, we add the period as an important feature to the input data. As an example, the pre-processing steps for OGLE-LMC-CEP-0004 are shown. For details of the pre-processing, see section 3.
## 3 Pre-processing input data
The raw data from different surveys differ in sampling rate and even in the number of observation points, due to complications such as weather and the Moon's effects. To make the models independent of the sampling rate, the pre-processing method produces normalized and binned light curves.
To start the pre-processing, we first use the period of each variable star, provided by the OGLE catalogue, to fold the raw data using the "lightkurve" package [32]. This step yields the periodic behavior of the star in phase space. Then, each phase-folded light curve is divided into 50 equal bins to make the number of data points the same, and the value of each bin is set to the average of the values of the points in it. During this process, some bins end up empty because no data points fall in them. To address this problem, we replace the empty values using linear interpolation.
To find the best number of bins, we should consider that increasing the number of bins increases the number of empty bins, while decreasing it discards details of the light curve. We tested 25, 50, and 75 bins and found that 50 gives the best classification result. We also normalize the light curve by the mean of its magnitude, which enhances the performance of the neural network model.
The folded light curve has the disadvantage of losing the period information, which is a critical feature for the classification of variable stars. To solve this problem, the value of the period is provided as a separate feature to the model. Fig. 1 shows the pre-processing pipeline.
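The folding-and-binning step can be written compactly; the sketch below is a minimal NumPy rendition of it, assuming the period comes from the OGLE catalogue and using the 50-bin choice selected in this section (the synthetic light curve at the end is purely illustrative).

```python
import numpy as np

def fold_and_bin(time, mag, period, n_bins=50):
    """Phase-fold a light curve, average it in equal phase bins,
    fill empty bins by linear interpolation, and mean-normalize."""
    phase = (time % period) / period            # phase in [0, 1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(phase, edges) - 1         # bin index of each point
    binned = np.full(n_bins, np.nan)
    for b in range(n_bins):
        in_bin = mag[idx == b]
        if in_bin.size > 0:
            binned[b] = in_bin.mean()
    # linear interpolation over empty bins
    centers = 0.5 * (edges[:-1] + edges[1:])
    empty = np.isnan(binned)
    binned[empty] = np.interp(centers[empty], centers[~empty], binned[~empty])
    return binned / binned.mean()               # normalize by mean magnitude

# toy usage with a synthetic sinusoidal variable
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 300))
m = 15.0 + 0.3 * np.sin(2 * np.pi * t / 2.5) + rng.normal(0, 0.02, t.size)
x = fold_and_bin(t, m, period=2.5)              # shape (50,)
```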
Figure 2: Schematic of the hierarchical classification. The hierarchical classifiers are shown in the blue box. Classifier(a) and Classifier(b) are built to predict the class of the stars, and the other classifiers, named after their classes, are designed to predict subclasses. The red box shows the targets.
## 4 Model
We use a hierarchical classification technique, i.e., we use several classifiers to achieve better accuracy. The first classifier, which we refer to as Classifier(a), separates ECL, RRLYR, LPV, DSCT, and a 5th class containing the Cep, ACep, and T2Cep classes. More specifically, for the first level of classification, we group together the three classes that are less populated and similar. When the first classifier outputs the 5th class, a second classifier is used to specify which subgroup, i.e., Cep, ACep, or T2Cep, the star belongs to. Up to this point, the combination of the two classifiers makes a classifier that can identify the class of the variable stars. Next, for each class that is composed of subclasses, a new classifier is trained to identify them.
A schematic picture of the structure of our hierarchical classification model is depicted in Fig. 2 (the acronyms used for the classes are introduced in Table 1). This is in contrast to the models used in [14], where one classifier is directly trained to classify subclasses, and the subclasses are then grouped to find the classification result for the classes. Having several classes with different subclasses increases the probability of misclassification, and the hierarchical classification technique helps to reduce these errors in multi-class classification. Specifically, this structure limits instances where stars from less populated classes such as ACep are assigned to more populated classes like RRLYR; a sketch of the resulting two-step prediction is given below.
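As an illustration, the two-step prediction logic of Fig. 2 can be sketched as follows; `clf_a`, `clf_b`, and `subclass_clfs` are hypothetical, already-trained classifiers exposing a scikit-learn-style `predict` method, not objects defined in this paper.

```python
import numpy as np

def hierarchical_predict(x, clf_a, clf_b, subclass_clfs):
    """x: pre-processed feature vector of one star (binned curve + period)."""
    x = np.asarray(x).reshape(1, -1)
    level1 = clf_a.predict(x)[0]          # ECL, RRLYR, LPV, DSCT or CEP-LIKE
    if level1 == "CEP-LIKE":              # the grouped 5th class
        star_class = clf_b.predict(x)[0]  # Cep, T2Cep or ACep
    else:
        star_class = level1
    # dispatch to the subclass classifier trained for this class, if any
    sub_clf = subclass_clfs.get(star_class)
    subclass = sub_clf.predict(x)[0] if sub_clf is not None else None
    return star_class, subclass
```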
The classifiers used in the hierarchical model are identical in their network structure. The model is composed of a 1D convolutional layer with eight channels followed by three fully connected layers. We used the "Keras" library [33] to design our classifiers.
CNNs are a type of neural network often used for image processing. CNN layers exploit properties of the input data such as locality or translational invariance for parameter sharing [34]. This reduces the number of parameters of the neural network, which lowers the training cost and also makes the model less susceptible to over-fitting [34]. The schematic structure of the network is shown in Fig. 3. As shown in the figure, the model builds 8 convolutional channels from the input data. Then, the flatten layer reshapes the data for the fully connected neural network, which has three layers with different numbers of nodes. For details of the CNN model, see Table 7.
For comparison, we also try another type of classifier, known as an RNN [35]. RNN models are typically used for the classification of sequential data like photometry, which is why one may expect better performance from an RNN. In this work, we use RNN models with two layers of simple RNNs with 3 and 32 units, respectively. We also add two dense layers to increase the number of trainable parameters and obtain a more flexible model. See Table 8 for more information. However, we find that the RNN does not offer any advantage over the CNN-based models; in fact, the CNN model provides faster classification with the same accuracy.
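For concreteness, a hedged Keras sketch of the CNN classifier described above is given below: one 1D convolution with eight channels followed by three dense layers, with the period concatenated as an extra input. The kernel size, dense-layer widths, and optimizer here are illustrative assumptions; the exact values are those of Table 7.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn_classifier(n_classes, n_bins=50):
    curve_in = keras.Input(shape=(n_bins, 1), name="binned_curve")
    period_in = keras.Input(shape=(1,), name="period")
    x = layers.Conv1D(8, kernel_size=3, activation="relu")(curve_in)
    x = layers.Flatten()(x)
    x = layers.Concatenate()([x, period_in])     # append the period feature
    x = layers.Dense(64, activation="relu")(x)   # three fully connected layers
    x = layers.Dense(32, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs=[curve_in, period_in], outputs=out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_classifier(n_classes=5)   # e.g., Classifier(a) in Fig. 2
```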
\begin{table}
\begin{tabular}{c c c} \hline Model & Accuracy & Training Time \\ & & (min) \\ \hline CNN & 0.93 & \(\sim\) 24 \\ RNN & 0.90 & \(\sim\) 324 \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of the accuracy and training time of each type of neural network used to classify variable stars.
\begin{table}
\begin{tabular}{l l c c} \hline Class & Subclass & CNN & RNN \\ & & Precision-Recall & Precision-Recall \\ \hline Cep & F & 0.98 - 0.98 & 0.97 - 0.90 \\ Cep & 1 & 0.87 - 0.98 & 0.82 - 0.90 \\ Cep & 12 & 0.72 - 0.45 & 0.67 - 0.28 \\ T2Cep & BLHer & 0.94 - 0.88 & 0.90 - 0.83 \\ T2Cep & RVTau & 0.86 - 0.87 & 0.84 - 0.81 \\ T2Cep & Wvir & 0.95 - 0.89 & 0.75 - 0.95 \\ ACep & F & 0.92 - 0.98 & 0.48 - 0.98 \\ ACep & 1 & 0.43 - 0.25 & 0.29 - 0.67 \\ \hline \end{tabular}
\end{table}
Table 4: Classification results for the less populated subclasses using the CNN and RNN classifiers in the hierarchical model.
Figure 3: Schematic of the convolutional neural network. In the simple neural network part, \(h_{i}\) is the number of nodes in each layer. In the final layer, n-class is the number of classes the model should classify, e.g., 2 for the ECL classifier. The words in parentheses are the names of the activation functions, e.g., ReLU. The blue rectangle is a filter that forms a convolutional layer.
## 5 Classification results
We trained and tested our models on Google Colaboratory, an online programming environment equipped with 12 GB of RAM and an NVIDIA Tesla T4 GPU.
We have tested two types of models (CNN and RNN). For evaluation, we consider two main metrics: the classification accuracy and the time required to train the models. Table 3 shows the accuracy and time cost of our models, indicating that the CNN model performs better and is faster.
One of the key aspects of our model is its ability to classify classes and subclasses with smaller populations. To evaluate the performance on unbalanced datasets like ours, accuracy is not enough; we also need to measure how well samples from smaller classes are detected. For this, we use "recall", which, for each class, is the fraction of the samples from that class that are correctly classified. For instance, Cep has 1916 samples in the test set and our model classifies 1863 of them as Cep, which means its recall for Cep is 0.97.
Another metric that should be considered for imbalanced datasets is "precision", which, for each class, is the fraction of samples predicted as that class that actually belong to it. For instance, our model predicted 1907 samples from the test set to be Cep, out of which only 1863 stars are actually Cep. This means that the precision for the Cep class is 0.97. The performance metrics are discussed in more detail in appendix B.
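The worked numbers above can be reproduced with scikit-learn; the toy labels below are arranged to match the Cep counts quoted in the text (1916 true Cep, 1907 predicted Cep, 1863 correct), with the remaining entries filled in arbitrarily.

```python
from sklearn.metrics import classification_report, confusion_matrix

y_true = ["Cep"] * 1916 + ["RRLYR"] * 100
y_pred = ["Cep"] * 1863 + ["RRLYR"] * 53 + ["Cep"] * 44 + ["RRLYR"] * 56
# recall(Cep)    = 1863 / 1916         ~ 0.97 (correctly recovered Cep stars)
# precision(Cep) = 1863 / (1863 + 44)  = 1863 / 1907 ~ 0.97
print(classification_report(y_true, y_pred, digits=2))
print(confusion_matrix(y_true, y_pred, labels=["Cep", "RRLYR"]))
```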
Table 4 shows a comparison of the classification results of the CNN- and RNN-based models for the less populated classes. For the performance on all classes and subclasses with the CNN model, see Tables 5 and 6.
These two tables show that, based on precision and recall, the models perform better for more populated classes than for smaller ones.
In addition to the metrics mentioned above, we can evaluate our model using confusion matrices. The confusion matrices of the classes and subclasses are shown in Figs. 4 and 5 for the CNN-based model discussed in this paper.
The more diagonal the matrix, the more accurate the classification. Inspecting the confusion matrix allows us to find the misclassifications made by the model: the off-diagonal entries correspond to misclassified classes and subclasses.
Considering the confusion matrices, we find that misclassifications are more frequent in some subclasses than in others. The main reason for these mistakes is the similarity of the light curves and periods of two subclasses. Subclasses with small populations can also be hard to predict, because there is not enough data to train the network.
As an example, 62% of ACep (subclass 1) stars have been predicted as RRab stars. The reason for this is the similarity between the periods and light curves of these two types of stars.
\begin{table}
\begin{tabular}{l c c} \hline Class & Precision & Recall \\ \hline ECL & 1.00 & 0.98 \\ RRLYR & 0.98 & 0.99 \\ LPV & 0.98 & 1.00 \\ DSCT & 0.99 & 1.00 \\ Cep & 0.98 & 0.97 \\ T2Cep & 0.94 & 0.90 \\ ACep & 0.82 & 0.74 \\ \hline weighted avg & 0.98 & 0.98 \\ \hline \end{tabular}
\end{table}
Table 5: Results of the class classification using the CNN model.
\begin{table}
\begin{tabular}{l l c c} \hline Class & Subclass & Precision & Recall \\ \hline ECL & NC & 0.91 & 0.92 \\ ECL & C & 0.68 & 0.57 \\ RRLYR & RRab & 0.98 & 1.00 \\ RRLYR & RRc & 0.96 & 0.98 \\ LPV & MIRA & 1.00 & 0.86 \\ LPV & OSARG & 0.96 & 0.97 \\ LPV & SRV & 0.79 & 0.84 \\ DSCT & SINGLE & 0.99 & 1.00 \\ Cep & F & 0.98 & 0.98 \\ Cep & 1 & 0.87 & 0.93 \\ Cep & 12 & 0.72 & 0.45 \\ T2Cep & BLHer & 0.94 & 0.88 \\ T2Cep & RVTau & 0.86 & 0.87 \\ T2Cep & Wvir & 0.95 & 0.89 \\ ACep & F & 0.92 & 0.98 \\ ACep & 1 & 0.43 & 0.25 \\ \hline weighted avg & & 0.93 & 0.93 \\ \hline \end{tabular}
\end{table}
Table 6: Results of the subclass classification using the CNN model.
Figure 4: Confusion matrix of the classification using the CNN model and the pre-processing method discussed in section 3, containing only classes.
Figure 5: Confusion matrix of the classification using the CNN model and the pre-processing method discussed in section 3. The black squares highlight the seven classes of the top level (from top to bottom: ECL, RRLYR, LPV, DSCT, Cep, T2Cep, and ACep, respectively).
To improve the prediction of ACep stars, information on distance and absolute magnitude is needed [36].
By comparing the confusion matrices, we can locate misclassifications in the second step of the hierarchical classification. For instance, in Fig. 4, 98% of ECLs are predicted with their actual label; however, in the subclass confusion matrix (Fig. 5), we find misclassifications in both subclasses of ECL, especially the second one. This means the class classifiers work well to distinguish ECLs from other classes, but the subclass classifiers in the second step cannot classify subclasses as well as the first step does. This happens more in classes and subclasses with imbalanced data and leads to lower accuracy for subclass classification than for class classification.
## 6 Conclusion
We present a hierarchical model for the classification of variable stars. The model uses two neural networks to identify the class of a star; each class is then passed to a new neural network that identifies the subclasses of the corresponding class. With this architecture, we manage to classify less populated classes/sub-classes with high performance (see Tables 5 and 6).
We use the methods described in section 3 to turn phase-folded light curves into suitable input data. Although we obtain satisfying results for class and subclass classification, this pre-processing method is challenging and costly because of the processing needed to calculate the period. However, it has a significant advantage: methods based on features extracted from light curves often face over-fitting when there are many features, requiring careful selection of the important ones, whereas our models classify variable stars using the shape of the light curves and their essential feature, the period.
This work was done using the OGLE-IV dataset. A next step could be to include other datasets such as WISE and Gaia.
|
2304.09157 | Neural networks for geospatial data | Analysis of geospatial data has traditionally been model-based, with a mean
model, customarily specified as a linear regression on the covariates, and a
covariance model, encoding the spatial dependence. We relax the strong
assumption of linearity and propose embedding neural networks directly within
the traditional geostatistical models to accommodate non-linear mean functions
while retaining all other advantages including use of Gaussian Processes to
explicitly model the spatial covariance, enabling inference on the covariate
effect through the mean and on the spatial dependence through the covariance,
and offering predictions at new locations via kriging. We propose NN-GLS, a new
neural network estimation algorithm for the non-linear mean in GP models that
explicitly accounts for the spatial covariance through generalized least
squares (GLS), the same loss used in the linear case. We show that NN-GLS
admits a representation as a special type of graph neural network (GNN). This
connection facilitates use of standard neural network computational techniques
for irregular geospatial data, enabling novel and scalable mini-batching,
backpropagation, and kriging schemes. Theoretically, we show that NN-GLS will
be consistent for irregularly observed spatially correlated data processes. We
also provide a finite sample concentration rate, which quantifies the need to
accurately model the spatial covariance in neural networks for dependent data.
To our knowledge, these are the first large-sample results for any neural
network algorithm for irregular spatial data. We demonstrate the methodology
through simulated and real datasets. | Wentao Zhan, Abhirup Datta | 2023-04-18T17:52:23Z | http://arxiv.org/abs/2304.09157v3 | # Neural networks for geospatial data
###### Abstract
Analysis of geospatial data has traditionally been model-based, with a mean model, customarily specified as a linear regression on the covariates, and a covariance model, encoding the spatial dependence. While non-linear machine learning algorithms like neural networks are increasingly being used for spatial analysis, current approaches depart from the model-based setup and cannot explicitly incorporate the spatial covariance. We propose embedding neural networks directly within the traditional geostatistical models to accommodate non-linear mean functions while retaining all other advantages of the model-based setup, including use of Gaussian Processes to explicitly model the spatial covariance, enabling inference on the covariate effect through the mean and on the spatial dependence through the covariance, and offering predictions at new locations via kriging. We propose _NN-GLS_, a new neural network estimation algorithm for the non-linear mean that explicitly accounts for the spatial covariance through generalized least squares (GLS), the same loss used in the linear case. We show that NN-GLS admits a representation as a special type of graph neural network (GNN). This connection facilitates use of standard neural network computational techniques for irregular geospatial data, enabling novel and scalable mini-batching, backpropagation, and kriging schemes. Theoretically, we show that NN-GLS will be consistent for irregularly observed spatially correlated data processes. To our knowledge, this is the first asymptotic consistency result for any neural network algorithm for spatial data. We demonstrate the methodology through numerous simulations and an application to air pollution modeling.
_Keywords:_ Geostatistics, spatial statistics, Gaussian process, neural networks, machine learning, consistency.
## 1 Introduction
_Geostatistics_, the analysis of geocoded data, is traditionally based on stochastic process models, which offer a coherent way to model data at any finite collection of locations while ensuring the generalizability of inference to the entire region. _Gaussian processes (GP)_, with a mean function capturing the effects of covariates and a covariance function encoding the spatial dependence, are a staple for geostatistical analysis, offering theoretical guarantees and practical benefits. GP are flexible enough to model any smooth spatial surface, and can be specified parsimoniously with covariance functions using a very small set of parameters. The spatial covariance parameters offer insights into the smoothness and spatial properties of the response process (Stein, 1999). The finite dimensional realizations of a GP are multivariate Gaussian, thereby offering estimates of the mean and covariance parameters via convenient maximization of the Gaussian likelihood, and predictions at new locations by using conditional Gaussian distributions (see, e.g., Banerjee et al., 2014; Cressie and Wikle, 2015, for detailed exposition on GP models for spatial and spatio-temporal data). Also, computational roadblocks to using GP for large spatial data have been greatly mitigated by recent advances (see Heaton et al., 2019, for a recent review of scalable GP approaches).
The mean function of a Gaussian process is often modeled as a linear regression on the covariates. The growing popularity and accessibility of machine learning algorithms such as neural networks, random forests, and gradient boosted trees, capable of modeling complex non-linear relationships, has heralded a paradigm shift. Practitioners are increasingly shunning models with parametric assumptions like linearity in favor of these machine learning approaches, which can capture non-linearity and high-order interactions in a data-driven manner. The field of spatial statistics has not been insulated from this machine learning revolution. In particular, deep neural networks, a class of algorithms that has become almost synonymous with machine learning, have seen considerable recent adoption and
adaptation for geospatial data (see Wikle and Zammit-Mangion, 2023, for a comprehensive review).
Many of the machine learning based regression approaches assume independent observations, implicit in the choice of a sum of squared errors (_ordinary least squares_ or _OLS_) loss as the objective function used in estimating the algorithm parameters. Explicit encoding of spatial dependency via a covariance function, as is common in process-based geospatial models, is challenging within these algorithms. Current renditions of neural networks for spatial data circumvent this by using spatial co-ordinates or some transformations of them (like distances or basis functions) as additional covariates (Gray et al., 2022; Chen et al., 2020; Wang et al., 2019). These methods incorporate all spatial information directly into the mean, implicitly assuming that the error terms are independent. While this approach is very non-parametric conceptually, its explanatory power can be limited in practice. Many spatial features (basis functions) might be needed to capture the complex 'shape' of the spatial effect, resulting in a high-dimensional feature set (covariates) and triggering 'the curse of dimensionality'. Finding a balance between parsimony and explainability requires a careful design of spatial features that can explain complex spatial dependencies without drowning out the effect of the non-spatial covariates. Adding spatial features has also been considered for random forests (Hengl et al., 2018), and they have been shown to perform well only when the spatial signal is considerable compared to the covariate signal (Saha et al., 2023).
The second disadvantage of neural networks with added spatial features is that they obfuscate the separation of the spatial and non-spatial variations in the data. They model the mean as a joint function of covariates and space and do not offer an estimate of the covariate effect alone, impeding understanding of the relationships between the response and the covariates. This is in stark contrast to the traditional GP based geospatial models, which model the covariate effect through the mean and the spatial variation through the covariance. This separation of the two sources of variation into first- and second-order effects naturally
induces parsimony via concepts like stationarity or isotropy for the covariance while still retaining the ability to model complex spatial dependence.
We propose a novel neural network algorithm to estimate non-linear means within the traditional Gaussian process models, explicitly accounting for the spatial dependence encoded in the GP covariance matrix. The core motivation comes from the extension of ordinary least squares (OLS) to generalized least squares (GLS) for linear models with dependent errors. For correlated data, GLS is more efficient than OLS according to the Gauss-Markov theorem. We modify the OLS loss used in neural networks to its GLS version, and refer to our algorithm as _NN-GLS_. We retain all advantages of the model-based framework, including separation of the covariate and spatial effects thereby allowing inference on both, parsimonious modeling of the spatial effect through the GP covariance function circumventing the need to create and curate spatial features, and seamlessly offering predictions at new locations via kriging. NN-GLS is compatible with any network architecture for the mean function and with any family of GP covariance functions.
We note that the philosophy of GLS has been recently adopted for spatial analysis using tree-based machine learning algorithms like Random Forests (_RF-GLS_, Saha et al., 2023; Saha and Datta, 2023) and boosted trees (_GP-boost_, Sigrist, 2022; Iranzad et al., 2022). Forest and tree estimators use a brute force search to iteratively grow the regression trees, requiring multiple evaluation of the GLS loss within each step. This severely reduces the scalability of these approaches. RF-GLS also requires prior estimation of the spatial parameters, which are then kept fixed during the random forest estimation part.
NN-GLS avoids both these issues by offering a representation of the algorithm as a special type of _graph neural network (GNN)_. We show that NN-GLS using any multi-layer neural network architecture for the mean and a Nearest Neighbor Gaussian Process (NNGP, Datta et al., 2016_a_) for the covariance is a GNN with additional graph-convolution layers based on the nearest-neighbor graph and with graph-weights derived from kriging.
Leveraging this representation of the model as a GNN, we can exploit the various computing techniques used to expedite neural networks. This includes using _mini-batching_ or _stochastic gradients_ to run each iteration of the estimation on only a subset of the data, as well as the use of _backpropagation_ to compute the gradients. We provide novel scalable mini-batching, backpropagation, and kriging schemes using the GNN framework. Also, the spatial parameters are now simply weight parameters in the network and are updated throughout the training. This connection of NN-GLS to GNN, demonstrating the use of GNN for irregular spatial data, is of independent importance for other types of spatial analysis.
Theoretically, there is little asymptotic theory supporting current neural network approaches for spatially correlated data. Even in the case of i.i.d. errors, the theory of neural networks, while discussed for a long time (Hornik et al., 1989), is still in its nascent stages. Shen et al. (2019)'s recent work provides a framework to study the asymptotics of a one-layer neural network estimator class. A main contribution of our article is proving the consistency of NN-GLS when the observations have spatially correlated errors arising from a stochastic process. We first extend Shen et al. (2019)'s framework to prove general results on the existence and the consistency of neural networks using a GLS loss for dependent data processes. The novelty in the consistency results is accommodating spatial dependency in the data, which prevents the traditional techniques (such as symmetrization and Hoeffding-type bounds) from being used; consequently, Shen et al. (2019)'s results are not directly applicable to our case. As a solution, we design a new Orlicz norm on the function space that enables the successful application of uniform laws of large numbers.
We then apply the general results to prove the consistency of NN-GLS for irregular spatial data designs and popular GP models like the Matern Gaussian process. We also show that consistency holds when using NNGP for the GLS loss. To our knowledge, these are the first theoretical results for neural networks under spatial dependence and the first examples of consistency of any machine learning approach for estimating a non-linear mean of a GP
for irregular spatial designs.
The rest of the manuscript is organized as follows. In section 2, we review process-based geostatistical models and neural networks. Section 3 describes the idea of combining the two perspectives. Section 4 formally proposes the algorithm NN-GLS, depicts its connection with graph neural networks (GNN), and exploits this representation to provide scalable estimation and prediction algorithms. Section 5 presents the theoretical results. Section 6 and 7 respectively demonstrate the performance of NN-GLS in simulated and real data.
## 2 Preliminaries
### Process-based geostatistical modeling
Consider spatial data collected at locations \(s_{i}\), \(i=1,\ldots,n\), comprising of a covariate vector \(\mathbf{X}_{i}:=\mathbf{X}(s_{i})\in\mathbb{R}^{d}\) and a response \(Y_{i}:=Y(s_{i})\in\mathbb{R}\). Defining \(\mathbf{Y}=(Y_{1},\ldots,Y_{n})^{\prime}\) and the \(n\times d\) covariate matrix \(\mathbf{X}\) similarly, the spatial linear model is given by
\[\mathbf{Y}\sim\mathcal{N}(\mathbf{X}\mathbf{\beta},\mathbf{\Sigma}(\mathbf{\theta})) \tag{1}\]
where \(\mathbf{\Sigma}:=\mathbf{\Sigma}(\mathbf{\theta})\) is a \(n\times n\) covariance matrix, parameterized by \(\mathbf{\theta}\).
A central objective in spatial analysis is to extrapolate inference beyond just the data locations to the entire continuous spatial domain. Stochastic processes are natural candidates for such domain-wide models. In particular, (1) can be viewed as a finite sample realization of a Gaussian process (GP)
\[Y(s)=\mathbf{X}(s)^{\top}\mathbf{\beta}+\epsilon(s),\epsilon(\cdot)\sim GP(0,\Sigma( \cdot,\cdot)). \tag{2}\]
where \(\epsilon(s)\) is a zero-mean Gaussian process modeling the spatial dependence via the covariance function \(\Sigma(\cdot,\cdot)\) such that \(\Sigma(s_{i},s_{j})=Cov(Y(s_{i}),Y(s_{j}))\). Often, \(\epsilon(\cdot)\) can be decomposed into a latent spatial GP and a non-spatial (random noise) process. This results in the variance decomposition \(\mathbf{\Sigma}=\mathbf{C}+\tau^{2}\mathbf{I}\) where \(\mathbf{C}\) is the covariance matrix corresponding to the latent spatial GP and \(\tau^{2}\) is the variance of the noise process. One can impose plausible modeling assumptions on the nature of the spatial dependence, like stationarity (\(C(s_{i},s_{j})=C(s_{i}-s_{j})\)) or isotropy (\(C(s_{i},s_{j})=C(\|s_{i}-s_{j}\|)\)), to induce parsimony, thereby requiring a very low-dimensional parameter \(\mathbf{\theta}\) to specify the covariance function.
Gaussian processes have become a staple of geostatistical analysis for multitudinous reasons. They offer a coherent way to jointly model both observed data at a finite set of locations and unobserved data anywhere in the continuous spatial domain. They are non-parametric in the sense of being able to model any smooth spatial surface with a suitable choice of covariance function (Van Der Vaart and Van Zanten, 2011). Yet GP are parsimonious, specified by a parametric covariance function with a handful of parameters. From a practical perspective, finite-dimensional realizations of a GP are simply multivariate Gaussian, leading to a data likelihood of the form (1). This effectuates simple estimation of the parameters \(\mathbf{\beta}\) and \(\mathbf{\theta}\) by maximum likelihood. Additionally, prediction at new locations (kriging) simply reduces to obtaining conditional Gaussian distributions.
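As a minimal illustration of the workflow in (1)-(2), the NumPy sketch below simulates a spatial linear model with an exponential covariance and computes the kriging prediction at a new location; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2, phi, tau2 = 200, 1.0, 3.0, 0.1
S = rng.uniform(0, 1, (n, 2))                      # locations in [0,1]^2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])

D = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
Sigma = sigma2 * np.exp(-phi * D) + tau2 * np.eye(n)   # C + tau^2 I
Y = X @ beta + np.linalg.cholesky(Sigma) @ rng.normal(size=n)

# kriging at s0: E[Y(s0) | Y] = x0' beta + c0' Sigma^{-1} (Y - X beta)
s0, x0 = np.array([0.5, 0.5]), np.array([1.0, 0.0])
c0 = sigma2 * np.exp(-phi * np.linalg.norm(S - s0, axis=1))
y0_hat = x0 @ beta + c0 @ np.linalg.solve(Sigma, Y - X @ beta)
```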
### Neural networks
Artificial neural networks (ANN) or simply, neural networks (NN), are widely used to model non-parametric regression of the form \(E(Y_{i})=f(\mathbf{X}_{i})\), where \(Y_{i}\) represents the univariate response and \(\mathbf{X}_{i}\) represents the \(d\)-dimensional covariate vector. The unknown non-linear regression function \(f(\cdot)\) is approximated using a specific family of functions, which is the feed-forward part of a NN. Feed-forward neural networks, also referred to as _multi-layer perceptrons (MLP)_, describe how the inputs \(\mathbf{X}_{i}\) are translated into outputs \(Y_{i}\). They are specified via a sequence of hidden layers composed of a large number of nodes; connections between layers are specified via weights (as the contributions from each node) and activation (link) functions (to model non-linearity). Mathematically, an \(L\)-layer NN can be described as,
\[\begin{split}\mathbf{A}_{i}^{(0)}=\mathbf{X}_{i},&\mathbf{Z}_{i}^ {(l)}=\mathbf{W}_{(l)}^{\top}\mathbf{A}_{i}^{(l-1)},\,\mathbf{A}_{i}^{(l)}=\mathbf{g}_{l}( \mathbf{Z}_{i}^{(l)}),\,l=1,\ldots,L\\ & O_{i}=\mathbf{W}_{(L+1)}^{\top}\mathbf{A}_{i}^{(L)},\,f(\mathbf{X}_{i} )=O_{i},\,i=1,\ldots,n\end{split} \tag{3}\]
where for layer \(l\), \(\mathbf{A}^{(l)}\) represents the \(d_{l}\) nodes, the \(\mathbf{W}_{(l)}\)'s are the weight matrices, the \(\mathbf{Z}_{i}^{(l)}\)'s are hidden features formed as linear combinations of nodes with weights \(\mathbf{W}_{(l)}\), and \(\mathbf{g}_{(l)}(\mathbf{Z}_{i}^{(l)})\) denotes the non-linear activation function \(g_{l}(\cdot)\) (e.g., the sigmoid or ReLU function) applied to each component of \(\mathbf{Z}_{i}^{(l)}\). The final layer \(O_{i}\) is called the _output layer_ and gives the modeled mean of the response, i.e., \(O_{i}=f(\mathbf{X}_{i})=E(Y_{i})\).
\[\sum_{i=1}^{n}(Y_{i}-f(\mathbf{X}_{i}))^{2}. \tag{4}\]
The training process iteratively switches between feed-forward steps (obtaining the fits given the current estimates of the weights) and back-propagation steps (updating the weights given the current fits). Estimation is expedited by mini-batching, where the data are split into smaller, disjoint mini-batches; at each iteration, the loss (4) is approximated by restricting it to one of the mini-batches, cycling among the mini-batches over iterations.
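A bare-bones NumPy version of the forward pass (3) with one hidden layer and the OLS loss (4) is sketched below; the weights are random stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, d1 = 100, 5, 16
X = rng.normal(size=(n, d))
Y = rng.normal(size=n)

W1 = rng.normal(size=(d, d1))          # weights W_(1): input -> hidden
W2 = rng.normal(size=(d1, 1))          # weights W_(2): hidden -> output

Z = X @ W1                             # hidden features Z^(1)
A = 1.0 / (1.0 + np.exp(-Z))           # sigmoid activation g_1
O = (A @ W2).ravel()                   # output layer O_i = f(X_i)

ols_loss = np.sum((Y - O) ** 2)        # the OLS loss (4)
```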
## 3 Neural networks for Gaussian process models
The two paradigms reviewed in Section 2 are complementary in their scope. The popularity of the geospatial linear models (Section 2.1) is owed to their simplicity, interpretability, and parsimony - separating the covariate effect and the spatial effects, modeling the former through a linear mean, and the latter through the GP covariance matrix. However, this relies on the strong assumption of a linear covariate effect. Neural networks can estimate arbitrary non-linear covariate effects. However, implicit in the usage of the OLS loss (4) for NN is the assumption that the data units are independent. This is violated for spatial data, where the error process is a dependent stochastic process as in (2).
We bridge the two paradigms by allowing the mean of the GP model to be non-linear in the covariates and proposing a novel NN algorithm to estimate it. We extend (2) to
\[Y_{i}=f(\mathbf{X}_{i})+\epsilon(s_{i});\epsilon(\cdot)\sim GP(0,\Sigma(\cdot, \cdot)). \tag{5}\]
Estimating \(f\) in (5) using neural networks (3) needs to account for the spatial dependence modeled via the GP covariance. Using the OLS loss (4) ignores this covariance. We now extend NN to explicitly accommodate the spatial covariance in its estimation.
### NN-GLS: Neural networks using GLS loss
Let \(\mathbf{f}(\mathbf{X})=\left(f(\mathbf{X}_{1}),f(\mathbf{X}_{2}),\cdots,f(\mathbf{X}_{n})\right)^ {\top}\). Then the data likelihood for the non-linear spatial model (5) is
\[\mathbf{Y}\sim N(f(\mathbf{X}),\mathbf{\Sigma}). \tag{6}\]
This is a direct generalization of the spatial linear model (1). It is well known that in models like (1) for correlated outcomes, an OLS loss is replaced by a _generalized least squares (GLS)_ loss using the covariance matrix \(\mathbf{\Sigma}\). To estimate \(f\) in (6) using NN, we
naturally replace the OLS loss (4) with the GLS loss
\[\mathcal{L}_{n}(f)=\frac{1}{n}(\mathbf{Y}-\mathbf{f(X)})^{\top}\mathbf{Q}(\mathbf{Y}-\mathbf{f(X) }), \tag{7}\]
accounting for the spatial dependency with the working precision matrix \(\mathbf{Q}\), which equals \(\mathbf{\Sigma}^{-1}\) or, more practically, an estimate of it.
We refer to the neural network estimation using the GLS loss (7) as _NN-GLS_. Conceptually, generalizing NN to NN-GLS is well-principled, as minimizing the GLS loss (7) with \(\mathbf{Q}=\mathbf{\Sigma}^{-1}\) is equivalent to obtaining a maximum likelihood estimate of \(f\) in (6). In practice, however, for spatial dependence modeled using a GP, the GLS loss ushers in multiple computational issues for both mini-batching and backpropagation, techniques fundamental to the success of NN. The back-propagation equations corresponding to the GLS loss involve computing an inverse of the dense \(n\times n\) matrix \(\mathbf{\Sigma}\), which requires \(O(n^{2})\) storage and \(O(n^{3})\) time. This is repeated multiple times as the parameters \(\mathbf{\theta}\) specifying \(\mathbf{\Sigma}\) are updated. These computing needs are infeasible for even moderate \(n\).
Furthermore, mini-batching involves partitioning the data into small subsets. Sensitivity to the choice of partitioning can be ignored for data with i.i.d. errors, as under random partitioning the losses for the mini-batches are i.i.d. and are unbiased estimates of the loss for the full data. Partitioning of spatial data is challenging due to dependence across data units. Also, while the OLS loss (4) is naturally amenable to mini-batching due to its additive nature, this advantage is lost when switching to the GLS loss, as the quadratic form in (7) usually does not admit such an additive decomposition for typical choices of the GP covariance function. In the next Section, we develop an algorithm, _NN-GLS_, with a specific class of GLS loss that mitigates these issues and offers a pragmatic approach to using NN for GP models.
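The GLS loss (7) itself is straightforward to evaluate for a fixed \(\mathbf{Q}\); the sketch below computes it stably through a Cholesky factor, using a random positive-definite working precision purely to check the algebra.

```python
import numpy as np

def gls_loss(Y, fX, Q):
    L = np.linalg.cholesky(Q)          # Q = L L', so take Q^{1/2} = L'
    r = L.T @ (Y - fX)                 # decorrelated residuals
    return (r @ r) / Y.size

# toy check against the quadratic form in (7)
rng = np.random.default_rng(3)
n = 50
A = rng.normal(size=(n, n))
Q = A @ A.T + n * np.eye(n)            # a positive-definite working precision
Y, fX = rng.normal(size=n), rng.normal(size=n)
assert np.isclose(gls_loss(Y, fX, Q), (Y - fX) @ Q @ (Y - fX) / n)
```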
## 4 NN-GLS as Graph Neural Network
We offer a representation of NN-GLS as a special graph neural network (GNN) with OLS loss, a connection that allows developing scalable mini-batching and backpropagation algorithms for NN-GLS. We propose choosing \(\mathbf{Q}\) as the precision matrix from a Nearest Neighbor Gaussian Process (NNGP, Datta et al., 2016_a_). NNGP provides a sparse approximation to the dense full GP precision matrix \(\mathbf{\Sigma}^{-1}\) without requiring any large matrix operations. NNGP creates a _directed acyclic graph (DAG)_ based on pairwise distances among the \(n\) data locations, such that each node (location) has at most \(m\ll n\) directed (nearest) neighbors. Letting \(N(i)\) be the set of neighbors of location \(s_{i}\) in the DAG, NNGP yields a precision matrix \(\widetilde{\mathbf{\Sigma}}^{-1}=(\mathbf{I}-\mathbf{B})^{\top}\mathbf{F}^{-1}(\mathbf{I}-\mathbf{B})\) (Finley et al., 2019; Datta et al., 2016_b_; Datta, 2022) where \(\mathbf{B}\) is a strictly lower triangular matrix and \(\mathbf{F}\) is a diagonal matrix with
\[\begin{split}\mathbf{B}_{i,N(i)}&=\mathbf{\Sigma} \big{(}i,N(i)\big{)}\mathbf{\Sigma}\big{(}N(i),N(i)\big{)}^{-1},\,B_{ij}=0\text{ elsewhere, and}\\ \mathbf{F}_{ii}&=\mathbf{\Sigma}_{ii}-\mathbf{\Sigma}\big{(} i,N(i)\big{)}\mathbf{\Sigma}\big{(}N(i),N(i)\big{)}^{-1}\mathbf{\Sigma}\big{(}N(i),i \big{)}.\end{split} \tag{8}\]
NNGP precision matrices can be obtained using only inversion of \(n\) small matrices of size \(m\times m\). As \(m\ll n\), it requires total \(O(n)\) time and storage. We note that basis functions derived from NNGP has been used as additional features to capture spatial structure in neural networks (Wang et al., 2019). This differs from our approach of using NNGP to directly model the spatial covariance, analogous to the practice in spatial linear models. We detail below how choosing \(\mathbf{Q}\) in the GLS loss (7) to be the NNGP precision matrix \(\widetilde{\mathbf{\Sigma}}^{-1}\) enables fast mini-batching and backpropagation for the NN parameter estimation.
We first offer a representation of NN-GLS, using the NNGP precision matrix in the GLS loss, as a graph neural network with an OLS loss. This connection will enable NN-GLS to retain the computational advantages of the OLS loss. A GLS loss can be viewed as an OLS loss for the decorrelated response \(\mathbf{Y}^{*}=\mathbf{Q}^{\frac{1}{2}}\mathbf{Y}\), where \(\mathbf{Q}^{\frac{1}{2}}\) is the Cholesky factor of \(\mathbf{Q}=\mathbf{Q}^{\frac{\top}{2}}\mathbf{Q}^{\frac{1}{2}}\).
Hence, decorrelation is simply a linear operation. A convenience of choosing \(\mathbf{Q}=\widetilde{\mathbf{\Sigma}}^{-1}\), the NNGP precision matrix, is that decorrelation becomes a convolution in the nearest neighbor DAG. To elucidate, note that \(\mathbf{B}_{i,N(i)}\) defined in (8) denotes the kriging weights for predicting \(Y_{i}\) based on its directed nearest neighbors \(\mathbf{Y}_{N(i)}\) using a GP with covariance \(\Sigma(\cdot,\cdot)\). Similarly, \(F_{ii}\) in (8) is the corresponding nearest neighbor kriging variance. Letting \(N^{*}[i]=N(i)\cup\{i\}\) denote the graph neighborhood for the \(i^{th}\) node and defining weights
\[\mathbf{v}_{i}^{\top}=\frac{1}{\sqrt{F_{ii}}}(1,-\mathbf{B}_{i,N(i)}), \tag{9}\]
we can write \(Y_{i}^{*}=\mathbf{v}_{i}^{\top}\mathbf{Y}_{N^{*}[i]}\). Thus the decorrelated responses \(Y_{i}^{*}\) are simply a convolution over the DAG used in NNGP, with the graph convolution weights \(\mathbf{v}_{i}\) defined using kriging. Similarly, one can define the decorrelated output layer \(O_{i}^{*}=\mathbf{v}_{i}^{\top}\mathbf{O}_{N^{*}[i]}\) using the same graph convolution, where \(O_{i}\) is the output layer of the neural network \(f\) (see (3)).
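The quantities in (8)-(9) are simple to compute; the NumPy sketch below builds the kriging weights \(\mathbf{B}\), the conditional variances \(\mathbf{F}\), and the decorrelated responses \(Y_{i}^{*}\) for an exponential covariance, ordering locations by their first coordinate to define the DAG (an illustrative, not prescribed, ordering).

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, sigma2, phi, tau2 = 100, 10, 1.0, 3.0, 0.1
S = rng.uniform(0, 1, (n, 2))
D = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
Sigma = sigma2 * np.exp(-phi * D) + tau2 * np.eye(n)
Y = np.linalg.cholesky(Sigma) @ rng.normal(size=n)     # zero-mean GP draw

# order locations and take the m nearest *previous* locations as N(i)
order = np.argsort(S[:, 0])
S, Y, Sigma = S[order], Y[order], Sigma[np.ix_(order, order)]

B = np.zeros((n, n))
F = np.zeros(n)
Y_star = np.zeros(n)
for i in range(n):
    prev = np.arange(i)
    N_i = prev[np.argsort(np.linalg.norm(S[prev] - S[i], axis=1))[:m]]
    if N_i.size == 0:
        F[i] = Sigma[i, i]
        Y_star[i] = Y[i] / np.sqrt(F[i])
        continue
    C_NN = Sigma[np.ix_(N_i, N_i)]
    C_iN = Sigma[i, N_i]
    b = np.linalg.solve(C_NN, C_iN)                    # B_{i, N(i)} in (8)
    B[i, N_i] = b
    F[i] = Sigma[i, i] - C_iN @ b                      # F_{ii} in (8)
    # v_i = (1, -B_{i,N(i)}) / sqrt(F_ii);  Y*_i = v_i' Y_{N*[i]} as in (9)
    Y_star[i] = (Y[i] - b @ Y[N_i]) / np.sqrt(F[i])
# O*_i would be obtained the same way from the MLP outputs O_i
```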
The decorrelation step makes NN-GLS a special type of graph neural network (GNN), as depicted in Figure 1. In a GNN, typically, the input observations are represented on a graph and the locality information is aggregated using convolution layers based on the graph structure (graph convolution). For NN-GLS, both the inputs \(\mathbf{X}_{i}\) and the responses \(Y_{i}\) are graph-valued objects, as they both correspond to the locations \(s_{i}\), which are the nodes of the nearest neighbor DAG. First, the inputs \(\mathbf{X}_{i}\)'s are passed through the feed-forward NN (or multi-layer perceptron) to produce the respective output layer of \(O_{i}\)'s. This is a within-node operation, and any architecture can be used (number of layers, number of nodes within each layer, sparsity of connections, choice of activation functions). Subsequently, the output layer \(O_{i}\)'s from the MLP are passed through an additional graph-convolution to create the decorrelated output layer of \(O_{i}^{*}\)'s using the weights \(\mathbf{v}_{i}\) (9). This layer is matched, using the OLS loss, to the decorrelated response layer of \(Y_{i}^{*}\)'s, created from the \(Y_{i}\)'s using the same graph convolution. Thus fitting a GLS loss to a NN is simply fitting an OLS loss
to a new NN with two additional decorrelation layers at the end of the MLP.
To summarize, there are two ingredients of NN-GLS: a feed-forward NN or MLP (using any architecture) for intra-node operations, and a sparse DAG among the locations for the incorporation of spatial correlation via inter-node graph convolutions. Information about the mutual distances among the irregular set of locations is naturally incorporated in the kriging-based convolution weights. We now discuss how this formulation of NN-GLS as a GNN with OLS loss helps leverage the traditional strategies to scale computing for NN.
Figure 1: NN-GLS as a graph neural network with two decorrelation layers
### Mini-batching
Using the NNGP precision matrix \(\widetilde{\mathbf{\Sigma}}^{-1}\) as \(\mathbf{Q}\) simplifies the GLS loss (7) to \(\mathcal{L}_{n}=\frac{1}{n}\sum_{i=1}^{n}(Y_{i}^{*}-O_{i}^{*})^{2}\), where the \(Y_{i}^{*}\)'s correspond to the decorrelated response layer and the \(O_{i}^{*}\)'s to the decorrelated output layer in the GNN (Figure 1). Leveraging the additivity of this OLS loss, we can write \(\mathcal{L}_{n}=\sum_{b=1}^{B}\mathcal{L}_{b,n}\) where \(\mathcal{L}_{b,n}=\frac{1}{n}\sum_{i\in S_{b}}(Y_{i}^{*}-O_{i}^{*})^{2}\), \(S_{1},\ldots,S_{B}\) being a partition of the data locations, each of size \(K\). The \(Y_{i}^{*}\)'s are uncorrelated and identically distributed (exactly under the NNGP distribution, and approximately if the true distribution is a full GP), so the losses \(\mathcal{L}_{b,n}\) corresponding to the mini-batches \(S_{b}\) are approximately i.i.d. for \(b=1,\ldots,B\). Hence, parameter estimation via mini-batching can proceed as in the i.i.d. case. The only additional computation we incur is during the graph convolution, as obtaining all \(O_{i}^{*}\) for the mini-batch \(S_{b}\) involves calculating the \(O_{i}\)'s for the neighbors of all units \(i\) included in \(S_{b}\).
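The additive structure makes the mini-batch decomposition exact; the short sketch below verifies this with random stand-ins for the decorrelated layers.

```python
import numpy as np

rng = np.random.default_rng(5)
n, K = 100, 20                         # n locations, mini-batches of size K
Y_star = rng.normal(size=n)            # stand-in for the decorrelated response
O_star = rng.normal(size=n)            # stand-in for the decorrelated output

perm = rng.permutation(n)
batches = [perm[k:k + K] for k in range(0, n, K)]
batch_losses = [np.sum((Y_star[S_b] - O_star[S_b]) ** 2) for S_b in batches]
assert np.isclose(sum(batch_losses), np.sum((Y_star - O_star) ** 2))
```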
### Back-propagation
Gradient descent or back-propagation steps for NN-GLS can be obtained in closed form and without any large matrix inversions. We provide the back-propagation equations for a single-layer network with \(O_{i}=\mathbf{\beta}^{\top}\mathbf{A}_{i}\), where \(\mathbf{\beta}\) and \(\mathbf{A}_{i}=\mathbf{g}(\mathbf{Z}_{i})\) are \(d_{1}\) dimensional, \(d_{1}\) being the number of hidden nodes, and \(g\) is the known activation (link) function. Here \(\mathbf{Z}_{i}=\mathbf{W}^{\top}\mathbf{X}_{i}\) is the hidden layer created using the weight matrix \(\mathbf{W}^{\top}=(W_{rj})\) (with rows \(\mathbf{w}_{r}^{\top}\)). Denote \(z_{ir}=\mathbf{w}_{r}^{\top}X_{i}\), \(a_{ir}=g(z_{ir})\), \(a_{ir}^{*}=\mathbf{v}_{i}^{\top}\mathbf{a}_{N^{*}[i]r}\), \(\delta_{i}=-2(y_{i}-\mathbf{\beta}^{\top}A_{i})\) and \(\delta_{i}^{*}=\mathbf{v}_{i}^{\top}\delta_{N^{*}[i]}\).
Then we have the following customized back-propagation updates for NN-GLS:
\[\begin{split}\beta_{r}^{(t+1)}&=\beta_{r}^{(t)}-\gamma _{t}\sum_{i\in S_{b(t)}}\delta_{i}^{*}a_{ir}^{*}\\ w_{rj}^{(t+1)}&=w_{rj}^{(t)}-\gamma_{t}\beta_{r} \sum_{i\in S_{b(t)}}\delta_{i}^{*}\left(\mathbf{v}_{i}^{\top}(\mathbf{g}^{\prime}(\mathbf{ z}_{N^{*}[i]r})\odot\mathbf{X}_{N^{*}[i]j})\right),\\ \theta_{c}^{(t+1)}&=\theta_{c}^{(t)}+\frac{\gamma _{t}}{2}\sum_{i\in S_{b(t)}}\delta_{i}^{*}\left(\left(\frac{\partial\mathbf{v}_{i}} {\partial\theta_{c}}\right)^{\top}\delta_{N^{*}[i]}\right)\end{split} \tag{10}\]
Here \(\gamma_{t}\) is the learning rate and \(S_{b(t)}\) is the mini-batch for the \(t^{th}\) iteration, and \(g^{\prime}\) is the derivative of \(g\). These custom and scalable gradient descent steps for NN-GLS can also be conducted using off-the-shelf neural network software. One can simply input the mini-batch loss for NN-GLS to obtain scalable gradient descent updates for the NN parameters using numerical, automatic, or symbolic differentiation.
Note that in the GNN representation, the spatial covariance parameters \(\mathbf{\theta}\) are just parameters of the convolution weights \(\mathbf{v}_{i}\). Hence, the updates for \(\mathbf{\theta}\) can simply be absorbed as a back-propagation step, as shown in (10). Alternatively, the spatial parameters \(\mathbf{\theta}\) can also be updated, given the current estimate of \(f\), by minimizing the negative NNGP log-likelihood
\[\sum_{i}\Big{(}(Y_{i}^{*}(\mathbf{\theta})-O_{i}^{*}(\mathbf{\theta}))^{2}+\log F_{ii} (\mathbf{\theta})\Big{)}. \tag{11}\]
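A compact sketch of this profile update is given below: holding the fitted mean fixed, the NNGP criterion (11) is minimized over the decay parameter \(\phi\) of an exponential covariance, with \(\sigma^{2}\) and \(\tau^{2}\) fixed for brevity; all values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
n, m, sigma2, tau2 = 80, 8, 1.0, 0.1
S = rng.uniform(0, 1, (n, 2))
S = S[np.argsort(S[:, 0])]                     # a fixed ordering for the DAG
resid = rng.normal(size=n)                     # stand-in for Y_i - f_hat(X_i)

def nngp_criterion(phi):
    """Evaluate (11): sum_i [(Y*_i - O*_i)^2 + log F_ii] at a given phi."""
    D = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
    Sigma = sigma2 * np.exp(-phi * D) + tau2 * np.eye(n)
    crit = 0.0
    for i in range(n):
        prev = np.arange(i)
        N_i = prev[np.argsort(np.linalg.norm(S[prev] - S[i], axis=1))[:m]]
        if N_i.size == 0:
            F_ii, mean_i = Sigma[i, i], 0.0
        else:
            b = np.linalg.solve(Sigma[np.ix_(N_i, N_i)], Sigma[i, N_i])
            F_ii = Sigma[i, i] - Sigma[i, N_i] @ b
            mean_i = b @ resid[N_i]
        crit += (resid[i] - mean_i) ** 2 / F_ii + np.log(F_ii)
    return crit

phi_hat = minimize_scalar(nngp_criterion, bounds=(0.1, 20.0),
                          method="bounded").x
```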
### Spatial predictions
Using the GNN representation, subsequent to estimating \(\widehat{f}(\cdot)\), predictions at a new location can be obtained seamlessly. Given a new location \(s_{0}\) and covariates \(\mathbf{X}_{0}:=\mathbf{X}(s_{0})\), first \(\mathbf{X}_{0}\) is passed through the trained feed-forward part (MLP) of the GNN to obtain the output \(O_{0}=\widehat{f}(\mathbf{X}_{0})\) (see Figure 1). Next, the new location \(s_{0}\) is added as a new node to the DAG, and let \(N(0)\) be its set of \(m\) neighbors on the DAG, \(N^{*}[0]=\{s_{0}\}\cup N(0)\), and
define the graph weights \(\mathbf{v}_{0}\) similarly to (9). Then, using the graph convolution step, we obtain the decorrelated output \(O_{0}^{*}=\mathbf{v}_{0}^{\top}\mathbf{O}_{N^{*}[0]}\). As the decorrelated output layer \(O^{*}\) is the model for the decorrelated response layer \(Y^{*}\) in the GNN, we have \(\widehat{Y}_{0}^{*}=O_{0}^{*}\). Finally, as \(Y_{0}^{*}=\mathbf{v}_{0}^{\top}\mathbf{Y}_{N^{*}[0]}\) is the graph-convoluted version of \(Y_{0}\), we need to deconvolve \(\widehat{Y}_{0}^{*}\) over the DAG to obtain \(\widehat{Y}_{0}\). This leads to the final prediction equation
\[\widehat{Y}_{0}=\sqrt{F_{0,0}}(\widehat{Y}_{0}^{*}+\mathbf{B}_{0,N(0)}^{\top} \mathbf{Y}_{N(0)}). \tag{12}\]
In the spatial linear model (1), predictions at new locations are performed using kriging. It is easy to verify that the prediction (12) is exactly the same as the \(m\)-nearest neighbor kriging predictor for the spatial non-linear model (6), i.e.,
\[\widehat{Y}_{0}=\widehat{f}(\mathbf{X}_{0})+\mathbf{\Sigma}(s_{0},N(0))\mathbf{\Sigma}(N(0 ),N(0))^{-1}(\mathbf{Y}_{N(0)}-\widehat{\mathbf{f}}_{N(0)}). \tag{13}\]
Thus the GNN architecture for NN-GLS offers a simple and coherent way to obtain kriging predictions for the spatial non-linear model. We summarize all the steps of _NN-GLS_ using GNN in Algorithm 1.
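The prediction step (12)-(13) reduces to a small linear solve; the sketch below implements the \(m\)-nearest-neighbor kriging predictor (13), with `f_hat` a stand-in for the trained mean function and an exponential covariance assumed.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, sigma2, phi, tau2 = 200, 10, 1.0, 3.0, 0.1

def cov(A, B):
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * D) + tau2 * (D == 0)   # nugget on diagonal

S = rng.uniform(0, 1, (n, 2))
X = rng.normal(size=(n, 2))
f_hat = lambda Z: np.sin(Z[:, 0]) + Z[:, 1] ** 2      # illustrative fitted mean
Y = f_hat(X) + np.linalg.cholesky(cov(S, S)) @ rng.normal(size=n)

s0 = np.array([[0.5, 0.5]])                            # new location
X0 = rng.normal(size=(1, 2))                           # its covariates
N0 = np.argsort(np.linalg.norm(S - s0, axis=1))[:m]    # m nearest neighbors

C_0N = cov(s0, S[N0]).ravel()                          # Sigma(s0, N(0))
C_NN = cov(S[N0], S[N0])                               # Sigma(N(0), N(0))
w = np.linalg.solve(C_NN, C_0N)                        # kriging weights
Y0_hat = f_hat(X0)[0] + w @ (Y[N0] - f_hat(X[N0]))     # equation (13)
```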
## 5 Theory
There is currently no theory on the asymptotic properties of neural networks (or their spatial renditions reviewed in the Introduction) under spatial dependence. Even in a non-spatial context, the theory of neural networks is very much in its nascent stage despite the foundational result of Hornik et al. (1989) showing that NN are universal approximators of any continuous function. Recently, Shen et al. (2019) proved the consistency of one-layer neural networks under an OLS loss for i.i.d. data without any dependence.
We prove asymptotic consistency of NN-GLS in estimating the non-linear regression
function in a GP model by minimizing the GLS loss (7). We generalize the framework of Shen et al. (2019) to accommodate the spatially correlated error process, use of a GLS loss, and arbitrary irregular spatial designs. To our knowledge, this is the first asymptotic study of Neural Networks under spatial dependence.
### Notations and Assumptions
We first specify the notations and assumptions for the theoretical study. Let \(\mathbb{R}\) and \(\mathbb{N}\) denote the sets of real numbers and natural numbers, respectively. \(\|\cdot\|_{p}\) denotes the \(\ell_{p}\) norm for vectors or matrices, \(0<p\leq\infty\). Given the covariates \(\mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\), for any function \(f\), we define the norm \(\|f\|_{n}^{2}=\dfrac{1}{n}\sum_{i=1}^{n}f^{2}(X_{i})\). Given an \(n\times n\) matrix \(\mathbf{A}\), \(\lambda(\mathbf{A})\) denotes its eigenspace, \(\lambda_{\max}=\sup\{\lambda(\mathbf{A})\}\), and \(\lambda_{\min}=\inf\{\lambda(\mathbf{A})\}\). A sequence of numbers \(\{a_{n}\}_{n\in\mathbb{N}}\) is \(O(b_{n})\) (\(o(b_{n})\)) if the sequence \(\{|a_{n}/b_{n}|\}_{n\in\mathbb{N}}\) is bounded from above (goes to zero) as \(n\to\infty\). For random variables (distributions), \(X\sim Y\) means \(X\) and \(Y\) have the same distribution.
We first specify Assumptions on the data generation process.
**Assumption 1** (Data generation process).: The data \(Y_{i}:=Y(s_{i}),i=1,\ldots,n\) is generated from a Gaussian Process with a non-linear mean, i.e., \(Y_{i}=f_{0}(\mathbf{X}_{i})+\epsilon_{i}\), where the \(f_{0}(\cdot)\) is a continuous function and the error process \(\{\epsilon_{i}\}\) is a GP such that the maximum eigenvalue of the covariance matrix \(\mathbf{\Sigma}_{n}=Cov(\mathbf{Y})\) is uniformly upper-bounded in \(n\).
Assumption 1 imposes minimal restrictions on the data generation process. The mean function \(f_{0}\) is allowed to be any continuous function. The restriction on the spectral norm of the GP covariance matrix is tied to the spatial design. We show later in Propositions 1 and 2 that this assumption is satisfied for common GP covariance choices for any irregular set of locations in \(\mathbb{R}^{2}\) separated by a minimum distance.
We next state assumptions on the analysis model, i.e., the neural network family and the GLS working precision matrix. We consider a one-layer neural network class:
\[\mathcal{F}_{0}:=\left\{\alpha_{0}+\sum_{i=1}^{n}\alpha_{i}\sigma(\mathbf{W}_ {i}^{\top}\mathbf{X}+\mathbf{w}_{0i}),\ \mathbf{W}_{i}\in\mathbb{R}^{d\times p},\ \mathbf{w}_{0i}\in\mathbb{R}^{d},\ \alpha_{i}\in\mathbb{R}\right\}. \tag{14}\]
This formulation is equivalent to setting \(L=1\), \(g_{1}(\cdot)\) as the sigmoid function \(\sigma(\cdot)\) and \(g_{2}(\cdot)\) as identity function in (3). It is easy to see that \(\mathcal{F}_{0}\) can control the approximation error, as
one-layer NN are universal approximators. However, this class of functions \(\mathcal{F}_{0}\) can be too rich to control the estimation error. A common strategy to circumvent this is to construct a sequence of increasing function classes, also known as sieve, to approximate \(\mathcal{F}_{0}\), i.e.,
\[\mathcal{F}_{1}\subseteq\mathcal{F}_{2}\subseteq\cdots\subseteq\mathcal{F}_{n} \subseteq\mathcal{F}_{n+1}\subseteq\cdots\subseteq\mathcal{F}_{0}.\]
With a careful trade-off of the complexity of the function classes, it is possible to control the estimation error (in terms of the covering number of \(\mathcal{F}_{n}\)) using a suitable _uniform law of large numbers (ULLN)_ while still being able to keep the approximation error in check. Following Shen et al. (2019), we consider the sieve given below.
**Assumption 2** (Function class).: The mean function \(f\) is modeled to be in the NN class
\[\begin{split}\mathcal{F}_{n}=&\Bigg{\{}\alpha_{0}+ \sum_{j=1}^{r_{n}}\alpha_{j}\sigma(\boldsymbol{\gamma}_{j}^{\top}\boldsymbol{X }+\gamma_{0,j}):\boldsymbol{\gamma}_{j}\in\mathbb{R}^{d},\alpha_{j},\gamma_{0, j}\in\mathbb{R},\\ &\sum_{j=0}^{r_{n}}|\alpha_{j}|\leq V_{n}\text{ for some }V_{n}>1/K,\,\max_{1\leq j \leq r_{n}}\sum_{i,j=0}^{d}|\gamma_{i,j}|\leq M_{n}\text{ for some }M_{n}>0\Bigg{\}},\end{split} \tag{15}\]
where \(r_{n},V_{n},M_{n}\to\infty\) as \(n\to\infty\) and satisfies the scaling
\[r_{n}V_{n}^{2}\log V_{n}r_{n}=o(n). \tag{16}\]
Here the activation function \(\sigma(\cdot)\) is a Lipschitz function on \(\mathbb{R}\) with range \([-r_{b},r_{b}]\) and Lipschitz constant \(K\). (For sigmoid function, \(r_{b}=1\) and \(K=1/4\)).
Hornik et al. (1989) has shown that \(\cup_{n}\mathcal{F}_{n}\) is dense in the continuous function class \(\mathcal{F}_{0}\) which will control the approximation error. The estimation error will depend on the covering number for this class which can be controlled under the scaling rate (16).
Finally, to guarantee regularity of the GLS loss (7) used for estimating the NN function
\(f\), we require the following conditions on the working precision matrix \(\mathbf{Q}\).
**Assumption 3** (Spectral interval for the working precision matrix.).: All eigenvalues \(\lambda\) of \(\mathbf{Q}\) lie in \((M_{low},M_{high})\) for universal constants \(M_{low},M_{high}>0\).
The lower bound ensures that a small GLS loss implies a small estimation error. The upper bound ensures a uniform Lipschitz continuity of the GLS loss function and the consistency of the loss function (Lemma S2) using empirical process results, which serves as a key part of the theory. We show in Propositions 1 and 2 how this Assumption is satisfied when \(\mathbf{Q}\) is either the true GP precision matrix \(\mathbf{\Sigma}_{n}^{-1}\) or its NNGP approximation.
### Main results
We first provide general results on the existence and consistency of neural network estimators minimizing a GLS loss for dependent data. In Section 5.3, we apply these results to establish the consistency of NN-GLS for common choices of GP covariance and spatial designs. The expected value of the GLS loss (7) is given by:
\[\begin{split} L_{n}(f)=\mathbb{E}\big{[}\mathcal{L}_{n}(f)\big{]} &=\mathbb{E}\left[\frac{1}{n}(\mathbf{Y}-\boldsymbol{f}(\mathbf{X}))^{\top}\mathbf{Q}(\mathbf{Y}-\boldsymbol{f}(\mathbf{X}))\right]\\ &=\frac{1}{n}\mathbb{E}\big{[}\boldsymbol{\epsilon}^{\top}\mathbf{ Q}\boldsymbol{\epsilon}\big{]}+\frac{1}{n}(\boldsymbol{f}_{0}(\mathbf{X})-\boldsymbol{f}(\mathbf{X}))^{ \top}\mathbf{Q}(\boldsymbol{f}_{0}(\mathbf{X})-\boldsymbol{f}(\mathbf{X}))\end{split} \tag{17}\]
It is evident from this decomposition that \(f_{0}\) naturally minimizes \(L_{n}(f)\), while NN-GLS minimizes the empirical loss \(\mathcal{L}_{n}(f)\). We first show that such a minimizer exists in the sieve class \(\mathcal{F}_{n}\).
**Theorem 1** (Existence of sieve estimator).: For any \(n\), given data \((Y_{i},\boldsymbol{X}_{i},s_{i}),i=1,\cdots,n\) described in the model (5) and a pre-specified working precision matrix \(\mathbf{Q}\), with the function classes \(\mathcal{F}_{n}\) defined in (15), there exists a sieve estimator \(\widehat{f}_{n}\) such that
\[\widehat{f}_{n}=\text{argmin}\{\mathcal{L}_{n}(f):f\in\mathcal{F}_{n}\}. \tag{18}\]
The proofs of this and all subsequent theoretical results are provided in the Supplementary materials. The existence result ensures that a sieve estimator in the class of neural networks that minimizes the GLS loss is well defined. It is then natural to study its asymptotic consistency, as we do in the next result.
**Theorem 2** (Consistency).: Consider dependent data generated as \(Y_{i}=f_{0}(\mathbf{X}_{i})+\epsilon(s_{i})\), where \(f_{0}\) is a continuous function and \(\epsilon(\cdot)\) is a Gaussian process. Then under Assumptions 1, 2, and 3, the NN-GLS estimate \(\widehat{f}_{n}\) (18) minimizing the GLS loss \(\mathcal{L}_{n}(f)\) in (7) is consistent in the sense \(\|\widehat{f}_{n}-f_{0}\|_{n}\xrightarrow{p}0\).
To our knowledge, this is the first result on consistency of any type of neural network under a dependent error process. We rely on very mild assumptions on the function class and on the covariance matrices of the data generation and analysis models, and show in Section 5.3 how these are satisfied for typical GP covariances and irregular spatial data designs. Also, note that this general result does not restrict the nature of the dependence to be spatial. Hence, while spatial applications are the focus of this manuscript, Theorem 2 can be used to establish consistency of neural networks for time-series, spatio-temporal, network, or other types of structured dependence.
Theorem 2 generalizes the analogous consistency result of Shen et al. (2019) from i.i.d. data and the OLS loss to dependent error processes and the GLS loss. Consequently, the proof addresses challenges that do not arise in the i.i.d. case. The spatial dependence makes the standard Rademacher randomization fail, preventing use of the standard uniform law of large numbers (ULLN) results. We overcome this difficulty by delving into the construction of a normed functional space consisting of random processes, which is the basis for applying a suitable ULLN. We introduce a new Orlicz norm to adjust for the data dependence and the use of the GLS loss, and construct a ULLN for our dependent setting by showing that the empirical process is well behaved with respect to this Orlicz norm.
### Matern Gaussian processes
We now establish consistency of NN-GLS for common GP covariance families, spatial data designs, and choices of working precision matrices. The main task in applying the general consistency result (Theorem 2) to these specific settings is verifying compliance with the regularity assumptions, i.e., the spectral bounds on the true Gaussian process covariance (Assumption 1) and on the working precision matrix (Assumption 3).
We provide consistency results of NN-GLS for spatial data generated from the Matern Gaussian process. The Matern covariance function is specified as \(C(\|s_{i}-s_{j}\|_{2})=C(\|h\|_{2})=\sigma^{2}\frac{2^{1-\nu}\big{(}\sqrt{2\nu}\phi \|h\|_{2}\big{)}^{\nu}}{\Gamma(\nu)}\mathcal{K}_{\nu}\big{(}\sqrt{2\nu}\phi\|h\|_ {2}\big{)}\). This is the predominant choice of covariance family in geostatistical models due to the interpretability of its parameters: \(\sigma^{2}\) is the marginal spatial variance, \(\phi\) controls the rate of decay of spatial correlation, and \(\nu\) controls the smoothness of the underlying process (Stein, 1999). Closed-form representations are available for several special or limiting cases such as the exponential (\(\nu=1/2\)) or Gaussian (\(\nu\to\infty\)) covariance functions. Our first result considers data generated from a class of GPs that contains the Matern family, where the working precision matrix is the true GP precision matrix.
**Proposition 1**.: Consider data generated from a spatial process \(Y_{i}=f_{0}(\mathbf{X}_{i})+\epsilon(s_{i})\) at locations \(s_{1},\ldots,s_{n}\) in \(\mathbb{R}^{2}\), where \(f_{0}\) is continuous, \(\epsilon(s_{i})\) is a Gaussian process with covariance function \(\Sigma(s_{i},s_{j})=C(s_{i},s_{j})+\tau^{2}\delta(s_{i}=s_{j})\) with \(\tau^{2}>0\), \(\delta\) is the Kronecker delta, and \(C(s_{i},s_{j})=C(\|s_{i}-s_{j}\|)\) is a covariance of a stationary spatial GP such that \(C(u)=o\big{(}u^{-(2+\kappa)}\big{)}\) for some \(\kappa>0\). Suppose the data locations are separated by a minimum distance \(h>0\), i.e., \(\|s_{i}-s_{j}\|\geq h,\ \forall i\neq j\). Let \(\mathbf{\Sigma}_{n}=\mathbf{C}+\tau^{2}\mathbf{I}\) denote the covariance matrix of \(\mathbf{Y}=\left(Y(s_{1}),\ldots,Y(s_{n})\right)^{\top}\), where \(\mathbf{C}=\left(C(\|s_{i}-s_{j}\|)\right)\). Then the NN-GLS estimator \(\widehat{f}_{n}=\operatorname{argmin}\{\mathcal{L}_{n}(f):f\in\mathcal{F}_{n}\}\) using \(\mathbf{Q}=\mathbf{\Sigma}_{n}^{-1}\) is consistent in the sense \(\|\widehat{f}_{n}-f_{0}\|_{n}\xrightarrow{p}0\).
Proposition 1 demonstrates how consistency is achieved for NN-GLS in Matern GP models with minimal assumptions on the data generation process. The decay rate on the spatial covariance \(C(u)=o\big{(}u^{-(2+\kappa)}\big{)}\) is satisfied by the Matern family (Abramowitz and Stegun, 1948). The proposition holds for any irregular spatial design in \(\mathbb{R}^{2}\) meeting the restriction that the locations are separated by a minimum distance. As the sample size \(n\) grows, this is essentially equivalent to the increasing-domain paradigm commonly adopted in spatial asymptotics, since parameters are not identifiable if data are collected densely in a bounded spatial domain (Zhang, 2004).
Proposition 1 describes the case where the true covariance structure is known. In that case, it is possible to directly use the inverse of the covariance matrix as the working precision matrix in the GLS loss. However, this is often infeasible for multiple reasons. First, the true covariance parameters are usually unknown, and the working covariance matrix will typically use different (estimated) parameter values. Computationally, the GLS loss using the full Matern GP covariance matrix requires \(O(n^{3})\) time and \(O(n^{2})\) storage, which are prohibitive even for moderate \(n\). The next proposition provides a more pragmatic result, proving consistency of NN-GLS for data generated from a Matern GP when using a working precision matrix derived from NNGP (as described in Section 4) with parameter values different from the truth.
**Proposition 2**.: Consider data generated as in Proposition 1 from a Matern GP with parameters \(\mathbf{\theta}_{0}=(\sigma_{0}^{2},\phi_{0},\nu_{0},\tau_{0}^{2})\) at locations separated by a distance \(h>0\). Let \(\mathbf{Q}\) be the NNGP precision matrix based on a Matern covariance with parameters \(\mathbf{\theta}=(\sigma^{2},\phi,\nu,\tau^{2})\), using neighbor sets of maximum size \(m\) with each location appearing in at most \(M\) neighbor sets. Then there exists some \(K>0\) such that for any \(\phi>K\), the NN-GLS estimator \(\widehat{f}_{n}=\text{argmin}\{\mathcal{L}_{n}(f):f\in\mathcal{F}_{n}\}\) using \(\mathbf{Q}\) is consistent in the sense \(\|\widehat{f}_{n}-f_{0}\|_{n}\xrightarrow{p}0\).
Proposition 2 proves consistency of NN-GLS for the Matern GP when using NNGP working precision matrices. This is the actual choice of \(\mathbf{Q}\) used in our algorithm, as it can be represented as a GNN, thereby facilitating a scalable implementation, as described in Section 4. The result allows the spatial parameters used in the working covariance to differ from the truth, but requires two assumptions. The spatial decay parameter \(\phi\) for the working precision matrix \(\mathbf{Q}\) needs to be sufficiently large to ensure that \(\mathbf{Q}\) is not close to singular, which would lead to numerical instability. We note that this is not a restriction on the data generation process but only on the working precision matrix, which can be chosen by the user (although the restriction is not required in practice). Similarly, the restriction that each location appears in at most a fixed number of neighbor sets, although typically not enforced in NNGP, is satisfied in all but very pathological designs.
To our knowledge, Propositions 1 and 2 on consistency of NN-GLS are the first consistency results for any machine-learning-based approach to estimating the non-linear mean of a Matern GP under irregular spatial designs. The only similar result in the literature is the consistency of RF-GLS, the GLS-based random forest approach of Saha et al. (2023), for Matern GPs. However, their result relies on a one-dimensional regular lattice design, restricts the true Matern process smoothness to half-integers, and needs the covariates \(\mathbf{X}_{i}\) to be i.i.d. Our result is valid for any irregular spatial design in two-dimensional space, the most typical setting of spatial data collection. The result also holds for any true parameter values of the Matern process and does not impose any assumption on the covariates.
## 6 Simulation study
We conduct extensive simulation experiments to study the advantages of NN-GLS over existing methods in terms of both prediction and estimation. The data are simulated from the non-linear spatial GP model (5) with two choices for the non-linear mean function: \(f_{1}(x)=10\sin(\pi x)\) and \(f_{2}(x)=\dfrac{1}{6}(10\sin(\pi x_{1}x_{2})+20(x_{3}-0.5)^{2}+10x_{4}+5x_{5})\) (the Friedman function, Friedman, 1991). The error process is an exponential GP with a nugget. We consider 3 choices for each spatial parameter -- the variance \(\sigma^{2}\), the spatial decay \(\phi\), and the error variance (nugget) ratio \(\tau^{2}/\sigma^{2}\) -- providing 27 combinations in total. For each setting, we perform 100 independent experiments including data generation, model fitting, and evaluation. In the data-generation process, the covariates \(\mathbf{X}\) and coordinates \(s\) are independently sampled from \(\text{Unif}\big{(}[0,1]^{d}\big{)}\) and \(\text{Unif}\big{(}[0,10]^{2}\big{)}\).
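For concreteness, the data-generation step for one replicate can be sketched as follows; the parameter values shown are one illustrative combination among the 27 settings, and all names are ours rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 5
X = rng.uniform(size=(n, d))                       # covariates ~ Unif([0,1]^d)
s = rng.uniform(0, 10, size=(n, 2))                # coordinates ~ Unif([0,10]^2)

def f2(x):
    # Friedman function scaled by 1/6, as in the simulation design
    return (10 * np.sin(np.pi * x[:, 0] * x[:, 1])
            + 20 * (x[:, 2] - 0.5) ** 2 + 10 * x[:, 3] + 5 * x[:, 4]) / 6

sigma2, phi, tau2 = 1.0, 0.5, 0.1                  # one illustrative parameter setting
dist = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=-1)
Sigma = sigma2 * np.exp(-phi * dist) + tau2 * np.eye(n)   # exponential GP + nugget
eps = np.linalg.cholesky(Sigma) @ rng.standard_normal(n)
y = f2(X) + eps                                    # responses from model (5)
```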
We consider 4 candidate neural network approaches for comparison to NN-GLS -- NN without spatial consideration (NN-nonspatial), kriging after a simple NN (NN-nsp+kriging), NN using the spatial coordinates as two additional inputs (NN-latlon), and NN using spline bases as additional inputs (NN-splines). For NN-splines we use the _Deepkriging_ algorithm of Chen et al. (2020). Among these, NN-GLS and NN-nonspatial are capable of both estimation and prediction, while NN-nsp+kriging, NN-latlon, and NN-splines are designed only for prediction.
We first evaluate the estimation performance of NN-GLS and NN-nonspatial in terms of the Mean Integrated Squared Error (MISE) of the estimate \(\widehat{f}\). Figure 2 (a) compares the estimation performance for different choices of the nugget variance. NN-GLS consistently outperforms NN-nonspatial. In addition, Figure 2 (b) shows that NN-GLS has a more pronounced advantage over the non-spatial neural network when \(\phi\) is small. This is expected, as for small \(\phi\) there is strong spatial correlation in the data, so the performance of NN-nonspatial suffers on account of ignoring this spatial information. The deterioration of NN-nonspatial relative to NN-GLS is smaller for large \(\phi\), as there is only weak spatial correlation in the data.

Figure 2: Comparison between methods on estimation of the mean function \(f_{2}\).
We compare prediction performance using the Relative Mean Squared Error (RMSE) on the test set, obtained by standardizing the MSE by the variance of the response so that the quantity can be compared across different experiments. In Figure 3, we present the prediction results from (a) a correctly specified model, with the data generated as in (5) using the exponential GP, and (b) a model misspecified for NN-GLS, where the spatial dependence comes not from a GP but from a fixed smooth spatial surface (see details in Section S2.6). The non-spatial NN, unsurprisingly, offers the worst prediction performance, on account of not using spatial information in either estimation or prediction. However, NN-nsp+kriging, which uses the mean estimates from NN-nonspatial but performs kriging on the residuals, seems to consistently perform better than NN-splines and NN-latlon. This demonstrates the limitation of using just the spatial coordinates or some spatial basis functions as additional features in neural networks. The choice and the number of these basis functions may need to be tuned carefully to optimize performance for specific applications. NN-GLS circumvents the need to introduce basis functions by parsimoniously modeling the spatial dependence directly through the GP covariance, as is done in geostatistical linear models, and performs better than these methods under both settings.

Figure 3: (a) Prediction performance comparison when the mean function is \(f_{0}=f_{2}\) and the spatial error is randomly generated across repetitions; (b) prediction performance comparison when the mean function is \(f_{0}=f_{1}\) and the spatial error is generated from a fixed surface.
Section S2 of the Supplementary materials presents all the other simulation results, including estimation and prediction performance comparisons for both choices of mean functions and all choices of the parameters (Sections S2.1 and S2.2). NN-GLS consistently outperforms NN-nonspatial for estimation of the non-linear covariate effect, and is the best or competitive with the best method for prediction. In Section S2.3, we investigate whether the estimation of the spatial parameters has an impact on the performance of NN-GLS by comparing it to _NN-GLS (oracle)_, which uses the true spatial parameters. We observe that the performance of NN-GLS is nearly identical to that of NN-GLS (oracle), since it provides an accurate estimate of the spatial parameters. NN-GLS also performs well for a higher-dimensional mean function (of 15 covariates) (Section S2.4). We finally assess the robustness of NN-GLS to model misspecification, including misspecification of the GP covariance (Section S2.5) and complete misspecification of the spatial dependence, i.e., when it is not generated from a GP (Section S2.6). For both estimation and prediction, NN-GLS performs best or comparably to the best method in both cases of misspecification.
Theoretically, NN-GLS has linear running time owing to the NNGP covariance and the GNN framework (see Section 4). Figure 4 plots running times against sample sizes on a log-log scale for the following components of the NN-GLS algorithm: 't-NN-GLS-train' represents the neural network training step; 't-spatial-est' represents the step of estimating the spatial parameters; and 't-total' gives the total run time. We find that the slopes of the curves are close to 1 (dashed lines), corresponding to linear run times. This verifies the theoretical computational complexity.
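This check amounts to a simple log-log regression of run time on sample size; a slope near 1 indicates linear scaling. With hypothetical timings in place of the measured ones, it reads:

```python
import numpy as np

# hypothetical run times (seconds) at increasing sample sizes
ns = np.array([2_000, 5_000, 10_000, 20_000, 50_000])
t_total = np.array([1.1, 2.9, 5.8, 11.5, 29.4])

# slope of log(t) versus log(n)
slope = np.polyfit(np.log(ns), np.log(t_total), 1)[0]
print(f"empirical scaling exponent: {slope:.2f}")   # close to 1 => linear time
```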
## 7 Real data example
Figure 4: Log running times for different components of the NN-GLS algorithm as functions of the (log) sample size when the mean functions are (a) \(f_{1}\) and (b) \(f_{2}\). The dashed lines represent linear running time.

The real data example considered here is a spatial analysis of PM\({}_{2.5}\) (fine particulate matter) data in the continental United States. The regulatory PM\({}_{2.5}\) data come from the US Environmental Protection Agency's (EPA) Air Quality System (AQS). The AQS collects daily air quality data through widely distributed monitors and validates it through a quality assurance procedure. Since PM\({}_{2.5}\) is known to have prominent spatial patterns, we use NN-GLS, modeling the PM\({}_{2.5}\) concentrations across the U.S. as the correlated response and using meteorological variables as covariates. The meteorological data are obtained from the National Centers for Environmental Prediction's (NCEP) North American Regional Reanalysis (NARR) product, which generates reanalyzed data for temperature, wind, moisture, soil, and dozens of other parameters, with a spatial resolution of about \(32\times 32\) km. Following a similar analysis in Chen et al. (2020), we consider the daily PM\({}_{2.5}\) data on June 5th, 2019 for this analysis (other dates are considered in the Supplementary materials). We have PM\({}_{2.5}\) concentrations (\(\mu g/m^{3}\)) from 841 stations across the country and six meteorological variables provided at 7706 grid cells from NARR. The six variables are: precipitation accumulation, air temperature, pressure, relative humidity, and west and north wind speed. Since the coordinates of the two data sets differ, the spatial resolution of the NARR data is retained and the PM\({}_{2.5}\) data are matched onto the grid by averaging within each grid cell. Grid cells without any EPA monitor are removed, leaving 604 data points for the downstream analysis.
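A minimal sketch of this grid-matching step is given below, with synthetic stand-ins for the station records and a 32 km cell size mimicking the NARR resolution:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# synthetic stand-ins for station records with projected coordinates in km
stations = pd.DataFrame({
    "x_km": rng.uniform(0, 4000, 841),
    "y_km": rng.uniform(0, 2500, 841),
    "pm25": rng.gamma(4.0, 2.0, 841),
})
cell = 32.0                                         # NARR grid resolution in km
# assign each station to a grid cell and average PM2.5 within the cell
stations["ix"] = (stations["x_km"] // cell).astype(int)
stations["iy"] = (stations["y_km"] // cell).astype(int)
gridded = stations.groupby(["ix", "iy"], as_index=False)["pm25"].mean()
# grid cells without any EPA monitor simply never appear in `gridded`
```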
Figure 5 (a) shows the PM\({}_{2.5}\) distribution on that date (smoothed by inverse-distance-weighting interpolation), as well as the distribution of the nationwide EPA monitors (the orange spots). The spatial nature of PM\({}_{2.5}\) is evident from the map. We consider the same 5 methods from Section 6: NN-nonspatial, NN-GLS, NN-latlon, NN-nsp+kriging, and NN-splines. To evaluate the prediction performance, we randomly take 80% of the data as a training set and the rest as a testing set, train the model, and calculate the RMSE of the prediction on the testing set. The procedure is repeated 100 times. The performance of each method is shown in Figure 5 (b). We find that NN-GLS has the lowest average RMSE, with NN-splines being the next best. This trend is consistent for other choices of dates and for different ways of splitting the data into train and test sets (see Section S3 of the Supplementary materials).

Figure 5: PM\({}_{2.5}\) data analysis.
As discussed before, unlike most of the other methods, which only offer predictions, NN-GLS provides a direct estimate of the effect of the meteorological covariates on PM\({}_{2.5}\), specified through the mean function \(f(\mathbf{X})\). However, \(\mathbf{X}\) is six-dimensional in this application, precluding any convenient visualization of the function \(f(\mathbf{X})\). Hence, we use partial dependency plots (PDP), a common tool in machine learning for visualizing the marginal effect of one or two features on the response. In Figure 6, we present the PDPs of PM\({}_{2.5}\) on temperature and wind. We see clear non-linear effects in both, thereby demonstrating the need to move beyond traditional linear spatial models and consider geostatistical models with non-linear means, like NN-GLS, for these types of spatial analysis.
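A partial dependency plot requires only the fitted mean function. A minimal sketch of the computation, with `model_predict` standing in for the fitted NN-GLS mean, is:

```python
import numpy as np

def partial_dependence(predict, X, j, grid):
    """1-d partial dependence of `predict` on feature j: for each grid value v,
    feature j of every observation is set to v and predictions are averaged
    over the remaining (marginal) features."""
    pdp = []
    for v in grid:
        Xv = X.copy()
        Xv[:, j] = v
        pdp.append(predict(Xv).mean())
    return np.array(pdp)

# usage sketch (names are placeholders):
# grid = np.linspace(X[:, j].min(), X[:, j].max(), 50)
# pdp_temperature = partial_dependence(model_predict, X, j=1, grid=grid)
```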
Figure 6: Partial dependency plots showing the marginal effects of temperature and vertical (north) wind.
## 8 Discussion
In this work, we showed how neural networks can be embedded directly within traditional geostatistical Gaussian process models, creating a hybrid machine-learning statistical modeling approach that balances parsimony and flexibility. Compared to existing renditions of NN for spatial data, which need to create and curate many spatial basis functions as additional features, NN-GLS simply encodes spatial information explicitly and parsimoniously through the GP covariance. It reaps all the benefits of the model-based framework, including separation of the non-spatial and spatial structures into the mean and the covariance, use of the GLS loss to account for spatial dependence while estimating the non-linear mean with neural networks, and prediction at new locations via kriging.
We show that NN-GLS using Nearest Neighbor Gaussian Process models can be represented as a graph neural network. The resulting GLS loss is equivalent to adding a graph convolution layer to the output layer of an OLS-style neural network. This ensures that the computational techniques used in standard NN optimization, like mini-batching and back-propagation, can be adapted for NN-GLS, resulting in a linear-time algorithm. Also, kriging predictions can be obtained entirely within the GNN architecture via graph convolution and deconvolution.
As anticipated in our original motivation, NN-GLS should be more efficient than NN, in analogy with the Gauss-Markov theorem's comparison of GLS and OLS for linear models. Both our simulation and real-data studies under various settings agree with this theoretical expectation and illustrate promising applications of our method.
In theory, we prove general results on the existence and consistency of neural networks using a GLS loss for spatial processes. We show that the approach is consistent for estimating the non-linear mean of data generated from a Matern GP observed over an irregular set of locations. There is a gap between the theoretical setup and the actual implementation of NN-GLS, as the theory relies on a restricted class of neural networks and does not consider steps like mini-batching and backpropagation used in practice. However, even in a non-spatial context, this gap between the practice and theory of NN is yet to be bridged. To the best of our knowledge, we provide the first theoretical results for any neural network approach under spatial dependency. Even though the assumptions may not be the most general in practice, the theoretical results provide a valuable understanding of the algorithm and guarantee its effectiveness in most cases, consistent with our observations in simulations and the real data analysis.
The connection between NN-GLS and GNN is of independent importance. It demonstrates how GNN, with spatially-informed convolution weights, can be used for irregular spatial data, as it is equivalent to a GP model with a neural network mean and NNGP covariance. Part of our future work will be to extend this framework, considering more general graph deconvolution layers, and to other types of irregular spatial data where graphical models are already in use, like multivariate spatial outcomes using inter-variable graphs (Dey et al., 2022) or areal data graphs (Datta et al., 2019). We also primarily focused on modeling the mean as a function of the covariates using a rich non-linear family, i.e., the neural network class, while using stationary covariances to model the spatial structure. However, non-stationarity can be easily accommodated in NN-GLS, either by including a few low-order (slowly varying) basis functions in the mean or by using the GLS loss with a non-stationary covariance function. For example, Zammit-Mangion et al. (2021) proposed rich classes of non-stationary covariance functions using transformations of the space modeled with neural networks. In the future, we will explore extensions of NN-GLS to accommodate such non-stationary GP covariances. We will also develop an efficient, publicly available, user-friendly software package to make NN-GLS broadly accessible for the analysis of spatial data.
## Acknowledgement
This work was partially supported by National Science Foundation (NSF) Division of Mathematical Sciences grant DMS-1915803 and National Institute of Environmental Health Sciences (NIEHS) grant R01 ES033739.
|
2305.08174 | ReSDF: Redistancing Implicit Surfaces using Neural Networks | This paper proposes a deep-learning-based method for recovering a signed
distance function (SDF) of a given hypersurface represented by an implicit
level set function. Using the flexibility of constructing a neural network, we
use an augmented network by defining an auxiliary output to represent the
gradient of the SDF. There are three advantages of the augmented network; (i)
the target interface is accurately captured, (ii) the gradient has a unit norm,
and (iii) two outputs are approximated by a single network. Moreover, unlike a
conventional loss term which uses a residual of the eikonal equation, a novel
training objective consisting of three loss terms is designed. The first loss
function enforces a pointwise matching between two outputs of the augmented
network. The second loss function leveraged by a geometric characteristic of
the SDF imposes the shortest path obtained by the gradient. The third loss
function regularizes a singularity of the SDF caused by discontinuities of the
gradient. Numerical results across a wide range of complex and irregular
interfaces in two and three-dimensional domains confirm the effectiveness and
accuracy of the proposed method. We also compare the results of the proposed
method with physics-informed neural networks approaches and the fast marching
method. | Yesom Park, Chang hoon Song, Jooyoung Hahn, Myungjoo Kang | 2023-05-14T14:54:19Z | http://arxiv.org/abs/2305.08174v1 | # _ReSDF_: Redistancing Implicit Surfaces using Neural Networks
###### Abstract
This paper proposes a deep-learning-based method for recovering a signed distance function (SDF) of a given hypersurface represented by an implicit level set function. Using the flexibility of constructing a neural network, we use an augmented network by defining an auxiliary output to represent the gradient of the SDF. There are three advantages of the augmented network: (i) the target interface is accurately captured, (ii) the gradient has a unit norm, and (iii) two outputs are approximated by a single network. Moreover, unlike a conventional loss term which uses a residual of the eikonal equation, a novel training objective consisting of three loss terms is designed. The first loss function enforces a pointwise matching between the two outputs of the augmented network. The second loss function, leveraging a geometric characteristic of the SDF, imposes the shortest path obtained by the gradient. The third loss function regularizes a singularity of the SDF caused by discontinuities of the gradient. Numerical results across a wide range of complex and irregular interfaces in two- and three-dimensional domains confirm the effectiveness and accuracy of the proposed method. We also compare the results of the proposed method with physics-informed neural network approaches and the fast marching method.
keywords: Signed distance function; Level set function; Reinitialization; Deep learning; Eikonal equation
## 1 Introduction
The signed distance function (SDF) to a hypersurface \(\Gamma\subset\mathbb{R}^{n}\), which is the distance to \(\Gamma\) in the outer region and the negative of the distance to \(\Gamma\) in the inner region, has been crucial in various fields, ranging from computational fluid dynamics [1; 2; 3] to image segmentation [4; 5; 6], 3D shape reconstruction from scattered point data [7; 8; 9], architectural geometry [10; 11], and robotic navigation [12; 13]. After being devised by Osher and Sethian [14], the level set method, which represents \(\Gamma\) as the zero level set of a continuous function \(\phi:\mathbb{R}^{n}\rightarrow\mathbb{R}\), has provided a numerical and theoretical paradigm for evolving hypersurfaces. To facilitate geometric features such as the normal vector and the mean curvature of the interface, and to reduce numerical instability, it is preferable for the level set function to be neither too flat nor too steep near its zero contours. Reinitializing it as the SDF has been a common numerical treatment in modelling the motion of dynamic interfaces [15; 16; 17; 18] and shape optimization [19; 20; 21; 22].
There have been several numerical methods to re-distance a given level set function \(\phi\). One of the most prominent is built on the fact that the SDF is a solution of partial differential equations (PDEs) [23; 24]. Fast marching methods (FMMs) [25; 26; 27; 28; 29] and fast sweeping methods [30; 31; 32] recover the SDF as the viscosity solution to the eikonal equation. Sussman et al. [33] reformulated the eikonal equation as a pseudo-time-dependent nonlinear hyperbolic PDE. This approach is known to be more suitable than directly solving the eikonal equation in the case of evolving interfaces. Since the SDF in [33] is obtained as the stationary solution, the method is time-consuming and requires a large number of iterations depending on the CFL restriction. A more serious issue is that the zero level set fails to be maintained. Lee et al. [34] propose a fast method using the Hopf-Lax formula of the Hamilton-Jacobi equation. It is shown in [35] that the solution of the Hamilton-Jacobi equation in [34] is not exactly the SDF. Another approach [36] employs Varadhan's distance function [37]. By the Hopf-Cole formula, it can be transformed into a regularized eikonal equation with an artificial viscosity term.
Inspired by the tremendous success of deep learning in diverse machine learning tasks, such as image classification [38; 39; 40; 41; 42] and density estimation [43; 44; 45; 46], the use of neural networks for solving PDEs has begun to attract significant attention in recent years. Pioneering studies [47; 48] incorporate physical principles into neural networks by directly constructing a loss function from the residual of the PDE and the errors of boundary or initial conditions. Building upon these earlier works, Raissi et al. [49] revisited them using modern computational tools and provided a framework named physics-informed neural networks (PINNs). The advantage of PINNs is that they can be readily adapted to various problems [50; 51; 52] and can treat the PDE in a fully mesh-free and time-continuous manner. However, they suffer from a challenging optimization landscape [53; 54], and it is difficult for them to learn many multi-scale or complex PDE systems [55; 56]. Another line of work involves neural operators [57; 58; 59], which unveil physical systems from data by learning an implicit solution operator that maps boundary or initial conditions to solutions. They have shown promise in learning complex PDEs [60; 61], but the requirement for large amounts of available data limits their application to many practical problems. In addition, hybrid methods [62; 63; 64; 65] that combine deep learning with well-grounded numerical methods have been studied.
In this paper, we propose a novel method, named _ReSDF_, for reconstructing the SDF from a given implicit level set function \(\phi\) whose zero level set is an interface \(\Gamma\); see Figure 1. By exploiting the flexibility of network design and optimization objectives, we compute the SDF through two major ideas. First, inspired by variable-splitting schemes [66; 67], we introduce an augmented neural network that parametrizes the gradient of the SDF as an auxiliary variable while keeping the number of network parameters essentially unchanged. The network is designed so that the approximated SDF accurately distinguishes the interface from the inner and outer regions and the gradient has a unit norm. Second, we propose a training objective consisting of three loss terms. The first loss function comes from the splitting method and enforces matching between the gradient of the estimated SDF and the auxiliary gradient. The second loss function imposes the geometric fact that the gradient of the SDF defines the shortest path to the interface. The third loss function is devised to alleviate the non-uniqueness of the gradient caused by multiple shortest paths at singular points of the SDF. We also provide a theoretical validation of the proposed objectives for reinitializing a level set function. Numerical results confirm that the proposed loss functions, in conjunction with the designed neural network, significantly improve the accuracy compared to the existing PINN approach. _ReSDF_ also accurately approximates the SDF for complex and irregular interfaces without tuning sensitive hyper-parameters. The capability of the model is also tested for estimating the distance function in three-dimensional space without increasing the number of network parameters.

Figure 1: Concept of the level set reinitialization. The interface \(\Gamma\) is presented by black solid lines. The left is an iso-contour plot of the given level set function \(\phi\) and the right shows the iso-contours of the SDF after the reinitialization of \(\phi\).
The rest of the paper is organized as follows. In Section 2, we present the formulation of the problem and a brief discussion about prior works. In Section 3, we introduce the proposed ReSDF by describing the augmented network and the objective function in detail. Numerical experiments are presented in Section 4 to demonstrate the effectiveness and accuracy of ReSDF, followed by some concluding remarks given in Section 5.
## 2 Previous Works
Let \(\Omega\subset\mathbb{R}^{n}\) be a domain and \(\Gamma\subset\Omega\) be a compact hypersurface implicitly represented by a zero level set of a continuous level set function \(\phi:\Omega\rightarrow\mathbb{R}\). The hypersurface \(\Gamma\) divides \(\Omega\) into two disjoint open subsets: the outer region \(\Omega^{+}=\left\{\mathbf{x}\in\Omega\mid\phi\left(\mathbf{x}\right)>0\right\}\) and the inner region \(\Omega^{-}=\left\{\mathbf{x}\in\Omega\mid\phi\left(\mathbf{x}\right)<0\right\}\) satisfying \(\Omega\setminus\Gamma=\Omega^{+}\sqcup\Omega^{-}\). From the given function \(\phi\), the goal is to find the signed distance function (SDF) \(u:\Omega\rightarrow\mathbb{R}\) that satisfies
\[u\left(\mathbf{x}\right)=\begin{cases}d\left(\mathbf{x},\Gamma\right)&\text{ in }\Omega^{+}\\ 0&\text{on }\Gamma\\ -d\left(\mathbf{x},\Gamma\right)&\text{in }\Omega^{-},\end{cases} \tag{1}\]
where \(d\left(\mathbf{x},\Gamma\right)=\min\limits_{\mathbf{y}\in\Gamma}\parallel \mathbf{x}-\mathbf{y}\parallel\) denotes the standard Euclidean distance function to \(\Gamma\). It is a unique viscosity solution to the eikonal equation [68]
\[\parallel\nabla u\parallel =1, \tag{2}\] \[\operatorname{sgn}\left(u\right) =\operatorname{sgn}\left(\phi\right), \tag{3}\]
where \(\phi\) is the given level set function and \(\operatorname{sgn}\) is the signum function, which takes either \(1\), \(0\), or \(-1\) for points in \(\Omega^{+}\), \(\Gamma\), or \(\Omega^{-}\), respectively.
The variational approaches in [69; 70; 71] approximate the distance function by minimizing an energy functional related to the eikonal equation (2):
\[\mathcal{E}\left(u\right)=\int_{\Omega}\left(\left\|\nabla u\right\|-1\right)^ {2} \tag{4}\]
in conjunction with the Dirichlet boundary condition \(u\left(\Gamma\right)=0\) as a constraint relaxed by a penalty term:
\[\min\limits_{u}\,\mathcal{E}\left(u\right)+\int_{\Gamma}\left|u\right|. \tag{5}\]
Several efforts have also been made to improve the ill-posedness or convergence of the variational problem (4), including penalty methods introducing external energy functionals for shifting the zero level set toward the target interface [6] and for preserving the shape of the free interface [72], and effective splitting schemes [36; 73] using the alternating direction method of multipliers [74].
As neural PDE surrogates have proliferated as an impactful area of research, several efforts have been made to reconstruct the SDF using neural networks. Prior studies mostly resort to the steady eikonal equation to find the SDF. Lichtenstein et al. [75] propose a hybrid method that integrates neural networks into the FMM [25]. They replace the local numerical solver in the FMM with a neural network trained from data. The accuracy of the numerical solution is improved by leveraging the expressive power of neural networks. Since the approach requires a wealth of training data with true distance values, it may be difficult to use in applications where many true distances are hard to obtain. The performance relies on the amount of available data and may degrade outside the data on which the network is trained. Another approach directly uses a neural network to parametrize the SDF. Gropp et al. [8] adopt the PINN approach to learn the SDF from an unorganized point cloud. They directly parametrize the SDF of the given point cloud with a neural network by minimizing an eikonal-embedded loss function.
Recent works [76; 77] leverage a framework of PINN [49]. Similar to the variational approaches (4), they convert the problem of solving the eikonal equation (2) into an optimization problem in which the loss function embeds the knowledge of (2):
\[\mathcal{L}_{Eik}\left(\theta\right)=\frac{1}{\mid\mathcal{D}\mid}\sum_{ \mathbf{x}\in\mathcal{D}}\Bigl{(}\|\nabla u_{\theta}\left(\mathbf{x}\right) \|-1\Bigr{)}^{2}+\lambda\mathcal{L}_{R}\left(\theta\right), \tag{6}\]
where \(u_{\theta}\) is a neural network parametrized by \(\theta\), which approximates the solution to (2). The soft penalty term \(\mathcal{L}_{R}\) enforces an additional constraint on the solution, such as weight normalization or boundary conditions. The average is taken over a collection \(\mathcal{D}\) of scattered collocation points, usually chosen by uniform random sampling. The residual of the eikonal equation characterizes the deviation of \(u_{\theta}\) from the SDF. The trained network \(u_{\theta}\left(\mathbf{x}\right)\) serves as an approximation of the solution. Fayolle [76] also suggests an alternative PINN-based approach that relies on \(p\)-Poisson distances [78]. As discussed in the numerical section, we compare the proposed method with the results of the PINN-based approach for irregular and complex interfaces and check its robustness to various initial level set functions \(\phi\).
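For reference, the residual loss (6) can be sketched in PyTorch as follows; the soft penalty \(\mathcal{L}_{R}\) is left abstract, and the exact configuration in [76] may differ:

```python
import torch

def eikonal_loss(u_theta, x, boundary_penalty=None, lam=0.1):
    """PINN-style loss (6): mean squared eikonal residual plus a soft penalty."""
    x = x.clone().requires_grad_(True)
    u = u_theta(x)
    grad_u, = torch.autograd.grad(u.sum(), x, create_graph=True)
    loss = (grad_u.norm(dim=-1) - 1.0).pow(2).mean()
    if boundary_penalty is not None:        # e.g., |u| evaluated on interface samples
        loss = loss + lam * boundary_penalty(u_theta)
    return loss
```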
## 3 Proposed Method
In this section, we propose a learning-based approach to recover the SDF (_ReSDF_) of a given hypersurface implicitly represented by a level set function. In order to increase the expressiveness of the network, we use an augmented network that parameterizes the gradient of the SDF as an auxiliary output while keeping the number of parameters. Moreover, novel objectives are designed to exploit a global property of the SDF and to alleviate its singularities by harnessing its geometric properties.
### Augmented Network Representation
In this section, inspired by variable-splitting methods [66; 67] in optimization, an augmented network structure is considered to separately parametrize the gradient of the SDF as an auxiliary variable. The augmented network has two outputs that satisfy the following characteristics: (i) the primary output \(u_{\theta}\) approximates the SDF and it automatically vanishes on the interface as well as satisfies the sign condition (3) and (ii) the gradient field represented by the auxiliary output \(V_{\theta}\) has a unit norm; see Figure 2.
We parameterize a single network \(N_{\theta}:\mathbb{R}^{n}\rightarrow\mathbb{R}\times\mathbb{R}^{n}\) to output \(\psi_{\theta}\left(\mathbf{x}\right)\in\mathbb{R}\) together with an auxiliary value \(\Psi_{\theta}\left(\mathbf{x}\right)\in\mathbb{R}^{n}\)
\[N_{\theta}\left(\mathbf{x}\right)=\Bigl{(}\psi_{\theta}\left(\mathbf{x} \right),\Psi_{\theta}\left(\mathbf{x}\right)\Bigr{)}. \tag{7}\]
because it is more efficient to approximate the SDF and its gradient using a single common network rather than learning with two individual networks. The first scalar function \(\psi_{\theta}\) and the other vector-valued
component \(\Psi_{\theta}\) are employed to parametrize \(u_{\theta}\) and \(V_{\theta}\), respectively, through the architecture designed in Figure 2. The network \(N_{\theta}\) is realized by a multi-layer fully-connected neural network:
\[N_{\theta}\left(\mathbf{x}\right)=W\left(f_{L}\circ\cdots\circ f_{0} \left(\mathbf{x}\right)\right)+\mathbf{b},\ \mathbf{x}\in\mathbb{R}^{n},\]
where \(L\in\mathbb{N}\) is a given depth, \(W\in\mathbb{R}^{(n+1)\times d_{L}}\) is the weight of the output layer, \(\mathbf{b}\in\mathbb{R}^{n+1}\) is the final bias vector, and the perceptron (also known as a hidden layer) \(f_{\ell}:\mathbb{R}^{d_{\ell-1}}\rightarrow\mathbb{R}^{d_{\ell}}\) is defined by
\[f_{\ell}\left(\mathbf{y}\right)=\sigma\left(W_{\ell}\mathbf{y}+\mathbf{b}_{ \ell}\right),\ \mathbf{y}\in\mathbb{R}^{d_{\ell-1}},\text{ for all }\ell=0,\ldots,L,\]
for \(W_{\ell}\in\mathbb{R}^{d_{\ell}\times d_{\ell-1}}\), \(\mathbf{b}_{\ell}\in\mathbb{R}^{d_{\ell}}\), and a non-linear activation function \(\sigma\). The dimensions \(d_{\ell}\) of the hidden layers are also called the width of the network. As the input to the network is a point \(\mathbf{x}\) in \(\Omega\), the input dimension of the input layer \(f_{0}\) is \(n\). The output layer recovers the \((n+1)\)-dimensional output values using the matrix product between the output of the final hidden layer \(f_{L}\) and \(W\), in addition to the bias vector \(\mathbf{b}\). A shorthand notation \(\theta\) is used for all parameters in the weights \(\{W,W_{0},\cdots,W_{L}\}\) and biases \(\{\mathbf{b},\mathbf{b}_{0},\cdots,\mathbf{b}_{L}\}\). Given the current parameter configuration, the parameters \(\theta\) are successively adapted by minimizing the loss function explained in Section 3.2.
_Representation of SDF._ We aim to adjust the network output \(\psi_{\theta}\) so that the approximated SDF satisfies the sign condition (3). Thanks to the level set representation, the sign of the SDF is easily obtained. Similar to [76], we treat the condition on the sign of the SDF as a hard constraint by parameterizing the primary output as
\[u\left(\mathbf{x}\right)\approx u_{\theta}\left(\mathbf{x}\right)\coloneqq \mathcal{S}\left(\phi\left(\mathbf{x}\right)\right)\left|\psi_{\theta}\left( \mathbf{x}\right)\right|, \tag{8}\]
where the quantity \(\mathcal{S}\left(\phi\left(\mathbf{x}\right)\right)\) is a smoothed sign function of \(\phi\left(\mathbf{x}\right)\):
\[\mathcal{S}\left(y\right)=\gamma\tanh\left(\beta y\right),\ y\in\mathbb{R}\]
where the scaling factor \(\gamma=1.4\approx 1/\tanh 1\) and the smoothing parameter \(\beta=70\) are fixed in all examples. Clearly, the designed neural network ansatz of the SDF automatically satisfies the sign condition, \(\operatorname{sgn}\left(u_{\theta}\right)=\operatorname{sgn}\left(\phi\right)\). In particular, \(u_{\theta}\) vanishes on the target interface \(\Gamma\). Therefore, the zero level set of the approximated SDF accurately preserves the interface without smearing out.
Figure 2: The _ReSDF_ model architecture is visualized. The input \((x,y)\) passes through the shared shallow fully-connected network on the shaded region. The _ReSDF_ parametrizes \(u_{\theta}=\operatorname{sgn}\left(\phi\right)\psi\) as an ansatz of the SDF together with the auxiliary output \(V_{\theta}=\Psi/\mid\Psi\mid\) which approximates the gradient of the SDF.
_Approximation of the gradient field._ In contrast to existing approaches, which treat the unit-norm property of the gradient as a soft constraint, we design the augmented output
\[\nabla u\left(\mathbf{x}\right)\approx V_{\theta}\left(\mathbf{x}\right):\mathbb{R }^{n}\to S^{n-1}\]
to automatically satisfy the unit-norm constraint. To this end, we define a neural network whose output lies on the unit sphere \(S^{n-1}\) by normalizing the auxiliary output \(\Psi_{\theta}\):
\[V_{\theta}\left(\mathbf{x}\right)\coloneqq\frac{\Psi_{\theta}\left(\mathbf{x} \right)}{\mid\Psi_{\theta}\left(\mathbf{x}\right)\mid}. \tag{9}\]
In other words, \(\Psi_{\theta}\) learns only the direction of the gradient of the SDF, and the output is automatically adjusted to unit length by the normalization.
To sum up, the SDF and its gradient are estimated from the output of the augmented network (7) as
\[u\left(\mathbf{x}\right)\approx u_{\theta}\left(\mathbf{x}\right) \coloneqq\mathcal{S}\left(\phi\left(\mathbf{x}\right)\right)\left| \psi_{\theta}\left(\mathbf{x}\right)\right|, \tag{10}\] \[\nabla u\left(\mathbf{x}\right)\approx V_{\theta}\left(\mathbf{x}\right) \coloneqq\frac{\Psi_{\theta}\left(\mathbf{x}\right)}{\mid\Psi_{ \theta}\left(\mathbf{x}\right)\mid}. \tag{11}\]
The resultant \(u_{\theta}\) vanishes on \(\Gamma\) and \(V_{\theta}\) has unit length. Moreover, it is worth emphasizing that \(u_{\theta}\) and \(V_{\theta}\) share the same weights and biases from the input layer to the last hidden layer. This enables a reduction in the number of parameters and faster training compared to using two separate networks.
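A minimal PyTorch sketch of this augmented network is given below; the layer sizes are placeholders, and the plain absolute value stands in for the smooth approximation \(ABS_{\infty}\) introduced in Section 3.3:

```python
import torch
import torch.nn as nn

class ReSDFNet(nn.Module):
    """Augmented network (7): a single MLP emits psi (scalar) and Psi (n-vector),
    combined into the SDF ansatz (10) and the unit gradient field (11)."""

    def __init__(self, n=2, width=128, depth=4, beta=70.0, gamma=1.4):
        super().__init__()
        layers, d_in = [], n
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.LeakyReLU(0.01)]
            d_in = width
        layers.append(nn.Linear(width, n + 1))           # outputs (psi, Psi) jointly
        self.net = nn.Sequential(*layers)
        self.beta, self.gamma = beta, gamma

    def forward(self, x, phi_x):
        out = self.net(x)
        psi, Psi = out[:, 0], out[:, 1:]
        S = self.gamma * torch.tanh(self.beta * phi_x)   # smoothed sign of phi
        u = S * psi.abs()                                # u = S(phi)|psi|: hard sign constraint
        V = Psi / Psi.norm(dim=-1, keepdim=True)         # V on the unit sphere
        return u, V
```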
### Design of loss functions
The formulation of the loss function is another key ingredient for effective learning with the two outputs of the augmented network (7). In this section, we explain the following objective function consisting of three loss terms:
\[\mathcal{L}(u,V)=\int_{\Omega}\left\|\nabla u\left(\mathbf{x}\right)-V\left( \mathbf{x}\right)\right\|^{2}+\int_{\Omega}\left|\phi\left(\mathbf{x}-u\left( \mathbf{x}\right)V\left(\mathbf{x}\right)\right)\right|^{2}+\int_{\Omega} \left\|V\left(\mathbf{x}\right)-V\left(\mathbf{x}-\eta u\left(\mathbf{x} \right)V\left(\mathbf{x}\right)\right)\right\|^{2}, \tag{12}\]
where \(\eta>0\) is a fixed constant. The main characteristics are summarized:
* The first term enforces the vector matching between the gradient of the primary output and the auxiliary output.
* The second term imposes a global property using the fact that the SDF and its gradient determine the shortest path to the interface.
* The third term alleviates a singularity where the SDF is not differentiable.
In A, it is proved that the SDF is a minimizer of the functional \(\mathcal{L}(u,V)\). The Euler-Lagrange equation for the functional \(\mathcal{L}(u,V)\) is nonlinear, and it is not straightforward to find a minimizer by conventional methods. This highlights one of the main advantages of the deep learning approach: it can minimize such a complex loss function.
_Gradient matching objective._ The gradient matching (GM) objective directly enforces a reduction of the gap between the gradient of the primary output \(u_{\theta}\) and the unit-norm auxiliary output \(V_{\theta}\) in (7):
\[\mathcal{L}_{\text{GM}}\left(\theta\right)=\frac{1}{\mid\mathcal{D}\mid}\sum _{\mathbf{x}\in\mathcal{D}}\left\|\nabla u_{\theta}\left(\mathbf{x}\right)-V _{\theta}\left(\mathbf{x}\right)\right\|^{2}. \tag{13}\]
The gradient of \(u_{\theta}\) in (13) is computed by an automatic differentiation library (autograd) [79], which calculates the exact derivatives of the network. In view of the variable-splitting method, the variational problem (5) is reformulated as
\[\min_{u,\;V}\int_{\Gamma}\left|u\right|+\int_{\Omega}\left(\left\|V\right\|-1 \right)^{2},\;\text{subject to }\nabla u=V.\]
By regularizing the constraint as a penalty term, an unconstrained problem is presented:
\[\min_{u,\;V}\int_{\Gamma}\left|u\right|+\int_{\Omega}\left(\left\|V\right\|-1 \right)^{2}+\lambda\int_{\Omega}\left\|\nabla u-V\right\|^{2}, \tag{14}\]
where \(\lambda>0\) is a regularization parameter. Since \(u_{\theta}\) automatically vanishes on \(\Gamma\) by the proposed network structure in Section 3.1, the first loss term in (14) is no longer needed. The network structure in which \(V_{\theta}\) is designed to have a unit norm allows us to exclude the second term in (14). This leads us to consider \(\mathcal{L}_{\text{GM}}\) (13). In existing PINN methods, which learn the eikonal equation with a single network output, the network must learn the direction of the gradient while simultaneously matching its norm to one. In the matching loss function (13), \(\Psi_{\theta}\) learns the direction of the gradient of the SDF and \(\nabla u_{\theta}\) is matched pointwise to the normalized vector \(V_{\theta}\) of \(\Psi_{\theta}\).
_Shortest path objective._ The shortest path (SP) objective encodes how the gradient of the SDF determines the shortest path to the interface \(\Gamma\):
\[\mathcal{L}_{\text{SP}}\left(\theta\right)=\frac{1}{\mid\mathcal{D}\mid}\sum _{\mathbf{x}\in\mathcal{D}}\left|\phi\left(\mathbf{x}-u_{\theta}\left(\mathbf{ x}\right)V_{\theta}\left(\mathbf{x}\right)\right)\right|^{2}. \tag{15}\]
The loss function \(\mathcal{L}_{\text{SP}}\) specifies a relation between the interface \(\Gamma\) and points away from \(\Gamma\) by using the gradient of the SDF. The following proposition justifies the use of (15) in view of a geometric property of the SDF.
**Proposition 1**.: _[_80, 81_]_ _Let \(\Gamma\subset\Omega\) be a compact hypersurface of a domain \(\Omega\subset\mathbb{R}^{n}\). Suppose \(u:\Omega\rightarrow\mathbb{R}\) is the signed distance function to \(\Gamma\). Then, \(u\) is differentiable except on a set of zero measure. Moreover, for any point \(\mathbf{x}\in\Omega\) where \(u\left(\mathbf{x}\right)\) is differentiable, it satisfies_
\[\mathbf{x}_{\Gamma}=\mathbf{x}-u\left(\mathbf{x}\right)\nabla u\left(\mathbf{ x}\right)\in\Gamma, \tag{16}\]
_where \(\mathbf{x}_{\Gamma}=\arg\min_{\mathbf{y}\in\Gamma}\parallel\mathbf{x}-\mathbf{y}\parallel\)._
The property (16) describes the shortest path from a point \(\mathbf{x}\) to the target interface. It states that a point \(\mathbf{x}\in\Omega\) is connected to a point on the interface, \(\mathbf{x}_{\Gamma}=\arg\min_{\mathbf{y}\in\Gamma}\parallel\mathbf{x}-\mathbf{y}\parallel\), the closest point to \(\mathbf{x}\) on \(\Gamma\). Note that \(\mathcal{L}_{\text{GM}}\) (13) matches a local relation between the gradient of \(u_{\theta}\) and \(V_{\theta}\) pointwise. On the other hand, the shortest path loss function \(\mathcal{L}_{\text{SP}}\) dictates a non-local relation between points and directly enforces \(u_{\theta}\) to satisfy the geometric property along with \(V_{\theta}\). With the help of the given function \(\phi\) whose zero level set represents \(\Gamma\), we have
\[\mathbf{x}-u\left(\mathbf{x}\right)\nabla u\left(\mathbf{x}\right)\in\Gamma \iff\phi\left(\mathbf{x}-u\left(\mathbf{x}\right)\nabla u\left(\mathbf{x} \right)\right)=0, \tag{17}\]
then \(\left|\phi\left(\mathbf{x}-u_{\theta}\left(\mathbf{x}\right)\nabla u_{\theta} \left(\mathbf{x}\right)\right)\right|\) in the loss function (15) is a reasonable choice and train it to approach zero. Note that this loss function requires the derivative of \(u_{\theta}\) with respect to the spatial variable \(\mathbf{x}\) and the computed gradient is multiplied by \(u_{\theta}\) again. It is a challenging optimization because the gradient calculation is deep and the loss landscape is complex. However, as we have \(V_{\theta}\) as an approximation of the gradient of the SDF, we replace \(\nabla u_{\theta}\) with \(V_{\theta}\) in (17). By replacing \(\nabla u_{\theta}\) with \(V_{\theta}\), the chain rule for computing the gradients of the objective (15) is simpler than using \(\nabla u_{\theta}\).
_Regularizing singularity objective_. The regularizing singularity (RS) objective resolves a singularity of the SDF:
\[\mathcal{L}_{\mathrm{RS}}\left(\theta\right)=\frac{1}{\mid\mathcal{D}\mid}\sum_{ \mathbf{x}\in\mathcal{D}}\left\|V_{\theta}\left(\mathbf{x}\right)-V_{\theta} \left(\mathbf{x}-\eta u_{\theta}\left(\mathbf{x}\right)V_{\theta}\left(\mathbf{ x}\right)\right)\right\|^{2}, \tag{18}\]
where \(\eta>0\). At a singular point \(\mathbf{x}\in\Omega\), since there are multiple closest points \(\mathbf{x}_{\Gamma}\) (16) on the interface, the gradient of the SDF at \(\mathbf{x}\) is not unique. In (18), we circumvent the singularity of the SDF by defining \(V_{\theta}\left(\mathbf{x}\right)\) at a singular point \(\mathbf{x}\) as a reliable gradient value near the interface, \(V_{\theta}\left(\mathbf{x}-t\nabla u_{\theta}\left(\mathbf{x}\right)\right)\) for some \(t\). This idea is supported by the following proposition.
**Proposition 2**.: _Let \(\Gamma\subset\Omega\) be a compact hypersurface of a domain \(\Omega\subset\mathbb{R}^{n}\) and \(u:\Omega\rightarrow\mathbb{R}\) the signed distance function to \(\Gamma\). For any point \(\mathbf{x}\in\Omega\) where \(u\) is differentiable, the gradient of \(u\) is constant along the ray \(s\left(t\right)\) emanating from \(\mathbf{x}\) in direction \(\nabla u\left(\mathbf{x}\right)\):_
\[s\left(t\right)=\mathbf{x}-t\,\operatorname{sgn}\left(u\left(\mathbf{x}\right)\right)\nabla u \left(\mathbf{x}\right),\ \forall\ t\in\left[0,\left|u\left(\mathbf{x}\right)\right|\right).\]
The proof is provided in B. In practice, as in \(\mathcal{L}_{\mathrm{SP}}\), we replace \(\nabla u_{\theta}\) with \(V_{\theta}\), and we set \(\eta=0.99\) so that the evaluated point moves close to the target interface, where the gradient value is reliable. The loss function \(\mathcal{L}_{\mathrm{RS}}\) enables accurate and stable training of \(V_{\theta}\) and provides an appropriate approximation of the SDF for various irregular interfaces.
To sum up, the proposed _ReSDF_ optimizes the augmented network through the objective
\[\mathcal{L}_{\mathrm{total}}\left(\theta\right)=\mathcal{L}_{\mathrm{GM}}\left( \theta\right)+\mathcal{L}_{\mathrm{SP}}\left(\theta\right)+\mathcal{L}_{\mathrm{ RS}}\left(\theta\right). \tag{19}\]
It can be extended to a weighted sum of the loss terms, where positive regularization parameters balance the role of each term. We may impose a larger weight on the larger component to accelerate its convergence. A more principled alternative to hand-tuned weights is the use of adaptive regularization parameter methods [82; 53; 83]. However, in contrast to most PINN methods, which are sensitive to the regularization parameters, we experimentally confirm that the objective (19) is transferable across a wide variety of interfaces \(\Gamma\) and level set functions \(\phi\).
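Assuming the network interface sketched in Section 3.1, the three loss terms can be assembled as follows; `phi` is assumed to be a differentiable (torch-based) implementation of the normalized level set function, and freezing the inner evaluation of \(V_{\theta}\) anticipates the implementation note in Section 3.3:

```python
import torch

def resdf_loss(model, x, phi, eta=0.99):
    """Objective (19) = L_GM + L_SP + L_RS for a ReSDFNet-style model."""
    x = x.clone().requires_grad_(True)
    u, V = model(x, phi(x))

    # L_GM (13): match the autograd gradient of u_theta to the auxiliary V_theta
    grad_u, = torch.autograd.grad(u.sum(), x, create_graph=True)
    L_gm = (grad_u - V).pow(2).sum(-1).mean()

    # L_SP (15): the foot point x - u V should lie on the zero level set of phi
    foot = x - u.unsqueeze(-1) * V
    L_sp = phi(foot).pow(2).mean()

    # L_RS (18): V at x should match V evaluated near the interface; the inner
    # evaluation is treated as a fixed function (its gradients are disabled)
    near = x - eta * u.unsqueeze(-1) * V
    with torch.no_grad():
        _, V_near = model(near, phi(near))
    L_rs = (V - V_near).pow(2).sum(-1).mean()

    return L_gm + L_sp + L_rs
```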
### Numerical Considerations
There are a few numerical considerations that clarify the implementation of the proposed method. To prevent the training of the neural network from being biased by the scale of \(\phi\), we consider the following normalization of \(\phi\):
\[\phi_{\mathrm{normalized}}\left(\mathbf{x}\right)=\frac{\phi\left(\mathbf{x} \right)}{\sqrt{\max_{\Omega}\left(\phi\right)\max_{\Omega}\left(-\phi\right)}},\]
where the maximum is computed over points in \(\Omega\). The normalization balances the values of \(\phi\) over the entire computational domain \(\Omega\) in the objective \(\mathcal{L}_{\mathrm{SP}}\) and enables the model to recover the SDF even from a strongly distorted level set function \(\phi\). To remove the singularity caused by the absolute value function in (8), we use a smooth approximation. For the point \(x_{0}\) at which \(\mathcal{S}^{\prime}\left(x_{0}\right)=0.6\max_{x}\mathcal{S}^{\prime}\left(x\right)\), we define the quadratic approximation
\[ABS_{\infty}\left(x\right)\coloneqq\begin{cases}\alpha x^{2},&\text{if}\ \left|x\right|\leq x_{0}\\ \left|x\right|+q,&\text{if}\ \left|x\right|>x_{0},\end{cases}\]
where \(\alpha\) is chosen such that \(\left(\mathcal{S}\cdot ABS_{\infty}\right)^{\prime}\left(x_{0}\right)=1\), and \(q\) is taken so that \(ABS_{\infty}\) is continuous at \(x_{0}\). Moreover, we adopt the leaky rectified linear unit (LeakyReLU) activation function:
\[\text{LeakyReLU}\left(x\right)=\begin{cases}x&\text{if}\ x\geq 0,\\ 0.01x&\text{otherwise},\end{cases}\]
which can better represent non-smooth functions.
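A sketch of the smooth absolute value is shown below; the constant \(q\) is computed from \(\alpha\) and \(x_{0}\) by the continuity condition, while \(\alpha\) and \(x_{0}\) themselves are determined by the conditions stated above and passed in as arguments:

```python
import torch

def abs_inf(x, x0, alpha):
    """Smooth approximation of |x| used in (8): quadratic for |x| <= x0."""
    q = alpha * x0 ** 2 - x0          # chosen so both branches meet at |x| = x0
    return torch.where(x.abs() <= x0, alpha * x ** 2, x.abs() + q)
```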
The second term of the loss function \(\mathcal{L}_{\text{RS}}\) has a complex form in which the network is composed with itself. This causes practical difficulties in optimization, since the loss function contains a stacked network. In the implementation, we optimize \(\mathcal{L}_{\text{RS}}\) as follows:
\[\mathcal{L}_{\text{RS}}\left(\theta\right)=\frac{1}{\mid\mathcal{D}\mid}\sum_ {\mathbf{x}\in\mathcal{D}}\left\|V_{\theta}\left(\mathbf{x}\right)-V_{\hat{ \theta}}\left(\mathbf{x}-\eta u_{\theta}\left(\mathbf{x}\right)V_{\theta} \left(\mathbf{x}\right)\right)\right\|^{2},\]
where we do not update the network \(V_{\hat{\theta}}\), disabling its gradient calculation. Then, the computational graph is no longer associated with the parameters of \(V_{\hat{\theta}}\), and it is regarded as a fixed function in the loss.
The SDF is Lipschitz continuous with Lipschitz constant \(1\). In the multilayer perceptron approximating \(u_{\theta}\), the activation function and the weights \(W_{\ell}\) determine the Lipschitz constant of the approximated SDF. Since we use LeakyReLU, whose Lipschitz constant is \(1\), as the activation function, we can obtain the desired condition by adjusting the weights. A commonly used method to enforce the Lipschitz condition is _weight clipping_ [84]. It clamps the weights \(W_{\ell}\) to a bounded box \(W_{\ell}\in\left[-M,M\right]^{d_{\ell}\times d_{\ell-1}}\) after each gradient update. In the implementation, the clipping parameter \(M=0.1\) is used.
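A single optimization step with weight clipping might look as follows, where `resdf_loss` refers to the loss sketch above and the rest of the training loop is assumed:

```python
import torch

def train_step(model, optimizer, x, phi, M=0.1):
    """One Adam step followed by weight clipping for Lipschitz control."""
    optimizer.zero_grad()
    loss = resdf_loss(model, x, phi)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if "weight" in name:
                p.clamp_(-M, M)       # clamp W_l to [-M, M] after each update
    return loss.item()
```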
## 4 Numerical Results
In this section, we present numerical results of _ReSDF_ on several problems to validate its effectiveness and accuracy in re-distancing given level set functions. For quantitative measurement, the accuracy of the trained model is measured by the difference from the exact solution \(u\) and the exact gradient \(\mathbf{n}\) using the following norms:
\[\left\|u_{\theta}-u\right\|_{L^{2}} =\frac{1}{\left|\Omega\right|}\left(\int_{\Omega}\left|u_{\theta }\left(\mathbf{x}\right)-u\left(\mathbf{x}\right)\right|^{2}d\mathbf{x} \right)^{1/2},\quad\left\|u_{\theta}-u\right\|_{L^{\infty}}=\max_{\mathbf{x} \in\Omega}\left|u_{\theta}\left(\mathbf{x}\right)-u\left(\mathbf{x}\right) \right|, \tag{20}\] \[\left\|V_{\theta}-\mathbf{n}\right\|_{L^{2}} =\frac{1}{\left|\Omega\right|}\left(\int_{\Omega}\left|V_{ \theta}\left(\mathbf{x}\right)-\mathbf{n}\left(\mathbf{x}\right)\right|^{2}d\mathbf{x} \right)^{1/2},\quad\left\|V_{\theta}-\mathbf{n}\right\|_{L^{\infty}}=\max_{ \mathbf{x}\in\Omega}\left|V_{\theta}\left(\mathbf{x}\right)-\mathbf{n}\left( \mathbf{x}\right)\right|,\]
where \(\left|\Omega\right|\) is the volume of the computational domain \(\Omega\). Since the errors on the interface \(\Gamma\) are crucial when the interface is evolved, we also report the errors \(\left\|V_{\theta}-\mathbf{n}\right\|_{L^{2}}^{\Gamma}\) and \(\left\|V_{\theta}-\mathbf{n}\right\|_{L^{\infty}}^{\Gamma}\), measured only on the interface \(\Gamma\), for a few examples. For qualitative comparison, the results of _ReSDF_ are compared with the existing PINN approach and the first- [25] and second-order [85] fast marching method (FMM), whose implementation builds on an open-source code 1. The PINN method based on [76] uses the residual of the eikonal equation (6) as the loss function. As suggested in [76], a fully connected network of depth \(8\) and width \(512\), with a skip connection from the input to the fourth hidden layer, is used, in which all weights and biases are initialized by the geometric initialization scheme [86]. We optimize the network by the Adam optimizer [87] with a learning rate of \(10^{-4}\) and comply with all other experimental configurations provided in [76]. Note that the results of the PINN approach could potentially be improved, since its performance relies on the specific geometric initialization of network parameters [86] and is sensitive to hyper-parameter selection.
Footnote 1: [https://github.com/scikit-fmm/scikit-fmm](https://github.com/scikit-fmm/scikit-fmm)
For all examples testing _ReSDF_, the Adam optimizer is applied with a learning rate of \(10^{-3}\), which is decayed by a factor of \(0.9\) if the loss does not improve after \(30\) evaluations. We evaluate the models every \(100\) epochs and train them until the learning rate has been decayed \(40\) times. In all numerical experiments, a single NVIDIA RTX 3090 GPU is used.
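A minimal PyTorch sketch of this schedule, assuming a `model` and a `compute_loss` closure (both placeholders), and reading "decayed 40 times" as forty multiplicative decay events:

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.9, patience=30)   # decay by 0.9 after 30 stagnant evaluations

num_decays = 0
while num_decays < 40:
    for _ in range(100):                  # one evaluation every 100 epochs
        optimizer.zero_grad()
        loss = compute_loss(model)
        loss.backward()
        optimizer.step()
    prev_lr = optimizer.param_groups[0]["lr"]
    scheduler.step(compute_loss(model).item())
    if optimizer.param_groups[0]["lr"] < prev_lr:
        num_decays += 1
```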
Figure 3: The graph (left) and iso-contours (right) of level set functions, a stiff shape \(\phi_{1}\) (23), an oscillatory shape \(\phi_{2}\) (24), and a discontinuous shape \(\phi_{3}\) (25) are presented.
The computational domains of all examples are \(\Omega^{\text{3d}}=\left[-1,1\right]^{3}\) or \(\Omega=\Omega^{\text{3d}}\cap\left(\mathbb{R}^{2}\times\left\{0\right\}\right)\). The collocation points evenly distributed on the domain \(\Omega^{\text{3d}}\) are given by
\[\Omega_{n}^{\text{3d}}=\left\{\left(-1+ih,-1+jh,-1+kh\right):i,j,k\in\left\{0, \ldots,n-1\right\},\,h=\frac{2}{n-1}\right\}. \tag{21}\]
Accordingly, \(\Omega_{n}=\Omega_{n}^{\text{3d}}\cap\left(\mathbb{R}^{2}\times\left\{0\right\}\right)\). All interfaces and given level set functions used in the numerical examples are listed below (a short code sketch of the grid and the brute-force SDF reference appears after the list). In case the closed form of the exact SDF is not known for a given interface, we approximate it by sampling a finite number of points, around 7000, on the interface and taking the minimum Euclidean distance to them.
1. A circle centered at the origin with the radius \(r=\frac{1}{2}\) is used in Example 1. The explicit form of the exact SDF is an equation of the cone: \[u\left(x,y\right)=\sqrt{x^{2}+y^{2}}-r.\] (22) Three level set functions are considered in Example 1: * An unbalanced function that is relatively flat and stiff on \(\Omega^{-}\) and \(\Omega^{+}\), respectively; see Figure 3-(a): \[\phi_{1}\left(x,y\right)=e^{x+y}u(x,y).\] (23) * A rapidly oscillating non-monotone function; see Figure 3-(b): \[\phi_{2}\left(x,y\right)=\begin{cases}x^{2}+\frac{3}{2}y^{2}&\text{ if }u(x,y)>0,\\ (u(x,y)-r_{0})^{2}-r_{0}^{2}&\text{ otherwise }.\end{cases}\] (24) * A discontinuous function with a jump on the interface; see Figure 3-(c): \[\phi_{3}\left(x,y\right)=\left(1.2\sin\left(4\pi x\right)\sin\left(4\pi y \right)+2\right)e^{-\left(x^{2}+y^{2}\right)/2}u(x,y).\] (25)
2. A square centered at the origin with the length of the side \(r=1\) is used in Example 2. The level set function whose zero level is the square is given; see the contours in Figure 5-(a): \[\phi_{4}\left(x,y\right)=\frac{1}{2}\max\left\{\mid x\mid-\frac{r}{2},\mid y \mid-\frac{r}{2}\right\}.\] (26)
3. Flower-shaped interfaces with five-fold symmetry represented by the zero level set of the level set function with the polar coordinates \(\left(r,\theta\right)\) are used in Example 3; see the iso-contours in Figures 6-(a) and (b): \[\phi_{5}^{\alpha}\left(x,y\right)=\left(0.5\sin\left(6\pi x\right)\sin\left(6 \pi y\right)+1\right)\left(5r-2-\alpha\cos\left(5\theta\right)\right),\] (27) where \(\alpha=0.5\) and \(1\).
4. A dumbbell-shaped interface represented by the zero level set of the level set function is used in Example 3; see the iso-contours in Figure 6-(c): \[\phi_{6}\left(x,y\right)=10x^{4}\left(2x^{2}-1\right)+y^{2}-\frac{1}{10}.\] (28)
5. A heart-shaped interface represented by the zero level set of the level set function is used in Example 3; see the contours in Figure 6-(d): \[\phi_{7}\left(x,y\right)=\left(\cos\left(x,y\right)+2\left(x+y\right)^{2}+\frac{ 3}{2}\right)\left(2.2\left(y+0.2-x^{\frac{2}{3}}\right)^{2}+1.7x^{2}-0.6\right).\] (29)
6. Multiple circular interfaces, represented by the zero level set of the minimum of the level set functions \(u_{j}\left(x,y\right)=(x-x_{j})^{2}+(y-y_{j})^{2}-r_{j}^{2}\), are used in Example 4 (see the code sketch after this list): \[\phi_{8}\left(x,y\right) =e^{x+y}\min\left\{u_{j}(x,y):j=1,2\right\},\] (30) \[\phi_{9}\left(x,y\right) =e^{x+y}\min\left\{u_{j}(x,y):j=3,4,5\right\},\] (31) where \((x_{1},y_{1},r_{1})=(-0.2,0,0.3)\), \((x_{2},y_{2},r_{2})=(0.2,0,0.3)\), \((x_{3},y_{3},r_{3})=(-0.4,0.3,0.45)\), \((x_{4},y_{4},r_{4})=(0.5,0.3,0.3)\), and \((x_{5},y_{5},r_{5})=(0.3,-0.5,0.4)\); see the iso-contours in Figures 10-(a) and (c).
7. A sphere centered at the origin with the radius \(r=\frac{1}{2}\) is represented by the zero level set of the level set function used in Example 5; see the contours on \(z=0\) in Figure 11-(a): \[\phi_{10}\left(x,y,z\right)=\left(\left(x-1\right)^{2}+\left(y-1\right)^{2}+ \left(z+1\right)^{2}+0.1\right)\left(x^{2}+y^{2}+z^{2}-r^{2}\right).\] (32) Multiple ellipsoidal interfaces, represented by the zero level set of the minimum of the level set functions \(v_{j}(x,y,z)=10(x-x_{j})^{2}+5(y-y_{j})^{2}+(z-z_{j})^{2}-1\), are used in Example 5; see the iso-contours on \(z=0\) in Figure 11-(c): \[\phi_{11}\left(x,y,z\right)=\min\left\{v_{j}(x,y,z):j=1,2\right\},\] (33) where \((x_{1},y_{1},z_{1})=(-0.2,0,0)\) and \((x_{2},y_{2},z_{2})=(0.2,0,0)\).
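Where helpful, the grid of Eq. (21), the brute-force SDF reference described above, and the min-composition of Eqs. (30)-(31) can be sketched in a few lines of NumPy/SciPy; the function names are ours, and the interface point sampler is assumed to be supplied separately:

```python
import numpy as np
from scipy.spatial import cKDTree

def collocation_points_3d(n: int) -> np.ndarray:
    """Evenly spaced collocation points Omega_n^{3d} of Eq. (21) on [-1, 1]^3."""
    side = np.linspace(-1.0, 1.0, n)              # spacing h = 2 / (n - 1)
    X, Y, Z = np.meshgrid(side, side, side, indexing="ij")
    return np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)

def approx_sdf(points: np.ndarray, interface_pts: np.ndarray,
               phi_vals: np.ndarray) -> np.ndarray:
    """Brute-force reference SDF: minimum distance to ~7000 sampled
    interface points, signed by the level set values at `points`."""
    dist, _ = cKDTree(interface_pts).query(points)
    return np.sign(phi_vals) * dist

def phi_multi_circles(x: np.ndarray, y: np.ndarray, circles) -> np.ndarray:
    """Min-composition of Eqs. (30)-(31): the zero level set is the union
    of the circles u_j(x, y) = (x - x_j)^2 + (y - y_j)^2 - r_j^2."""
    u = np.stack([(x - xj) ** 2 + (y - yj) ** 2 - rj ** 2
                  for (xj, yj, rj) in circles])
    return np.exp(x + y) * u.min(axis=0)

# e.g., phi_9 of Eq. (31):
# phi9 = phi_multi_circles(x, y, [(-0.4, 0.3, 0.45), (0.5, 0.3, 0.3), (0.3, -0.5, 0.4)])
```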
The resulting errors do not vary significantly with different batch sizes of training collocation points. Even with a coarse grid, we still obtain accurate predictions. We also observe that the resulting errors decrease as the network width increases. This is a general phenomenon: the expressive power grows with the size of the network.
The robustness to changes in the level set function is examined by using diverse initial level set functions: a stiff shape \(\phi_{1}\) (23), an oscillatory shape \(\phi_{2}\) (24), and a discontinuous shape \(\phi_{3}\) (25). In this case, the model is trained with a fixed width of size 64 on the collocation points \(\Omega_{7}\). The results are compared with those of the PINN approach. The accuracy of the two models is listed in Table 3. The errors of _ReSDF_ are smaller than those of the PINN approach by factors of about \(10^{3}\) and \(10^{2}\) in the \(L^{2}\) and \(L^{\infty}\) norms, respectively. To present the visual differences, we compare the predictions of the trained _ReSDF_ and PINN against the exact SDF in Figure 4. The two methods produce similar results for \(\phi_{1}\); however, the PINN approach fails to predict the SDF for \(\phi_{2}\) and \(\phi_{3}\).
Figure 4: The iso-contours of the results of _ReSDF_ (left) and PINN approach (right) from three level set functions, a stiff shape \(\phi_{1}\) (23), an oscillatory shape \(\phi_{2}\) (24), and a discontinuous shape \(\phi_{3}\) (25) are presented with the exact solution. The red solid curve is the zero level set of the mentioned level set functions.
To assess how good the numerical solution is, we compare the results with the second-order FMM (FMM\({}^{2}\)) in Table 4. The \(L^{2}\) errors of the SDF and its gradient predicted by _ReSDF_ on \(\Omega_{6}\) lie between the \(L^{2}\) errors of FMM\({}^{2}\) on \(\Omega_{6}\) and on \(\Omega_{8}\). This means that the accuracy of _ReSDF_ on a small number of collocation points is comparable to that of FMM\({}^{2}\) on a large number of collocation points, consistent with the findings in Example 1.
Example 3: To demonstrate that the proposed model can be applied to various interfaces, the level set functions \(\phi_{5}^{\alpha}\) (27) with \(\alpha=0.5\) and \(1\), \(\phi_{6}\) (28), and \(\phi_{7}\) (29) are reinitialized; see Figure 6. Due to the rapid change of the gradient field of the level set functions \(\phi_{5}^{1}\) and \(\phi_{6}\), the clipping parameter \(M=5\) is used to improve the representability of \(V_{\theta}\). All other parameters are kept the same as in the other cases.
Throughout the examples in Figure 7, the results are compared with the existing PINN approach. The numerical results verify that the proposed method agrees with the exact SDF better than the PINN approach, which is prone to returning erroneous predictions for the chosen irregular interfaces. Furthermore, the result of _ReSDF_ with the level set function \(\phi_{7}\) (29) on \(\Omega_{6}\) is qualitatively compared with the results of FMM and FMM\({}^{2}\) on \(\Omega_{n}\), \(n=6,\ldots,9\) in Figures 8 and 9, respectively. In the neural network of _ReSDF_, we use width 128 and depth 4. Since the batch size for \(\Omega_{6}\) is small, the learning rate is initialized to \(5\cdot 10^{-4}\). FMM produces reliable results when the mesh is sufficiently refined. Comparing the results in Figure 8, the result of the first-order FMM on \(\Omega_{9}\) is similar to the result of _ReSDF_ trained on the collocation points \(\Omega_{6}\). Similarly, in Figure 9, the result of _ReSDF_ on \(\Omega_{6}\) is comparable to the results of the second-order FMM between \(\Omega_{8}\) and \(\Omega_{9}\).
Example 4: We investigate the ability of _ReSDF_ to approximate the SDF when the interface is a union of several interfaces. Such a complex mixed interface is relevant to practical problems such as the rising of multiple bubbles in multi-phase flows. The zero level sets of the level set functions \(\phi_{8}\) (30) or \(\phi_{9}\) (31) could depict cases of merged bubbles or multiple bubbles in the water, respectively. In Figure 10, predicted SDFs and the exact solutions are presented. Although the SDF has singularities over the computational domain, _ReSDF_ provides accurate interface representations for multiple nested interfaces.
Figure 5: The iso-contours of the level set function \(\phi_{4}\) (26) (left) and the results of _ReSDF_ with \(\phi_{4}\) (right) are presented with the exact solution. The red solid curve is the zero level set of \(\phi_{4}\).
|  | Uniform nodes \(\left\Vert u_{\theta}-u\right\Vert_{2}\) | Uniform nodes \(\left\Vert u_{\theta}-u\right\Vert_{\infty}\) | Random nodes \(\left\Vert u_{\theta}-u\right\Vert_{2}\) | Random nodes \(\left\Vert u_{\theta}-u\right\Vert_{\infty}\) |
| --- | --- | --- | --- | --- |
| \(\phi_{8}\) | \(6.16\cdot 10^{-4}\) | \(1.43\cdot 10^{-2}\) | \(6.99\cdot 10^{-4}\) | \(1.48\cdot 10^{-2}\) |
| \(\phi_{9}\) | \(6.46\cdot 10^{-4}\) | \(1.89\cdot 10^{-2}\) | \(6.01\cdot 10^{-4}\) | \(1.98\cdot 10^{-2}\) |

Table 5: The \(L^{2}\) and \(L^{\infty}\) errors between the exact and predicted SDF with \(\phi_{8}\) (30) and \(\phi_{9}\) (31) are listed. Uniform nodes are fixed by the collocation points \(\Omega_{7}\); random nodes are sampled from the uniform distribution.
Figure 7: The iso-contours of the results of _ReSDF_ (left) and PINN approach (right) from three level set functions, \(\phi_{5}^{\alpha}\) (27) with \(\alpha=0.5\) and \(1\) and \(\phi_{6}\) (28) are presented with the approximated exact solution. The red solid curve is the zero level set of mentioned level set functions.
Figure 8: The result of _ReSDF_ with the level set function \(\phi_{7}\) (29) on \(\Omega_{6}\) is presented and compared to the PINN approach. The results of using FMM on \(\Omega_{n}\), \(n=6,\ldots,9\) are also presented.
Figure 9: Signed distance functions of the zero level set of \(\phi_{7}\) (29) computed by the second-order FMM (FMM\({}^{2}\)) on \(\Omega_{n}\), \(n=6,\ldots,9\) are presented.
Figure 10: The iso-contours of the level set functions \(\phi_{8}\) (30) and \(\phi_{9}\) (31) (left) and the results of _ReSDF_ with \(\phi_{8}\) and \(\phi_{9}\) (right) are presented with the exact solution. The red solid curves are the zero level set of \(\phi_{8}\) or \(\phi_{9}\).
Another strength of the deep-learning-based approach in _ReSDF_ is that scattered collocation points can be used without any methodological modification. We investigate the effect of the deployment of training collocation points with the level set functions \(\phi_{8}\) (30) and \(\phi_{9}\) (31). The uniform collocation points \(\Omega_{7}\) and random points, equal in number to \(\Omega_{7}\) and sampled from the uniform distribution \(\mathcal{U}\left(-1,1\right)\), are used. Running five times with different random seeds, the average values of the \(L^{2}\) and \(L^{\infty}\) errors are reported in Table 5. The results confirm that the errors obtained with the two distributions are of the same order of magnitude. In other words, the performance of the model is not sensitively tied to a uniform distribution of the training collocation points.
Figure 11: The iso-contours of the level set functions \(\phi_{10}\) (32) and \(\phi_{11}\) (33) (left) and the results of _ReSDF_ with \(\phi_{10}\) and \(\phi_{11}\) (right) are presented on the \(z=0\) plane with the exact solution. The red solid curves are the zero level set of \(\phi_{10}\) and \(\phi_{11}\).
whole computational domain and alleviates the singularity of the SDF. We confirm that the proposed method produces accurate and robust results without tunable parameter adjustments through various experimental examples ranging from highly distorted or discontinuous level set functions to complex and irregular interfaces.
## 6 Acknowledgements
This work was supported by the NRF grant [2021R1A2C3010887], the ICT R&D program of MSIT/IITP[1711117093, 2021-0-00077], and the European Union's Horizon 2020 Research and Innovation Programme under the Programme SASPRO 2 COFUND Marie Sklodowska-Curie grant agreement No. 945478.
## Appendix A Theoretical Justification
The following theorem proves that the SDF is the minimizer of \(\mathcal{L}(u,V)\) (12) except on a set of measure zero.
**Theorem 1**.: _Let \(\Gamma\) be a hypersurface in a domain \(\Omega\subset\mathbb{R}^{n}\). The optimal solution to the functional (12) is the SDF to \(\Gamma\), except on a set of measure zero._
Proof.: Let \(u^{*}\) be the optimal solution to the functional (12). For any point \(\mathbf{x}\in\Omega\) and a unit vector \(\mathbf{v}\in\mathbb{R}^{n}\), define a function \(f_{\mathbf{v}}:\mathbb{R}^{+}\rightarrow\mathbb{R}\) by
\[f_{\mathbf{v}}\left(t\right)\coloneqq u^{*}\left(\mathbf{x}-t\mathbf{v} \right).\]
Clearly, it follows that \(f_{\mathbf{v}}\left(0\right)=u^{*}\left(\mathbf{x}\right)\) and \(\left|f_{\mathbf{v}}^{\prime}\left(t\right)\right|\leq 1\), because

\[\left|f_{\mathbf{v}}^{\prime}\left(t\right)\right|=\left|\mathbf{v}\cdot \nabla u^{*}\left(\mathbf{x}-t\mathbf{v}\right)\right|\leq\left\|\mathbf{v} \right\|\cdot\left\|\nabla u^{*}\left(\mathbf{x}-t\mathbf{v}\right)\right\| \leq 1,\ \forall t.\]
Suppose \(f_{\mathbf{v}}\left(T\right)=0\) for some \(\mathbf{v}\in S^{n-1}\) and \(T\in\mathbb{R}^{+}\). Then, we have
\[\left|\frac{f_{\mathbf{v}}\left(T\right)-f_{\mathbf{v}}\left(0\right)}{T-0} \right|\leq 1,\]
Figure 12: Iso-surfaces, cut by the section \(z=0\), of the predicted SDF with \(\phi_{10}\) (32) (left) and \(\phi_{11}\) (33) (right) are presented.
which can be reformulated as
\[\left|f_{\mathbf{v}}\left(0\right)\right|=\left|u^{*}\left(\mathbf{x}\right) \right|\leq T.\] (A.1)
Recall that for the distance function \(d\left(\cdot,\Gamma\right)\), there exists a unit vector \(\mathbf{v}\in S^{n-1}\) such that
\[\mathbf{x}-d\left(\mathbf{x},\Gamma\right)\cdot\mathbf{v}\in\Gamma,\]
which is identical to
\[f_{\mathbf{v}}\left(d\left(\mathbf{x},\Gamma\right)\right)=0.\]
The inequality we deduced in (A.1) leads to
\[\left|u^{*}\left(\mathbf{x}\right)\right|\leq d\left(\mathbf{x},\Gamma\right).\]
However, because \(u^{*}\) is the optimal solution, it satisfies \(\mathbf{x}-u^{*}\left(\mathbf{x}\right)\nabla u^{*}\left(\mathbf{x}\right)\in\Gamma\) for points \(\mathbf{x}\in\Omega\) at which \(u^{*}\) is differentiable. Therefore, it can be written as \(f_{\nabla u^{*}\left(\mathbf{x}\right)}\left(u^{*}\left(\mathbf{x}\right) \right)=0\), and hence
\[d\left(\mathbf{x},\Gamma\right)\leq\left|u^{*}\left(\mathbf{x}\right)\right|.\]
Combining both inequalities, we see that
\[\left|u^{*}\left(\mathbf{x}\right)\right|=d\left(\mathbf{x},\Gamma\right),\]
concluding the proof.
Note that, according to the proof, the SDF can be obtained without the last term in (12) wherever the SDF is differentiable. In practice, we employ \(\mathcal{L}_{RS}\) to facilitate training at singular points, as discussed in Section 3.2.
## Appendix B Proof of Proposition 2
Proof.: For a given point \(\mathbf{x}\in\Omega\), let us define a function \(g_{\mathbf{x}}:\mathbb{R}^{+}\rightarrow\mathbb{R}\) as
\[g_{\mathbf{x}}\left(t\right)=u\left(\mathbf{x}-t\nabla u\left(\mathbf{x} \right)\right).\]
It satisfies
\[\begin{cases}g_{\mathbf{x}}\left(0\right)=u\left(\mathbf{x}\right),\\ g_{\mathbf{x}}\left(u\left(\mathbf{x}\right)\right)=u\left(\mathbf{x}-u\left( \mathbf{x}\right)\nabla u\left(\mathbf{x}\right)\right)=0,\end{cases}\] (B.1)
where the last equation is deduced from the property (16). Also, since the signed distance function has a gradient with unit norm, we get
\[\left|g_{\mathbf{x}}^{\prime}\left(t\right)\right|=\left|\nabla u\left( \mathbf{x}\right)\cdot\nabla u\left(\mathbf{x}-t\nabla u\left(\mathbf{x} \right)\right)\right|\leq 1.\] (B.2)
From (B.1) and (B.2), it follows that
\[\left|g_{\mathbf{x}}^{\prime}\left(t\right)\right|=1,\]
for \(t\) between \(0\) and \(u\left(\mathbf{x}\right)\). This implies that \(\nabla u\left(\mathbf{x}\right)\) and \(\nabla u\left(\mathbf{x}-t\nabla u\left(\mathbf{x}\right)\right)\) are parallel. This concludes
\[\nabla u\left(\mathbf{x}\right)=\nabla u\left(\mathbf{x}-t\,\mathrm{sgn}\left(u \left(\mathbf{x}\right)\right)\nabla u\left(\mathbf{x}\right)\right),\ \forall\,t\in\left[0,\left|u\left(\mathbf{x}\right)\right|\right).\] (B.3)
2307.09375 | CertPri: Certifiable Prioritization for Deep Neural Networks via
Movement Cost in Feature Space | Deep neural networks (DNNs) have demonstrated their outperformance in various
software systems, but also exhibit misbehavior and even result in irreversible
disasters. Therefore, it is crucial to identify the misbehavior of DNN-based
software and improve DNNs' quality. Test input prioritization is one of the
most appealing ways to guarantee DNNs' quality, which prioritizes test inputs
so that more bug-revealing inputs can be identified earlier with limited time
and manual labeling efforts. However, the existing prioritization methods are
still limited from three aspects: certifiability, effectiveness, and
generalizability. To overcome the challenges, we propose CertPri, a test input
prioritization technique designed based on a movement cost perspective of test
inputs in DNNs' feature space. CertPri differs from previous works in three key
aspects: (1) certifiable: it provides a formal robustness guarantee for the
movement cost; (2) effective: it leverages formally guaranteed movement costs
to identify malicious bug-revealing inputs; and (3) generic: it can be applied
to various tasks, data, models, and scenarios. Extensive evaluations across 2
tasks (i.e., classification and regression), 6 data forms, 4 model structures,
and 2 scenarios (i.e., white-box and black-box) demonstrate CertPri's superior
performance. For instance, it significantly improves 53.97% prioritization
effectiveness on average compared with baselines. Its robustness and
generalizability are 1.41~2.00 times and 1.33~3.39 times that of baselines on
average, respectively. | Haibin Zheng, Jinyin Chen, Haibo Jin | 2023-07-18T15:59:37Z | http://arxiv.org/abs/2307.09375v1 | # CertPri: Certifiable Prioritization for Deep Neural Networks via Movement Cost in Feature Space
###### Abstract
Deep neural networks (DNNs) have demonstrated their outperformance in various software systems, but also exhibit misbehavior and even result in irreversible disasters. Therefore, it is crucial to identify the misbehavior of DNN-based software and improve DNNs' quality. Test input prioritization is one of the most appealing ways to guarantee DNNs' quality, which prioritizes test inputs so that more bug-revealing inputs can be identified earlier with limited time and manual labeling efforts. However, the existing prioritization methods are still limited from three aspects: certifiability, effectiveness, and generalizability. To overcome the challenges, we propose _CertPri_, a test input prioritization technique designed based on a movement cost perspective of test inputs in DNNs' feature space. CertPri differs from previous works in three key aspects: (1) _certifiable_ - it provides a formal robustness guarantee for the movement cost; (2) _effective_ - it leverages formally guaranteed movement costs to identify malicious bug-revealing inputs; and (3) _generic_ - it can be applied to various tasks, data, models, and scenarios. Extensive evaluations across 2 tasks (i.e., classification and regression), 6 data forms, 4 model structures, and 2 scenarios (i.e., white-box and black-box) demonstrate CertPri's superior performance. For instance, it significantly improves 53.97% prioritization effectiveness on average compared with baselines. Its robustness and generalizability are 1.41\(\sim\)2.00 times and 1.33\(\sim\)3.39 times that of baselines on average, respectively. The code of CertPri is open-sourced at [https://anonymous.4open.science/r/CertPri](https://anonymous.4open.science/r/CertPri).
Deep neural network, test input prioritization, deep learning testing, movement cost, certifiable prioritization
## I Introduction
Deep neural networks (DNNs) [1] have performed impressive success in many fields, including computer vision [2, 3, 4], natural language processing [5, 6], and software engineering [7, 8], etc. However, like traditional software systems, DNNs are also vulnerable in terms of quality and reliability [9, 10, 11, 12]. Meanwhile, these vulnerabilities could lead to serious losses such as a crash caused by a Google self-driving car [13], and even irreversible disasters such as the deadly car crash caused by the autopilots of Tesla [14] and Uber [15]. Therefore, it is crucial to detect the misbehavior of DNN-based software and ensure DNNs' quality.
Much effort has been put into DNN-based software quality assurance [16, 17, 18, 19, 20, 12]. One of the most appealing methods is test input prioritization [21, 22], which prioritizes test inputs so that more bug-revealing inputs (e.g., misclassified inputs) can be identified earlier with limited time and manual labeling efforts. Moreover, these inputs facilitate the debugging of DNN-based software, which could improve DNNs' quality [23, 24] and reduce their retraining cost [25]. There are several prioritization methods, mainly including four aspects, i.e., coverage-based [26, 12, 20], surprise-based [27, 28, 22], confidence-based [23, 25, 29], and mutation-based [24] methods. The first two methods prioritize test inputs based on DNNs' neuron coverage and surprise-adequacy activation traces, respectively. Confidence-based methods identify bug-revealing inputs by measuring the classifier's output probabilities. Mutation-based methods design a series of mutation operations, and then analyze the mutated output probabilities based on supervised learning. These methods make great progress in identifying bug-revealing inputs earlier, but they still suffer from the following problems.
First, the existing methods are empirical and lack formal guarantees, which makes them vulnerable to malicious attacks, i.e., bug-revealing inputs can be pushed to the back of the ranking when attacked. More specifically, taking neuron activation suppression as an optimization objective, the adversary could craft imperceptible adversarial perturbations [30, 31]. These bug-revealing inputs with perturbations will be prioritized at the back due to low neuron coverage or inadequate activation traces, i.e., coverage-based and surprise-based methods lose their effectiveness. Similarly, confidence-based methods fail to work when the highest probability value is increased [32, 33] by adding well-designed perturbations to inputs. Mutation-based methods include input mutation and model mutation, which are similar to data augmentation [34, 35] and network modification [36, 37], respectively. Once the mutation operations are leaked, the adversary can bypass these operations by crafting malicious bug-revealing inputs, and mutation-based methods become invalid. Therefore, a formal robustness guarantee for certifiable prioritization is required.
Second, almost all methods suffer from either prioritization effectiveness or efficiency issues. For instance, coverage-based methods have been demonstrated to be ineffective and time-costly [23]. Surprise-based methods improve test input prioritization by utilizing more advanced metrics (e.g., surprise adequacy and activation frequency), but are computationally expensive due to more parameter tuning. Confidence-based methods apply output probabilities to perform fast and lightweight prioritization, and their effectiveness is better than the previous two. However, once adversarial [32] or poisoned [38] inputs are injected into the test dataset, their effectiveness drops largely. Mutation-based test input prioritization, PRIMA [24], is the state-of-the-art (SOTA) method for DNNs, outperforming the confidence-based methods by an average of 10%, but with a time cost increase of more than 100 times. Moreover, PRIMA is a supervised prioritization method, i.e., its effectiveness is affected by training dataset size and category balance.
Third, most methods suffer from the generalizability issue, including the generalizability of tasks, data forms, model structures, and application scenarios. For instance, confidence-based methods rely on DNNs' output probabilities, and thus may not be directly generalized to a regression task. Meanwhile, their prioritization effectiveness for sequential data form (e.g., text data [5]) has been demonstrated to drop by an average of 30% in the existing study [24]. Mutation-based methods could be generalized to various data and tasks, but specialized domain knowledge is required to design diverse data-specific (e.g., structured data [39] and graph data [40]) mutation strategies. Moreover, it is unclear whether their model mutation strategies can still perform well on other model structures, such as graph convolution network (GCN) [40]. Except for confidence-based methods, the other three types are all designed for white-box testing and need to acquire DNNs' details. In black-box scenarios, these white-box methods will seriously degrade performance or even fail to work.
To overcome these challenges, our design goals are as follows: (1) we intend to take formal guarantee into account when designing a certifiable prioritization method; (2) we want the certifiability to serve prioritization effectiveness without degrading efficiency; (3) we plan to evaluate its generalizability.
One of the main contributions of DNNs is automatic feature extraction [1], which maps test inputs from the data space to the feature space. Based on this mapping ability, Zheng _et al_. [41] improved the robustness of DNNs by pushing the test input to a target position (i.e., class center) based on inverse perturbation. Further, we find that the inverse perturbation measures the movement cost of test inputs in feature space. Thus, we explore the variation in the movement cost of different test inputs and give an example as shown in Figure 1. We compute the movement cost (i.e., inverse perturbation based on the infinity norm) of 5,000 test inputs from ImageNet [42] on a pre-trained VGG model [2]. We first divide test inputs into 10 groups according to the original prediction probability, and then calculate the movement cost of reaching a position with a higher probability. As shown in Figure 1, we find a significant difference (p-value=2.38E-07 based on T-test) in the movement cost of correctly and incorrectly predicted test inputs, which can be exploited for prioritization. However, the inverse perturbation [41] is obtained through iterative training, which is still empirical. To satisfy the certifiability requirement, we further derive a formal guarantee of the inverse perturbation with the Lipschitz continuity assumption [43].
According to the utility analysis and certifiability consideration, we design a certifiable prioritization technique, _CertPri_, which reduces the problem of measuring misbehavior probability to the problem of measuring the movement difficulty in feature space, i.e., the movement cost of the test inputs being close to or far from the class centers. Then, we compute the certifiable inverse perturbation based on the generalized extreme value theory (GEVT) [44]. Based on the formal robustness guarantee, CertPri is valid for identifying malicious bug-revealing inputs, as well as clean bug-revealing inputs, without degrading efficiency.
To compute the inverse perturbation, the model gradient is used [45]. Since DNNs are an end-to-end learning paradigm [46], the gradient can be directly computed when data and models are available in a white-box scenario. In a black-box scenario, the approximate gradient can be computed by gradient estimation [47]. Furthermore, to evaluate CertPri's generalizability, we conduct extensive experiments on various tasks, data forms, data types, model structures, training and prioritization scenarios.
The main contributions are as follows.
* Through inverse perturbation analysis and measurement, we first implement a formal robustness guarantee for the movement cost, which provides a new perspective for measuring DNNs' misbehavior probability.
* Based on the formal guarantee of movement costs, we propose an effective and efficient technique, CertPri, which leverages certifiability to facilitate the prioritization effectiveness.
* To evaluate CertPri's generalizability, we conduct extensive experiments on various tasks, data, models and scenarios, the results of which show the superiority of CertPri compared with previous works.
* We publish CertPri as a self-contained open-source toolkit online for facilitating DNNs' prioritization research.
## II Background
In this section, we first introduce the basic knowledge of DNNs, and then give the definitions of inverse perturbation.
### _Deep Neural Networks_
DNN consists of several layers, each of which contains a large number of neurons [1]. Generally, the basic tasks of DNNs include classification and regression. These tasks are accomplished by building models with various basic structures, such as fully connected network (FCN) [16], convolutional neural network (CNN) [2], long short-term memory (LSTM) [48], and GCN [40]. The classification and regression models are as follows.
The classification model predicts which class the test input belongs to. Suppose we have a \(K\)-class classifier \(f^{C}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}\). Given a test input \(\mathbf{x}_{0}\in\mathbb{R}^{d}\), the classifier will output a
vector of \(K\) values normalized by softmax function [49], e.g., \(\{f_{i}^{C}(\mathbf{x}_{0})|_{1\leq i\leq K}\}\), each of which represents the probability that \(\mathbf{x}_{0}\) belongs to the \(i\)-th class, where \(\{d,K\}\in\mathbb{Z}^{+}\) and \(K\geq\)2. \(c(\mathbf{x}_{0})=\arg\max_{1\leq i\leq K}f_{i}^{C}(\mathbf{x}_{0})\) represents the predicted label of \(\mathbf{x}_{0}\) and \(f_{i}^{C}(\mathbf{x}_{0})\in(0,1)\).
The regression model describes a mapping between the test input and the output. Suppose we have a regression model \(f^{R}:\mathbb{R}^{d_{1}}\rightarrow\mathbb{R}^{d_{2}}\). Given a test input \(\mathbf{x}_{0}\in\mathbb{R}^{d_{1}}\), the regression model will output a vector with \(d_{2}\) elements activated by linear or ReLU functions [50], e.g., \(\{f_{i}^{R}(\mathbf{x}_{0})|_{1\leq i\leq d_{2}}\}\), each of which represents a fit to the ground-truth, where \(\{d_{1},d_{2}\}\in\mathbb{Z}^{+}\). \(r(\mathbf{x}_{0})=f^{R}(\mathbf{x}_{0})\) represents the fitted prediction output of \(\mathbf{x}_{0}\) and \(f_{i}^{R}(\mathbf{x}_{0})\in[\min_{r},\max_{r}]\).
### _Definitions of Inverse Perturbation_
We give definitions of inversely perturbed test input, minimum inverse perturbation, and lower bound.
**Definition 1** (inversely perturbed test input).: Given \(\mathbf{x}_{0}\), we say \(\mathbf{x}_{0}^{*}\) is an inversely perturbed test input of \(\mathbf{x}_{0}\) with inverse perturbation \(\mathbf{\mu}\) and \(l_{p}\)-norm \(\Delta_{p}\) if \(\mathbf{x}_{0}^{*}=\mathbf{x}_{0}+\mathbf{\mu}\) is moved to the target position and \(\Delta_{p}=||\mathbf{\mu}||_{p}\). An inversely perturbed test input for a classifier is \(\mathbf{x}_{0}^{*}\in\mathbb{R}^{d}\) that moves towards the class center of \(c(\mathbf{x}_{0})\), where the class center is defined as \(f_{center}^{C}(\mathbf{x}_{0})=\min\big{\{}f_{c}^{C}(\mathbf{x}_{0})\times[1+\log(1+f_ {c}^{C}(\mathbf{x}_{0}))],1\big{\}}\). For a regression model, \(\mathbf{x}_{0}^{*}\in\mathbb{R}^{d_{1}}\) moves from \(r(\mathbf{x}_{0})\) to regression center, which is defined as \(f_{i}^{R,\pm}(\mathbf{x}_{0})=\operatorname{clip}_{\min_{r}}^{\max_{r}}\big{(}f_{ i}^{R}(\mathbf{x}_{0})+|f_{i}^{R}(\mathbf{x}_{0})|\times\log\big{[}1+\tanh(f_{i}^{R }(\mathbf{x}_{0}))\big{]}\big{)}\).
**Definition 2** (minimum inverse perturbation \(\Delta_{p}^{\min}\) and its lower bound \(\gamma_{L}\)).: Given a test input \(\mathbf{x}_{0}\), the minimum \(l_{p}\) inverse perturbation of \(\mathbf{x}_{0}\), denoted as \(\Delta_{p}^{\min}\), is defined as the smallest \(\Delta_{p}\) over all inversely perturbed test inputs of \(\mathbf{x}_{0}\). Suppose \(\Delta_{p}^{\min}\) is the minimum inverse perturbation of \(\mathbf{x}_{0}\). A lower bound of \(\Delta_{p}^{\min}\), denoted by \(\gamma_{L}\) where \(\gamma_{L}\leq\Delta_{p}^{\min}\), is defined such that any inversely perturbed test inputs of \(\mathbf{x}_{0}\) with \(||\mathbf{\mu}||_{p}\leq\gamma_{L}\) will never reach the target position.
The lower bound of inverse perturbation measures the minimum movement cost for a test input to reach the target position (i.e., class center or regression center). \(\gamma_{L}\) guarantees that the inversely perturbed test input will never move to the target position for inverse perturbation with \(||\mathbf{\mu}||_{p}\leq\gamma_{L}\), certifying the movement cost of the test input. All the notations are summarized in Table I.
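For concreteness, the class-center and regression-center targets of Definition 1 transcribe directly into code; this NumPy sketch uses our own function names:

```python
import numpy as np

def class_center(p_c: float) -> float:
    """Target probability (class center) for the predicted class c, per
    Definition 1: min{p_c * (1 + log(1 + p_c)), 1}."""
    return min(p_c * (1.0 + np.log1p(p_c)), 1.0)

def regression_center(f: np.ndarray, min_r: float, max_r: float) -> np.ndarray:
    """Target outputs (regression center) per Definition 1:
    clip(f + |f| * log(1 + tanh(f)), min_r, max_r)."""
    shifted = f + np.abs(f) * np.log1p(np.tanh(f))
    return np.clip(shifted, min_r, max_r)
```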
## III CertPri Methodology
In this section, we present a technical description of CertPri. First, we discuss the prioritization feasibility based on a movement cost view. Then, we introduce CertPri, which provides a formal robustness guarantee based on the Lipschitz continuity assumption [43] and estimates the movement cost based on GEVT [45]. Finally, we prioritize inputs via movement costs.
### _A Movement View in Feature Space_
A well-trained DNN implements feature extraction through multiple hidden layers, each of which filters redundant features and amplifies key features during forward propagation [51]. If we regard the feature mapping of hidden layers as data movement in feature space, almost all test inputs are pushed towards the target position in forward propagation, while bug-revealing inputs fail to reach the target position at the end of forward propagation.
First, we investigate the movement process of test inputs in forward propagation. Taking a classifier based on an FCN with three hidden layers on the MNIST [52] dataset as an example, the t-SNE [53] based distribution of test inputs in feature space is visualized in Figure 2. During the forward propagation shown in Figure 2 (a), we observe that most test inputs directionally approach the correct class, which realizes the right prediction. However, several test inputs move without direction, leading to the DNN's misbehavior (i.e., misclassification); these are the bug-revealing inputs that need to be identified. This is intuitively explained by the feature-purity theory of test inputs [23]. A bug-revealing test input not only contains features of multiple classes, but its highest feature purity is close to the second-highest one; the feature purities of all classes may even be close to each other. Thus, the movement of such a test input in the forward propagation is directionless.
Then, we further analyze the movement cost of correctly and incorrectly predicted test inputs based on the inverse perturbation [41]. As shown in Figure 2 (b), the ground truth of \(\mathbf{x}_{0}\) is **II** but is misclassified as **III**. As \(\mathbf{x}_{0}\) proceeds along the gradient direction, its class center of **III** can be reached with a low movement cost. However, correctly predicted test inputs require a high movement cost to reach their class centers. This is consistent with the interpretation based on feature purity [23]. The incorrectly predicted test inputs improve the feature purity of the corresponding class after adding the inverse perturbation, which turns random movement into directional movement in forward propagation. Therefore, only a low movement cost is required to reach the class center. The correctly predicted test inputs contain high feature purity, which keeps them stable in feature space. Thus, further movement requires a high cost. Note that the class center is given by **Definition 1** and differs for each test input [41].

Fig. 1: An example of movement cost for test inputs with different probability levels, where the two marker types denote correctly and incorrectly predicted test inputs, respectively. Subfigure (a) shows the movement cost required for test inputs whose top-1 prediction probability lies in (0, 0.1) to reach a position with a probability higher than 0.1, where the \(x\)-axis is the original top-1 prediction probability and the \(y\)-axis is the movement cost. We repeat 5 times and report the average movement cost of each test input.
Based on the above analysis, we reduce the problem of measuring misbehavior probability of prioritization to the problem of measuring the movement cost in feature space. Thus, we prioritize test inputs by comparing their lower bounds of minimum movement cost to the target position. The test input never reaches the target position when the movement cost is less than the lower bound \(\gamma_{L}\). However, \(\gamma_{L}\) is not easy to find. Below we show how to derive a formal inverse perturbation guarantee of a test input with the Lipschitz continuity assumption [43].
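For intuition, the empirical inverse perturbation of [41] can be sketched as a short gradient ascent toward the class center; this sketch is illustrative only (the step size, the \(l_{2}\)-normalized update, and the stopping rule are our assumptions, and `model` is assumed to output class probabilities), whereas the rest of this section derives the certified bound instead:

```python
import torch

def inverse_perturb(model, x0, c, center, eta=0.01, steps=100):
    """Iteratively push x0 along the gradient of f_c until the class-center
    probability `center` is reached; the l_p norm of the returned
    perturbation is the empirical movement cost."""
    x = x0.clone().requires_grad_(True)
    for _ in range(steps):
        p = model(x.unsqueeze(0))[0, c]
        if p.item() >= center:
            break
        p.backward()
        with torch.no_grad():
            x += eta * x.grad / (x.grad.norm() + 1e-12)
        x.grad = None
    return (x - x0).detach()
```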
### _Formal Guarantees for Movement Cost_
We first give a lemma about Lipschitz continuity. Based on the lemma, we then provide a formal guarantee for the lower bound of the inverse perturbation. Specifically, our analysis obtains a lower bound of the \(l_{p}\)-norm minimum inverse perturbation, \(\gamma_{L}=\min\frac{f^{C}_{center}(\mathbf{x}_{0})-f^{C}_{c}(\mathbf{x}_{0})}{L^{\epsilon}_{q}}\) for a classifier and \(\gamma_{L}\!=\!\min\frac{\sum_{i}|f^{R,\pm}_{i}(\mathbf{x}_{0})-f^{R}_{i}(\mathbf{x}_{0})|}{d_{2}\times L^{\epsilon}_{q}}\) for a regression model.
**Lemma 1** (Norms and corresponding Lipschitz constants [43]).: _Let \(D\subset\mathbb{R}^{d}\) be a convex bounded closed set and let \(h(\mathbf{x}):D\rightarrow\mathbb{R}\) be a continuously differentiable function on an open set containing \(D\). For a Lipschitz function \(h(\mathbf{x})\) with Lipschitz constant \(L_{q}\), the inequality \(|h(\mathbf{a})-h(\mathbf{b})|\leq L_{q}||\mathbf{a}-\mathbf{b}||_{p}\) holds for any \(\{\mathbf{a},\mathbf{b}\}\in D\), where \(L_{q}=\max\{||\nabla_{\mathbf{x}}h(\mathbf{x})||_{q}:\mathbf{x}\in D\}\), \(\nabla_{\mathbf{x}}h(\mathbf{x})=(\frac{\partial h}{\partial\mathbf{x}_{1}},...,\frac{\partial h}{\partial\mathbf{x}_{d}})\) is the gradient of \(h(\mathbf{x})\), \(\frac{1}{p}+\frac{1}{q}=1\) and \(1\leq\{p,q\}\leq\infty\)._
**Theorem 1** (Formal guarantee on lower bound \(\gamma_{L}\) of inverse perturbation for classification model).: _Let \(\mathbf{x}_{0}\in\mathbb{R}^{d}\) and \(f^{C}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}\) be a \(K\)-class classifier with continuously differentiable components. For all \(\mathbf{\mu}\in\mathbb{R}^{d}\) with \(||\mathbf{\mu}||_{p}\leq\min\frac{f^{C}_{center}(\mathbf{x}_{0})-f^{C}_{c}(\mathbf{x}_{0})}{L^{\epsilon}_{q}}\), \(f^{C}_{c}(\mathbf{x}_{0}+\mathbf{\mu})\neq f^{C}_{center}(\mathbf{x}_{0})\) holds, with \(\frac{1}{p}+\frac{1}{q}=1\), \(1\leq\{p,q\}\leq\infty\), where \(L^{\epsilon}_{q}\) is the Lipschitz constant for the function \(f^{C}_{center}(\mathbf{x})-f^{C}_{c}(\mathbf{x})\) in \(l_{p}\)-norm. In other words, \(\gamma_{L}=\min\frac{f^{C}_{center}(\mathbf{x}_{0})-f^{C}_{c}(\mathbf{x}_{0})}{L^{\epsilon}_{q}}\) is a lower bound of the minimum inverse perturbation for moving to the class center._
_Proof_. According to Lemma 1, the assumption that the function \(h(\mathbf{x}):=f^{C}_{center}(\mathbf{x})-f^{C}_{c}(\mathbf{x})\) is Lipschitz continuous with Lipschitz constant \(L^{\epsilon}_{q}\) gives:
\[|h(\mathbf{a})-h(\mathbf{b})|\leq L^{\epsilon}_{q}||\mathbf{a}-\mathbf{b}||_{p}. \tag{1}\]
Let \(\mathbf{a}=\mathbf{x}_{0}+\mathbf{\mu}\) and \(\mathbf{b}=\mathbf{x}_{0}\), we get:
\[|h(\mathbf{x}_{0}+\mathbf{\mu})-h(\mathbf{x}_{0})|\leq L^{\epsilon}_{q}||\mathbf{\mu}||_{p}, \tag{2}\]
which can be rearranged by removing the absolute symbol:
\[\begin{array}{l}-L^{\epsilon}_{q}||\mathbf{\mu}||_{p}\leq h(\mathbf{x}_{0}+\mathbf{\mu}) -h(\mathbf{x}_{0})\leq L^{\epsilon}_{q}||\mathbf{\mu}||_{p},\\ \Rightarrow h(\mathbf{x}_{0})-L^{\epsilon}_{q}||\mathbf{\mu}||_{p}\leq h(\mathbf{x}_{0}+\mathbf{ \mu})\leq h(\mathbf{x}_{0})+L^{\epsilon}_{q}||\mathbf{\mu}||_{p}.\end{array} \tag{3}\]
When \(h(\mathbf{x}_{0}+\mathbf{\mu})\!=\!0\), the inversely perturbed test input is moved to the class center. As represented by Equation (3), \(h(\mathbf{x}_{0})-L^{\epsilon}_{q}||\mathbf{\mu}||_{p}\) is the lower bound of \(h(\mathbf{x}_{0}+\mathbf{\mu})\). If \(h(\mathbf{x}_{0})-L^{\epsilon}_{q}||\mathbf{\mu}||_{p}\geq 0\) for sufficiently small inverse perturbation \(||\mathbf{\mu}||_{p}\), the inversely perturbed input cannot reach the class center, i.e.,
\[\begin{array}{l}h(\mathbf{x}_{0})-L^{\epsilon}_{q}||\mathbf{\mu}||_{p}\geq 0\ \Rightarrow||\mathbf{\mu}||_{p}\leq\frac{h(\mathbf{x}_{0})}{L^{\epsilon}_{q}}\\ \Rightarrow||\mathbf{\mu}||_{p}\leq\frac{f^{C}_{center}(\mathbf{x}_{0})-f^{C}_{c}( \mathbf{x}_{0})}{L^{\epsilon}_{q}}.\end{array} \tag{4}\]
To preclude \(f^{C}_{c}(\mathbf{x}_{0}+\mathbf{\mu})=f^{C}_{center}(\mathbf{x}_{0})\), we take the minimum of the bound on \(||\mathbf{\mu}||_{p}\), i.e., the test input will never move to the class center when \(||\mathbf{\mu}||_{p}\leq\min\frac{f^{C}_{center}(\mathbf{x}_{0})-f^{C}_{c}(\mathbf{x}_{0})}{L^{\epsilon}_{q}}\).
**Theorem 2** (Formal guarantee on lower bound \(\gamma_{L}\) of inverse perturbation for regression model).: _Let \(\mathbf{x}_{0}\in\mathbb{R}^{d_{1}}\) and \(f^{R}:\mathbb{R}^{d_{1}}\rightarrow\mathbb{R}^{d_{2}}\) be a regression model with continuously differentiable components. For all \(\mathbf{\mu}\in\mathbb{R}^{d_{1}}\) with \(||\mathbf{\mu}||_{p}\leq\min\frac{\sum_{i}|f^{R,\pm}_{i}(\mathbf{x}_{0})-f^{R}_{i}(\mathbf{x}_{0})|}{d_{2}\times L^{\epsilon}_{q}}\), \(\frac{1}{d_{2}}\sum|r(\mathbf{x}_{0}+\mathbf{\mu})-r(\mathbf{x}_{0})|\leq\delta\) holds, with \(\frac{1}{p}+\frac{1}{q}=1\), \(1\leq\{p,q\}\leq\infty\), where \(L^{\epsilon}_{q}\) is the Lipschitz constant for the function \(\frac{1}{d_{2}}\sum_{i}|f^{R,\pm}_{i}(\mathbf{x})-f^{R}_{i}(\mathbf{x})|\) in \(l_{p}\)-norm. In other words, \(\gamma_{L}=\min\frac{\sum_{i}|f^{R,\pm}_{i}(\mathbf{x}_{0})-f^{R}_{i}(\mathbf{x}_{0})|}{d_{2}\times L^{\epsilon}_{q}}\) is a lower bound of the minimum inverse perturbation. The complete proof is deferred to [https://anonymous.4open.science/r/CertPri/SupplementaryMaterials.pdf](https://anonymous.4open.science/r/CertPri/SupplementaryMaterials.pdf).
Fig. 2: An example of t-SNE [53] based distribution visualization of test inputs in feature space. The classifier is a FCN with three hidden layers, the test inputs are handwritten digits of ”0”(I), “1”(II) and “2”(III) from MNIST dataset [52].
### _Movement Cost Estimation via GEVT_
The formal guarantees show that \(\gamma_{L}\) is related to \(h(\mathbf{x}_{0})\) and its Lipschitz constant \(L_{q}\), where \(L_{q}=\max\{||\nabla_{\mathbf{x}}h(\mathbf{x})||_{q}:\mathbf{x}\in B_{p}(\mathbf{x}_{0},\mathcal{R})\}\) and \(B_{p}(\mathbf{x}_{0},\mathcal{R}):=\{\mathbf{x}\mid||\mathbf{x}-\mathbf{x}_{0}||_{p}\leq\mathcal{R}\}\) is a hyperball with center \(\mathbf{x}_{0}\) and radius \(\mathcal{R}\). The value of \(h(\mathbf{x}_{0})\) is easily accessible at the model output. Thus, we show how to obtain \(\max||\nabla_{\mathbf{x}}h(\mathbf{x})||_{q}\).
For a random variable sequence \(\{\mathbf{x}_{0}^{(j)}\}\) sampled from \(B_{p}(\mathbf{x}_{0},\mathcal{R})\), its corresponding gradient norm can be regarded as a new random variable sequence \(\{||\nabla_{\mathbf{x}}h(\mathbf{x}_{0}^{(j)})||_{q}\}\) characterized by a cumulative distribution function (CDF) [45]. Therefore, we estimate \(\max||\nabla_{\mathbf{x}}h(\mathbf{x}_{0}^{(j)})||_{q}\) with a small number of samples based on GEVT, which ensures that the maximum value of a random variable sequence can only follow one of the three extreme value distributions [44]. The CDF of \(||\nabla_{\mathbf{x}}h(\mathbf{x}_{0}^{(j)})||_{q}\) is as follows:
\[G_{\xi}(\mathbf{z})=\exp\left(-(1+\xi\mathbf{z})^{-\frac{1}{\xi}}\right), \tag{5}\]
where \(1+\xi\mathbf{z}>0\), \(\mathbf{z}=\frac{||\nabla_{\mathbf{x}}h(\mathbf{x}_{0}^{(j)})||_{q}-u}{\sigma}\), \(\xi\in\mathbb{R}\) is a extreme value index, \(u\in\mathbb{R}\) and \(\sigma\in\mathbb{R}^{+}\) are the expectation and variance. The parameters \(u\), \(\sigma\) and \(\xi\) determine the location, scale and shape of \(G_{\xi}(\mathbf{z})\), respectively. \(G_{\xi}(\mathbf{z})\) belongs to either the Gumbel (\(\xi=0\)), Frechet (\(\xi>0\)) or Weibull (\(\xi<0\)) distributions.
We plot the curves of the probability density function (PDF) and its CDF for different extreme value distributions in Figure 3, where the PDF (denoted as \(g_{\xi}(\mathbf{z})\)) is the derivative of the CDF. The Weibull distribution exhibits an interesting property in Figure 3, i.e., a finite right end-point (denoted as \(-\frac{1}{\xi}\)), which limits the upper bound of the distribution. Therefore, we adopt the Weibull distribution to describe the gradient norm distribution, where the right end-point is the estimate of \(\max||\nabla_{\mathbf{x}}h(\mathbf{x})||_{q}\).
### _Prioritization through Movement Cost_
Based on the movement cost perspective in Section III-A, we can conclude that a test input with a small value of \(\gamma_{L}\) should be prioritized at the front. Therefore, we compute the \(\gamma_{L}\) value of each test input, and then prioritize these inputs according to their \(\gamma_{L}\) values from small to large. **Algorithm 1** shows the details of CertPri by taking the classification model as an example. We first establish the gradient norm distribution of a test input by randomly sampling in the hyper-ball (the loop from lines 4 to 9). Then, we estimate the location of the maximum gradient norm based on the Weibull distribution and compute the lower bound of the movement cost based on **Theorem 1** (lines 10 and 11). Finally, we repeat the above operations for each test input (the loop from lines 2 to 12) and prioritize them according to the ascending result of their \(\gamma_{L}\) values (line 13). Note that for algorithmic illustration, we only compute one \(g_{i,j}^{\max}\) (line 8) for each iteration. To implement the best efficiency of GPU, we usually evaluate these values in batches, and thus a batch of \(g_{i,j}^{\max}\) can be returned.
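To make the pipeline concrete, the following is a minimal PyTorch/SciPy sketch of Algorithm 1 for a single test input in the classification case. The defaults follow the settings reported in Section IV-D; `model` is assumed to output class probabilities, and sampling on the sphere of radius \(\mathcal{R}\) (rather than inside the hyperball) is a simplification. Since \(h(\mathbf{x})=f^{C}_{center}(\mathbf{x}_{0})-f^{C}_{c}(\mathbf{x})\), the gradient norm of \(h\) equals that of \(f^{C}_{c}\):

```python
import numpy as np
import torch
from scipy.stats import weibull_max

def gamma_lower_bound(model, x0, c, R=0.04, Nb=6, Nrsb=10):
    """Sketch of Algorithm 1 for one input x0 and its predicted class c:
    collect the max gradient norm over Nb batches of Nrsb samples in
    B_2(x0, R), fit a reverse Weibull to the batch maxima, and return
    gamma_L = (f_center - f_c) / (estimated max gradient norm)."""
    p_c = model(x0.unsqueeze(0))[0, c].item()
    center = min(p_c * (1.0 + np.log1p(p_c)), 1.0)           # Definition 1
    maxima = []
    for _ in range(Nb):
        noise = torch.randn(Nrsb, *x0.shape)
        flat = noise.flatten(1)
        noise = R * (flat / flat.norm(dim=1, keepdim=True)).view_as(noise)
        xs = (x0.unsqueeze(0) + noise).requires_grad_(True)  # samples on the sphere
        model(xs)[:, c].sum().backward()
        grad_norms = xs.grad.flatten(1).norm(dim=1)          # ||grad h||_2 = ||grad f_c||_2
        maxima.append(grad_norms.max().item())
    _, loc, _ = weibull_max.fit(maxima)                      # right end-point of reverse Weibull
    return (center - p_c) / max(loc, 1e-12)
```

Sorting the returned values in ascending order then yields the prioritized list (line 13 of Algorithm 1).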
Furthermore, we can easily generalize CertPri to a regression model by replacing the \(h(\mathbf{x})\) function at line 1 with \(h(\mathbf{x}):=\frac{\sum_{i}|f^{R,\pm}_{i}(\mathbf{x})-f^{R}_{i}(\mathbf{x})|}{d_{2}}\), based on **Theorem 2**. Besides, we can extend CertPri to a black-box scenario based on gradient estimation [47].
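In the black-box case, the gradient in the sketch above can be replaced by a zeroth-order estimate; the following NES-style symmetric-difference estimator is a generic stand-in for the estimator of [47] (not a reproduction of it), with `f` a scalar-valued query function:

```python
import torch

def estimate_gradient(f, x, sigma=1e-3, n_samples=50):
    """Zeroth-order gradient estimate of a scalar black-box function f at x,
    averaging symmetric finite differences along random Gaussian directions."""
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        grad += (f(x + sigma * u) - f(x - sigma * u)) / (2.0 * sigma) * u
    return grad / n_samples
```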
## IV Experimental Setting
In this section, we introduce the experimental setup, including the subjects we considered, the baselines we compared with, the measurements we used, and the implementation details we conducted.
### _Subjects_
We adopt 50 pairs of datasets and models as subjects, as shown in Table II. To sufficiently evaluate CertPri's effectiveness, efficiency, robustness, generalizability and guidance, we carefully consider the diversity of subjects from six dimensions. To our best knowledge, this is the most large-scale and diverse study in the field.
(1) **Various tasks of deep models.** We employ both classification models (ID: 1-32, 37-45, 47-49) and regression models (ID: 33-36, 46, 50). The number of classes ranges from 2 to 1,000 across all the classification tasks.
(2) **Various data forms of test inputs**. We consider six data forms of test inputs from 14 datasets, including image, text, speech, signal, graph, and structured data. Specifically, we collect _6 image datasets_, i.e., CIFAR10 [55] (a 10-class ubiquitous object dataset with 32\(\times\)32 pixels), ImageNet [42] (a 1,000-class object recognition benchmark with 224\(\times\)224 pixels), DrivingSA [56] (a steer angle regression dataset for autonomous driving from Udacity with 128\(\times\)128 pixels), Fashion-MNIST (FMNIST) [57] (a 10-class dataset of
Fig. 3: Curves of PDF and CDF for different distributions, where \(\xi=0\) for Gumbel, \(\xi=0.6\) for Fréchet and \(\xi=-0.6\) for Weibull, \(u=0\), \(\sigma=1\).
Zalando's article images with 28\(\times\)28 pixels), Ants_Bees (a binary insect dataset containing ants and bees for transfer learning with 224\(\times\)224 pixels), and Cats_Dogs (a binary pet dataset containing cats and dogs for transfer learning with 224\(\times\)224 pixels), _2 text datasets_, i.e., IMDB [58] (a binary movie review sentiment dataset encoded as a list of word indexes) and Reuters (a 46-class newswire dataset), _one speech dataset_, i.e., VCTK10 (a 10-class dataset of ten English speakers with various accents selected from VCTK Corpus with 601\(\times\)64 phonemes), _one signal dataset_, i.e., RML8PSK [59] (a signal regression dataset modulated by 8PSK from RadioML 2016.10a with 127\(\times\)2 points as input and 2 points as output), _one graph dataset_, i.e., Cora [60] (a 7-class citation network of scientific publications with 5,429 links), and _3 structured datasets_, i.e., Adult [61] (a binary income dataset with 13-dimensional attributes), COMPAS (a binary crime prediction dataset with 400-dimensional attributes), and Boston (a housing price regression dataset with 13-dimensional input and 1-dimensional output).
(3) **Various data types of test inputs**. We mainly consider three data types, including original test inputs, adversarial test inputs, and adaptive attacked test inputs. For _adversarial test inputs_, we perform three widely-used adversarial attacks (i.e., basic iterative method (BIM) [62], Carlini & Wagner (C&W) [63] and FineFool [64]) to generate the same number of adversarial examples as the corresponding original test inputs for CIFAR10 and ImageNet, respectively. Then, following previous works [23, 24], we construct an adversarial test input set for each of the two datasets under each adversarial attack through randomly selecting half of the original test inputs and half of the adversarial examples, represented as "+BIM", "+C&W", "+FineFool". For _adaptive attacked test inputs_, we follow the strategy introduced in Section I to produce them. Regarding the surprise-based method [28], we consider two objectives, flipping the labels of test inputs, and reducing their surprise (i.e., the activation difference between test inputs and the training inputs), which can be conducted by MAG-GAN [32]. Regarding confidence-based methods [23, 25], we first flip the labels of test inputs based on BIM [62], and then increase their highest probability value (up to 0.99 for CIFAR10 and 0.90 for ImageNet) by continuously adding perturbations. Regarding the mutation-based method [24], we first break its ranking model by adding noise to the extracted features, and then add perturbations to the test inputs based on back propagation [54] to generate noisy features. Finally, we construct an adaptive attacked test input set for each of the two datasets under each adaptive attack through randomly selecting half of the original test inputs and half of the adaptive attacked examples, represented as "+AdapS", "+AdapC", "+AdapM". When crafting adversarial or adaptive attacked test inputs, we assume that the adversary knows the prioritization details.
Additionally, regarding VGG19-AD model, we implement three test input types respectively, i.e., the original one, the patched test inputs (randomly blocking 10% of pixels for each test input), and the saturation-modified test inputs (modifying the intensities of saturation channel [65] for each test input).
(4) **Various structures of deep models**. We employ CNN (ID: 1-41, 43), LSTM (ID: 42, 44-46), GCN (ID: 47) and FCN (ID: 48-50). The number of layers ranges from 3 to 101 across all models.
(5) **Various training scenarios.** We set up three training scenarios, including normal training, poisoning, and transfer learning. For the _poisoning scenario_, we first craft a poisoned training set on the FMNIST dataset, denoted as "FMNIST_P", where the poisoning method is DeepPoison [66], and the source and poisoned labels are "Pullover" and "Coat", respectively. For the _transfer learning scenario_, we use ImageNet (1,000 classes) as the source domain and the 2-class insect or pet images as the target domain.
(6) **Various prioritization scenarios**. We set up two prioritization scenarios, including white-box and black-box. All details of the model and test inputs are available and used for prioritization in the _white-box scenario_ (ID: 1, 3-9, 11-17, 19-25, 27-33, 35-50), while only model outputs and test inputs are available in the _black-box scenario_ (ID: 2, 10, 18, 26, 34).
### _Baselines_
We consider five baselines, i.e., likelihood-based surprise adequacy (LSA) [28], distance-based surprise adequacy (DSA) [28], multiple-boundary clustering and prioritization (MCP) [25], DeepGini [23] and PRIMA [24]. LSA and DSA are surprise-based methods. MCP and DeepGini are efficient confidence-based methods for lightweight prioritization. PRIMA is an effective mutation-based method with the SOTA performance. The application scope of each method is shown in Table III, where "\(\mathcal{X}\)" means PRIMA can only perform input mutation in the black-box scenario. Note that coverage-based methods have been shown to be significantly less effective [23] and thus are omitted. All baselines are configured according to the best performance setting reported in the respective papers.
### _Measurements_
We investigate CertPri's prioritization performance from five aspects, including prioritization _effectiveness_, _efficiency_, _robustness_, _generalizability_ and _guidance_.
(1) We evaluate the effectiveness of CertPri through the ratio of the area under curve (RAUC), which transforms the prioritization result to a curve [24], defined as follows for classification tasks:
\[\mathrm{RAUC}=\frac{\sum_{i=1}^{N}n_{i}}{N\times N^{\prime}+\frac{N^{\prime}-N^{\prime 2}}{2}},\ \text{where}\ n_{i}=\begin{cases}n_{i-1}+1,&c(\mathbf{x}_{i})\ \text{is correct}\\ n_{i-1},&\text{otherwise}\end{cases}, \tag{6}\]
where \(N\) and \(N^{\prime}\) are the numbers of prioritized test inputs and bug-revealing inputs, respectively, \(i\geq 1\) and \(n_{0}=0\). In other words, the numerator and denominator represent the area under the curve of the prioritization method and of the ideal prioritization, respectively.
For regression tasks, RAUC is calculated based on the mean-square error (MSE) of prediction, as follows:
\[\mathrm{RAUC}=\frac{\sum_{i=1}^{N}m_{i}}{\sum_{j=1}^{N}M_{j}},\ \text{where}\ \begin{cases}m_{i}=m_{i-1}+\mathrm{MSE}\left(f^{R}(\mathbf{x}_{i})\right)\\ M_{j}=M_{j-1}+\mathrm{MSE}\left(f^{R}(\mathbf{x}_{j})\right)\end{cases}, \tag{7}\]
where \(m_{i}\) and \(M_{j}\) represent the accumulated MSE between the predicted results and the ground truth for the prioritization method and the ideal prioritization, respectively, with \(i,j\geq 1\) and \(m_{0}=M_{0}=0\). Moreover, we follow the setup of Wang _et al._ [24], using RAUC-100, RAUC-200, RAUC-300, RAUC-500 and RAUC-all as fine-grained metrics, i.e., \(N\)=100, 200, 300, 500 and all. A larger RAUC is better.
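For concreteness, the classification-task RAUC of Eq. (6) can be computed from a prioritized sequence as in the minimal sketch below (illustrative code, not the authors' implementation; we treat the per-input check in Eq. (6) as a boolean flag marking bug-revealing inputs, and `n_cut` gives the fine-grained RAUC-\(N\) variants):

```python
import numpy as np

def rauc(is_bug, n_cut=None):
    """Illustrative RAUC (Eq. 6): area under the cumulative bug-detection
    curve of a prioritization, normalised by the area of the ideal curve.
    is_bug: boolean array in prioritized order, True for bug-revealing inputs.
    n_cut:  optional cutoff for the fine-grained RAUC-N variants."""
    is_bug = np.asarray(is_bug, dtype=bool)
    if n_cut is not None:
        is_bug = is_bug[:n_cut]
    n = len(is_bug)                       # N: number of prioritized inputs
    n_prime = int(is_bug.sum())           # N': number of bug-revealing inputs
    curve = np.cumsum(is_bug)             # n_i: bugs found within the first i inputs
    ideal = n * n_prime + (n_prime - n_prime ** 2) / 2.0  # ideal area (denominator)
    return float(curve.sum() / ideal) if ideal > 0 else 0.0

# e.g. a prioritization that surfaces 3 of 4 bugs first yields RAUC ~0.83:
print(rauc([True, True, False, True, False, True]))
```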
(2) We evaluate the prioritization efficiency of CertPri through prioritization speed, i.e., the time cost of prioritizing 1,000 test inputs (#seconds/1,000 inputs). Smaller is better.
(3) We evaluate the prioritization robustness of CertPri through RAUC stability, denoted as RobR and defined as follows; a larger RobR is better:
\[\mathrm{RobR}=\frac{\mathrm{RAUC}\!-\!\mathrm{all\ of\ adaptive\ attacked\ test\ inputs}}{\mathrm{RAUC}\!-\!\mathrm{all\ of\ original\ test\ inputs}}\times 100\%. \tag{8}\]
(4) We evaluate the prioritization generalizability of CertPri based on reward value, denoted as GenRew, as follows:
\[\mathrm{GenRew}=\frac{1}{N_{rep}\times N_{sub}}\sum_{i=1}^{N_{rep}}\sum_{j=1}^ {N_{sub}}\frac{n_{pm}-k_{i,j}+1}{n_{pm}}, \tag{9}\]
where \(N_{sub}\) represents the number of selected subjects, and \(N_{rep}\!=\!5\) and \(n_{pm}\!=\!6\) are the numbers of repetitions and prioritization methods in our experiments, respectively. \(k_{i,j}\!\in\!\{1,2,...,n_{pm}\}\) indicates that the method ranks \(k\)-th in descending order of RAUC-all for the \(j\)-th subject in the \(i\)-th experimental repetition. GenRew \(\in[\frac{1}{n_{pm}},1]\). Larger GenRew is better.
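Eq. (9) amounts to averaging normalized ranks, as the short illustrative sketch below shows (the rank matrix is a hypothetical input):

```python
import numpy as np

def gen_rew(ranks, n_pm=6):
    """Illustrative GenRew (Eq. 9). `ranks` is an (N_rep, N_sub) array where
    ranks[i, j] is a method's 1-based RAUC-all rank for subject j in
    repetition i; n_pm is the number of compared prioritization methods."""
    ranks = np.asarray(ranks, dtype=float)
    return float(np.mean((n_pm - ranks + 1) / n_pm))

# a method ranked 1st or 2nd everywhere scores close to 1:
print(gen_rew([[1, 2, 1], [1, 1, 2]]))
```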
(5) We evaluate the prioritization guidance from two aspects: performance and robustness improvements for DNNs. The former subtracts the accuracy of the original model from that of the retrained model, while the latter subtracts the attack success rate [32] of the retrained model from that of the original model.
### _Implementation Details_
To fairly study the performance of the baselines and CertPri, our experiments have the following settings. **(1) Hyperparameter settings** based on the double-minimum strategy: we conduct a preliminary study on a small dataset and find that \(N_{b}\geq 3\), \(N_{rsb}\geq 5\) and \(0.02\max(\mathbf{x})\leq\mathcal{R}\leq 0.05\max(\mathbf{x})\) for Algorithm 1 are effective in general. To guarantee CertPri's effectiveness, we follow the double-minimum strategy, i.e., \(N_{b}\)=6, \(N_{rsb}\)=10, \(\mathcal{R}\)=0.04\(\max(\mathbf{x})\), \(p\)=2 for Algorithm 1.
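Although Algorithm 1 itself is not reproduced here, a hedged sketch of the extreme-value estimation that these hyperparameters control might look as follows (our reading based on the Weibull/GEVT description in this paper, not the authors' code; the scalar score and the uniform-in-ball sampling are illustrative assumptions):

```python
import tensorflow as tf
from scipy.stats import weibull_max

def estimate_max_grad_norm(model, x, n_b=6, n_rsb=10, radius=0.04):
    """Hedged sketch of GEVT-based estimation: sample n_b batches of n_rsb
    points in an L2 ball of the given radius around x, keep the largest
    gradient norm per batch, and fit a reverse Weibull whose location
    parameter estimates the maximum gradient norm. `model` is assumed
    differentiable; its outputs are summed here to obtain a scalar score."""
    x = tf.convert_to_tensor(x[None, ...], dtype=tf.float32)
    dim = float(tf.size(x).numpy())
    batch_maxima = []
    for _ in range(n_b):
        norms = []
        for _ in range(n_rsb):
            d = tf.random.normal(tf.shape(x))
            u = tf.random.uniform([]) ** (1.0 / dim)   # uniform inside the ball
            xp = x + radius * u * d / (tf.norm(d) + 1e-12)
            with tf.GradientTape() as tape:
                tape.watch(xp)
                score = tf.reduce_sum(model(xp))       # illustrative scalar score
            norms.append(float(tf.norm(tape.gradient(score, xp))))
        batch_maxima.append(max(norms))
    _, loc, _ = weibull_max.fit(batch_maxima)  # location parameter bounds the maxima
    return loc
```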
**(2) Model training**: we download and use models pretrained on ImageNet. For other datasets, we train appropriate models as follows: the learning rate ranges from 1E-04 to 1E-02, the optimizer is Adam, training:validation:test = 7:1:2, and an early stop strategy is used to avoid overfitting. **(3) Preprocessing**: we fill in missing data points based on the mean value. There is no specific mutation strategy provided in PRIMA [24] for the speech, signal and graph data forms or the GCN model, so we derive one from the existing mutation operations. **(4) Data recording**: we repeat each experiment 5 times and record about 9,000 raw data points.
We conduct all the experiments on a server with one Intel i7-7700K CPU running at 4.20 GHz, 64 GB DDR4 memory, 4 TB HDD and one TITAN Xp 12 GB GPU card.
## V Experimental Results and Analysis
We evaluate CertPri through answering the following research questions (RQ).
**RQ1. Effectiveness**: How _effective_ is CertPri?
**RQ2. Efficiency**: How _efficient_ is CertPri?
**RQ3. Robustness**: How _robust_ is CertPri?
**RQ4. Generalizability**: How _generic_ is CertPri?
**RQ5. Guidance**: Can CertPri _guide_ the retraining of DNNs?
### _Effectiveness (RQ1)_
How _effective_ is CertPri in prioritizing test inputs?
When reporting the results, we focus on the effectiveness of the following aspects: overall, data forms, data types, training scenarios, and prioritization scenarios. The evaluation results are shown in Tables IV, V, VI, VII, VIII and IX.
**Effectiveness on various data forms**. CertPri outperforms the baselines across various data forms (i.e., image, text, etc.). For instance, in Table VI, almost all RAUC values of CertPri are the highest, except RAUC-all on the speech and graph data forms. We investigate their model feature spaces and find that their decision boundaries are smoother than the others', which causes gradient vanishing. Therefore, we extend the sampling radius to \(\mathcal{R}=0.05\max(\mathbf{x})\), which facilitates maximum gradient norm estimation based on GEVT. After the radius extension, CertPri realizes the highest average RAUC-all, improving to 0.9127 and 0.8817 for the speech and graph data forms, respectively. Additionally, the average RAUC of CertPri is 1.13\(\sim\)1.30 times that of the baselines for unstructured data forms, but only 1.07 times for structured data. We speculate that the gradient vanishes during back propagation due to the sparse coding of structured data. Gradient vanishing is beyond the scope of this paper, but it can be mitigated by batch normalization [67] and non-saturating activation functions [50].
**Effectiveness on various data types of test inputs**. CertPri largely outperforms all baselines against adaptive attacks in terms of all average RAUC values, while approaching the SOTA baseline (i.e., PRIMA) against adversarial attacks. For instance, in Table VII, the average RAUC values of CertPri against adaptive attacks range from 0.7006 to 0.8827 with average improvements of 64.73%\(\sim\)114.85% compared with DeepGini and 49.55%\(\sim\)64.11% compared with PRIMA, respectively. This is mainly because we provide a formal guarantee of movement costs in feature space, which cannot be an objective of adaptive attacks. Besides, the average RAUC-100 and RAUC-300 gaps between PRIMA and CertPri are both less than 0.02, which is acceptable and does not hinder its practical application in various data types.
**Effectiveness on various training scenarios**. CertPri outperforms DeepGini and shows competitive performance with PRIMA in both training scenarios. For instance, in Table VIII, the average RAUC values of CertPri range from 0.9769 to 0.9908 with average improvements of 81.20%\(\sim\)116.56% compared with DeepGini in the poisoning scenario, and range from 0.8274 to 0.9532 with average improvements of 11.10%\(\sim\)17.92% compared with DeepGini in transfer learning. Besides, the average RAUC-100 and RAUC-500 gaps between PRIMA and CertPri are both less than 0.02. We speculate that the purity assumption of DeepGini is not tenable in the two training scenarios [24], whereas CertPri and PRIMA facilitate their prioritization based on the movement cost and mutation perspectives, respectively.
**Effectiveness on various prioritization scenarios**. CertPri largely outperforms all baselines in terms of all RAUC values in both prioritization scenarios, which is beneficial to identify bug-revealing inputs in software engineering testing with privacy requirements. For instance, in Table IX, the average RAUC values of CertPri range from 0.7909 to 0.9039 with average improvements of 13.66%\(\sim\)44.65% compared with baselines in the white-box scenario, and range from 0.7227 to 0.8688 with average improvements of 8.94%\(\sim\)92.12% compared with baselines in the black-box scenario. The outstanding performance of CertPri is mainly because it leverages the Weibull distribution to determine the exact maximum gradient norm in the white-box scenario, and adopts gradient estimation to satisfy the black-box scenario.
_Answer to RQ1_: CertPri outperforms baselines in two aspects in terms of effectiveness: (1) _overall_-it significantly improves 81.84%, 47.48% and 18.65% RAUC values on average compared with surprise-based, confidence-based and mutation-based baselines, respectively; (2) _specific_-it improves 20.66%, 43.39%, 28.96% and 38.88% RAUC values on average compared with baselines (i.e., DeepGini and PRIMA) for various data forms, data types, training scenarios and prioritization scenarios, respectively.
### _Efficiency (RQ2)_
How _efficient_ is CertPri in prioritizing test inputs?
When answering this question, we refer to the prioritization time costs in the white-box scenarios with image data form (i.e., ID: 1-36). The evaluation results are shown in Table X, where the time cost of PRIMA includes input mutation, model mutation and ranking model training. Here we have the following observation.
**Prioritization efficiency**. CertPri prioritizes test inputs more efficiently than mutation-based methods and is competitive with confidence-based methods, which meets the rapidity requirements of software engineering testing. For instance, in Table X, the efficiency of CertPri is 41.17 times and 52.86 times that of DSA and PRIMA, respectively. This is mainly because CertPri only needs to perform gradient computation based on back propagation and extreme value estimation, both of which are lightweight operations. Besides, in Table X, the time cost of CertPri is 1.25 and 2.94 times that of MCP and DeepGini on average, respectively. The reason is that CertPri involves iterative operations in the extreme value estimation, which increases time costs. Note that CertPri adopts a double-minimum strategy to ensure its effectiveness, which leaves room for efficiency improvements. We can slightly reduce the \(N_{b}\), \(N_{rsb}\), \(\mathcal{R}\) values in **Algorithm 1** to further improve its efficiency without loss of effectiveness.
### _Robustness (RQ3)_
How _robust_ is CertPri against adaptive attacks based on its certifiability?
When reporting the results, we focus on the following aspects: the robustness against adaptive attacks and the utility of robustness. The evaluation results are shown in Table XI and Figure 4.
Implementation details for robustness evaluation. (1) To measure CertPri's robustness, we refer to the variation of RAUC values between original test inputs (ID: 1, 9, 17, 25) and adaptive attacked test inputs (ID: 6-8, 14-16, 22-24, 30-32), i.e., RobR, as shown in Table XI. (2) To demonstrate the robustness utility, taking adaptive attacks on ImageNet dataset as an example (ID: 22-24), we combine CertPri with each baseline to show the promotion effect of CertPri on baselines, as shown in Figure 4, where \(x\)-axis represents the percentage of CertPri components added to baselines.
**Prioritization robustness**. In all cases, CertPri always performs the most robust prioritization results against various adaptive attacks, which facilitates stable identification of bug-revealing inputs. For instance, in Table XI, the average RobR values of CertPri against adaptive attacks range from 99.93% to 103.60%, which is 1.50\(\sim\)1.71 times that of baselines. The outstanding performance of CertPri is mainly because it simplifies the prioritization task into a lower bound measure of the movement cost, which has been formally guaranteed in Section III-B.
**Utility of robustness**. CertPri is not only immune to various adaptive attacks, but also facilitates the robustness of baselines through weighted combinations. For instance, in Figure 4, all curves show an upward trend after combining with CertPri and are always higher than their initial value without CertPri. Furthermore, we speculate that for an empirical prioritization method that outperforms CertPri in general, it can be combined with CertPri to improve robustness against adaptive attacks.
_Answer to RQ3_: CertPri largely outperforms baselines in terms of robust prioritization with average robustness improvements of 41.37%\(\sim\)98.91%. Besides, its robustness can be leveraged to facilitate other methods.
### _Generalizability (RQ4)_
How _generic_ is CertPri in prioritizing test inputs?
When reporting the generalizability, we focus on the ranking of RAUC values for different methods in each subject. The evaluation results are illustrated as radar charts in Figure 5.
Implementation details for generalizability evaluation. (1) Calculate the GenRew value of each dimension separately. Take the task dimension as an example, which includes classification and regression. First, we select all subjects belonging to the classification task and calculate GenRew according to Equation (9), denoted as GR\({}_{c}\). Then, we select all subjects belonging to the regression task and compute GenRew, denoted as GR\({}_{r}\). Finally, we compute the average of GR\({}_{c}\) and GR\({}_{r}\) as the task-dimensional GenRew. (2) Repeat the above operations for the remaining 5 dimensions (i.e., data form/type, structure, training/prioritization scenarios).
**Prioritization generalizability**. CertPri always outperforms all baselines against various dimensions in terms of all average GenRew values. For instance, in Figure 5, the area of CertPri covers all baselines. More specifically, the average GenRew values of CertPri range from 0.9040 to 0.9813 with average improvements of 32.81%\(\sim\)238.54% compared with baselines. This is mainly because CertPri's calculation only involves gradient derivation based on back propagation, which is easy to implement for any DNN. Thus, CertPri can be generalized to various dimensions.
Fig. 4: The promotion effect of CertPri on the baselines when combined with each baseline.
_Answer to RQ4_: CertPri is more generic than baselines in all six dimensions with average generalizability improvements of 32.81%\(\sim\)238.54%.
### _Guidance (RQ5)_
Can CertPri _guide_ the retraining of DNNs to improve their performance and robustness?
When reporting the guidance, we focus on two aspects: accuracy improvement and robustness improvement for DNNs. The evaluation results are illustrated as boxplots in Figure 6.
Implementation details for guidance evaluation. Take the classification on CIFAR10 as an example. (1) In terms of performance (ID: 1, 9), we sample original data prioritized at the front 1% and 5% in the training set. We set epoch=5 for retraining due to the small number of data. We compare the test accuracy. (2) In terms of robustness (ID: 3-5, 11-13), we sample adversarial data prioritized at the front 1% and 5% in the adversarial set. We mix the sampled data with the original training set, and set epoch=2 for retraining due to a large number of data. Repeat the above operations 5 times.
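A minimal sketch of this retraining protocol (hypothetical helper names; `priority_order` is assumed to index the pool from most to least bug-revealing):

```python
import numpy as np

def retrain_with_front(model, x_pool, y_pool, priority_order, frac=0.01, epochs=5):
    """Retrain only on the front-ranked inputs. For the robustness setting
    described above, the sampled adversarial inputs would instead be mixed
    back into the original training set before calling fit."""
    k = max(1, int(frac * len(priority_order)))
    idx = np.asarray(priority_order[:k])      # front 1% (or 5%) of the pool
    model.fit(x_pool[idx], y_pool[idx], epochs=epochs, verbose=0)
    return model
```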
**Accuracy improvement for DNNs**. The original test inputs prioritized at the front facilitate model accuracy through retraining, where CertPri demonstrates SOTA accuracy improvement. For instance, in Figure 6 (a), the box position of CertPri is significantly higher than that of baselines. More specifically, the average accuracy improvements of CertPri range from 4.50% to 8.36%, which is 1.65\(\sim\)3.47 times that of baselines. It demonstrates CertPri's outstanding guidance for the accuracy improvement of DNNs.
**Robustness improvement for DNNs**. The adversarial test inputs prioritized at the front facilitate model robustness through retraining, where CertPri outperforms surprise-based methods and shows competitive performance with confidence-based and mutation-based methods. For instance, in Figure 6 (b), the box position of CertPri is significantly higher than that of LSA and DSA, while close to that of MCP, DeepGini and PRIMA. It demonstrates CertPri's guidance for the robustness improvement of DNNs.
_Answer to RQ5_: CertPri guides DNNs' retraining with only first 1% or 5% prioritized test inputs, which improves accuracy by 6.15% and robustness by 31.07% on average.
## VI Threats to Validity
Three aspects may become the threats to validity of CertPri.
The _internal_ threat to validity mainly lies in the center position determination. There are differences in movement costs to reach different centers, especially for non-uniformly distributed data in feature space. To reduce the internal threat, we customize the center position for each test input in **Definition 1**, i.e., relative center. Besides, we collect a large number of subjects with great diversity, and perform extensive experiments to verify CertPri's utility.
The _external_ threats to validity mainly lie in the non-differentiable component and gradient vanishing. To reduce these threats, we further extend **Theorems 1** and **2** to a special case of non-differentiable functions, i.e., a model with ReLU activation, as shown in the supplementary materials at https://anonymous.4open.science/r/CertPri/SupplementaryMaterials.pdf. Then, we leverage batch normalization and non-saturating activation to reduce the probability of gradient vanishing and enlarge the sampling radius of the hyper-ball.
The _construct_ threats to validity mainly lie in the hyperparameters in CertPri, including \(N_{b}\), \(N_{rsb}\) and \(\mathcal{R}\) values in **Algorithms 1**. Larger hyperparameter values produce better effectiveness, but reduce efficiency. To reduce the threat from the hyperparameters, we conduct a double-minimum strategy. Besides, the norm value \(p\)=2 and the extreme value distribution type is Weibull in our experiments. In future work, we can explore prioritization results for various norm types and extreme value distributions.
## VII Related Works
To solve the labeling-cost problem in DNN testing, several works on test input prioritization have been proposed [22, 23, 24, 25, 26, 27, 28, 29, 69].
From the perspective of statistical analysis, there are coverage-based [20, 26, 70, 12], surprise-based [22, 27, 68, 28], and confidence-based methods [23, 25, 29]. Feng _et al._ [23] comprehensively analyzed coverage-based methods and concluded that their effectiveness and efficiency are unsatisfactory for prioritization tasks. To improve effectiveness, Byun _et al._ [22] prioritized test inputs based on surprise adequacy metrics. Zhang _et al._ [27] observed the activation pattern of neurons, and produced prioritization results according to the activation patterns between the training set and test inputs. Ma _et al._ [68] considered the interaction between test inputs and model uncertainty, and determined bug-revealing inputs with higher uncertainty. These surprise-based methods improve performance, but their prioritization results depend on the quality of the training set. Furthermore, Zhang _et al._ [29] prioritized test inputs based on noise sensitivity analysis, independent of the training set. Shen _et al._ [25] proposed MCP, which clusters test inputs into boundary areas and assigns priority so as to select them evenly. Feng _et al._ [23] proposed DeepGini, which prioritizes test inputs by measuring set impurity. These confidence-based methods are effective and efficient, but only apply to classification tasks.

Fig. 5: Generalizability comparison for six dimensions across all subjects, measured by GenRew.

Fig. 6: Guidance comparison of accuracy and robustness improvements for different methods under the first 1% and 5% prioritized data sampling (from left to right).
Drawing lessons from the mutation view in software engineering [71, 72], Wang _et al._[24] proposed PRIMA, which gives priority to test inputs that generate different predictions through diversity mutations (i.e., input-level and model-level). PRIMA demonstrates SOTA performance, but cannot be applied to black-box scenarios.
Different from them, CertPri prioritizes test inputs based on a movement view in feature space. Besides, it takes robustness certification into account. Robustness certification [45, 48, 73] provides formal guarantees for DNNs against norm-bounded attacks, which also strengthens CertPri against adaptive attacks. To the best of our knowledge, CertPri is the first to consider prioritization robustness and introduce formal guarantees to provide certifiability.
## VIII Conclusions
We propose CertPri, a certifiable prioritization method that identifies bug-revealing test inputs earlier, to efficiently solve the labeling-cost problem in DNN testing and build trustworthy deep learning systems. CertPri provides a new perspective on prioritization, reducing the problem of measuring misbehavior probability to that of measuring movement difficulty in feature space. Based on this view, we give formal guarantees on the lower bound \(\gamma_{L}\) of the movement cost, and compute the \(\gamma_{L}\) value based on GEVT. The priority of each test input is determined in ascending order of its \(\gamma_{L}\) value. Furthermore, we generalize CertPri to black-box scenarios via gradient estimation. CertPri is compared with baselines on various tasks, data forms, data types, model structures, and training and prioritization scenarios. The results show that CertPri outperforms the baselines in terms of effectiveness, efficiency, robustness, generalizability and guidance.
|
2308.11142 | Graph Encoding and Neural Network Approaches for Volleyball Analytics:
From Game Outcome to Individual Play Predictions | This research aims to improve the accuracy of complex volleyball predictions
and provide more meaningful insights to coaches and players. We introduce a
specialized graph encoding technique to add additional contact-by-contact
volleyball context to an already available volleyball dataset without any
additional data gathering. We demonstrate the potential benefits of using graph
neural networks (GNNs) on this enriched dataset for three different volleyball
prediction tasks: rally outcome prediction, set location prediction, and hit
type prediction. We compare the performance of our graph-based models to
baseline models and analyze the results to better understand the underlying
relationships in a volleyball rally. Our results show that the use of GNNs with
our graph encoding yields a much more advanced analysis of the data, which
noticeably improves prediction results overall. We also show that these
baseline tasks can be significantly improved with simple adjustments, such as
removing blocked hits. Lastly, we demonstrate the importance of choosing a
model architecture that will better extract the important information for a
certain task. Overall, our study showcases the potential strengths and
weaknesses of using graph encodings in sports data analytics and hopefully will
inspire future improvements in machine learning strategies across sports and
applications by using graph-based encodings. | Rhys Tracy, Haotian Xia, Alex Rasla, Yuan-Fang Wang, Ambuj Singh | 2023-08-22T02:51:42Z | http://arxiv.org/abs/2308.11142v1 | Graph Encoding and Neural Network Approaches for Volleyball Analytics: From Game Outcome to Individual Play Predictions
###### Abstract
This research aims to improve the accuracy of complex volleyball predictions and provide more meaningful insights to coaches and players. We introduce a specialized graph encoding technique to add additional contact-by-contact volleyball context to an already available volleyball dataset without any additional data gathering. We demonstrate the potential benefits of using graph neural networks (GNNs) on this enriched dataset for three different volleyball prediction tasks: rally outcome prediction, set location prediction, and hit type prediction. We compare the performance of our graph-based models to baseline models and analyze the results to better understand the underlying relationships in a volleyball rally. Our results show that the use of GNNs with our graph encoding yields a much more advanced analysis of the data, which noticeably improves prediction results overall. We also show that these baseline tasks can be significantly improved with simple adjustments, such as removing blocked hits. Lastly, we demonstrate the importance of choosing a model architecture that will better extract the important information for a certain task. Overall, our study showcases the potential strengths and weaknesses of using graph encodings in sports data analytics and hopefully will inspire future improvements in machine learning strategies across sports and applications by using graph-based encodings.
Department of Computer Science, University of California, Santa Barbara
{rhystracy, haotianxia, yfwang, ambuj}@cs.ucsb.edu, [email protected]
## Introduction
As the sport of volleyball has gotten more popular and players have begun joining at younger and younger ages, the level of play has also increased. This increase in popularity and the subsequent improvements in player skill have led to increasing demands for tactical analysis and better game strategies. These demands come with a greater necessity for computational analysis of the sport.
With recent increases in interest in sports data analytics across all sports, we have seen an increasing number of studies looking into predicting game events [14], analyzing team and individual player performance [11], overall sport development [15], and predicting or analyzing overall team performance across sports. For example, the sports of Basketball, Soccer, and Baseball have seen several studies.
Basketball has seen several datasets released and other analyses [18, 19], [20], [21, 22, 23] for predicting game outcomes, improving player development, predicting rising stars, or identifying opposing teams' offensive and defensive strategies. Soccer [10, 13, 14, 20, 24, 25, 26, 27, 28, 29, 30] has also seen studies focused on a variety of tasks, including game event and outcome predictions, posture analysis, game lineup prediction, and injury risk assessment.
Despite growing interest in sports analytics, studies on the sport of Volleyball have been limited so far. The few studies that have been conducted have tended to have small scopes and use basic naive approaches, yet they have yielded a promising start and a good baseline to compare with. With the increasing need for more and more sophisticated data analytics strategies, we wish to introduce specialized encodings and models for the sport of volleyball to improve upon these current baseline approaches without the need for gathering additional data.
## Related Work
There have been a few recent datasets and studies for indoor volleyball[19, 20], but they are primarily focused toward computer vision and are not the most useful for tactical analysis or tracking in-depth game statistics since they are missing several important game variables. There has also been a recent beach volleyball dataset[20] that has been more useful for tactical analysis, but due to differences between beach volleyball and indoor volleyball-primarily the additional players and more strict positions in indoor volleyball-this dataset is limited solely to beach volleyball analysis. Lastly, a recent study[20] introducing a specialized indoor volleyball dataset has made a noticeable leap in the field.
Xia et al.[20] has introduced a simple yet powerful natural language to represent a volleyball rally
as a sequence of rounds that each consist of a sequence of 1-4 contacts (pass, set, hit, block) and gathered a large dataset of NCAA and Professional men's volleyball games. This dataset has allowed much deeper and more useful game statistics to be captured and analyzed, and this paper also demonstrated some promising results in several never-before-analyzed tactical analysis tasks. The authors demonstrated some impressive results with simple naive models when predicting the winner of a rally, predicting where a setter will set the ball, and predicting what type of hit a hitter will use. All of these tasks are valuable for a team to determine more optimal offensive and defensive strategies under different scenarios. This paper does, however, leave room for improvement. The data used in this study is raw, the approaches are naive, and the authors never explore any forms of encoding their data to improve performance, since they are simply introducing baselines demonstrating what their language is capable of. Given the temporally sequential format of the volleyball language, we decided to explore temporal graph-based encodings for this language and dataset.
Graph-based encodings have shown promising results for analyzing other sports, such as American football, basketball, and soccer. One recent study uses graph encodings and Graph Neural Networks to predict various sports outcomes [14]. This paper focuses on creating a sports-agnostic way of representing game-states using graph encodings. Through this technique, they are able to capture inter-player relationships and local player relationships that can otherwise not be taken into account when training a model. Further, the paper tests its player-specific graph approach on American football and a popular esports game, Counter-Strike. Through their methods, they demonstrate a reduction in test loss of 20% and 9% for football and Counter-Strike, respectively. A similar work uses GNNs for predicting future player locations and movements [15]. The work focuses on multi-agent sports such as basketball and soccer and takes advantage of both GNNs and Variational RNNs to generate these future locations. Through these techniques, they show that the statistical player distributions of their generative model's predictions outperform those of previous works. Further, they also test their model using conditional prediction to answer questions such as: How will the players move if A passes to B instead of C? Insight into these conditional relationships is highly tactically useful when determining new strategies or how to optimize individual play.
Since many sports can have data be well represented with graph structures, graph encodings would likely work well in the sport of volleyball in several areas. With the current leading dataset[16] especially, using a graph encoding might better represent the data. For instance, the original dataset does not give the deep learning models any information as to which variables belong to which contact in a given "ball round". On top of this, there is no information describing the temporal ordering of the contacts given to the models. To give this additional context information to the deep learning models and allow for a better representation of a volleyball rally without needing to gather any additional data, we propose encoding the volleyball round data into a graph structure and using Graph Neural Networks to complete the same tasks.
## Graph Encoding
To analyze our graph encoding, we must first look at what information the baseline dataset has to offer.
### Underlying Data Representation
For this study, we will use the leading indoor volleyball dataset [16]. This dataset splits volleyball matches into a sequence of rallies and splits each rally into a sequence of rounds. Each round contains variables describing round information (team and round number), various locations of ball contacts, pass and set ratings, hit type used, blocking information, and serve type. As explained earlier, all of this information--besides the two round level variables--relates to each individual ball contact. Pass contact location and pass rating relate to the pass contact, set rating and location relate to the set contact, etc. All of this points to a contact-level encoding being an excellent option to test. Traditional sports GNN based approaches usually focus on mapping player interactions with graph structures, as the interactions and locations of players are typically the most important variables to analyze for free-flowing sports such as basketball or soccer. However, since the sport of volleyball has the teams separated by the net and a very rigid rule set to follow, the player interactions and locations become less useful to analyze since there is less freedom, and instead the interactions with and locations of the ball and ball contacts become much more important information to focus on. Thus, this VREN dataset [16] primarily focuses on ball interactions, and our analysis with a contact-level graph encoding will attempt to augment the information from these ball interactions.
### Encoding Methods
In order to attach the variables to their correct contacts and encode the temporal format of the contacts, we must consider each contact as a node of the graph and add each contact's important information to that node. For example the set contact node will hold the setter location, set rating, and set destination variables and the hit contact node will hold the hitter location and hit type variables. Then to encode the temporal aspect of the contacts, we will connect each consecutive contact with a one way edge from first contact to second contact. For example, in a round with a pass, a set, and a hit, we would connect the nodes with one way edges from the pass node to the set node and from the set node to the hit node. All edges will have equal weights, and the dataset does not include any other useful information to include in edge attributes. Though a simple graph encoding method, it fundamentally changes how a neural network will analyze the data.
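As a concrete illustration, a single round could be encoded along the following lines (a hypothetical sketch; the feature names and sizes are placeholders rather than the exact VREN schema):

```python
import numpy as np

def encode_round(pass_feats, set_feats, hit_feats, block_feats, dim=8):
    """One node per contact, zero-padded to a shared feature size, with
    directed edges between consecutive contacts (pass -> set -> hit -> block)."""
    def pad(v):
        v = np.asarray(v, dtype=np.float32)
        return np.pad(v, (0, dim - len(v)))

    x = np.stack([pad(pass_feats), pad(set_feats), pad(hit_feats), pad(block_feats)])
    a = np.zeros((4, 4), dtype=np.float32)
    for src, dst in [(0, 1), (1, 2), (2, 3)]:  # one-way temporal edges
        a[src, dst] = 1.0
    return x, a  # node-feature matrix X and adjacency A

# e.g. pass (location, rating), set (rating, location), hit (location, type), block:
x, a = encode_round([0.3, 0.9], [0.5, 0.7, 0.2], [0.6, 1.0], [0.0])
```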
### Rally Outcome Prediction Task
Since the baseline rally outcome prediction task in the VREN paper[16] considers all information in a round (except the winning team and win/lose reason), we will use all the nodes for a given round to make a prediction.
As such, we end up with a graph involving a pass node, then a set node, then a hit node, then a block node. These graphs involve exactly the same information as is used in the baseline task, but with a new graph encoding.
### Set Location Prediction Task
The baseline set location prediction task in the VREN paper only uses the information in a round up to when the setter is about to set the ball. If we were to follow this same strategy, our encoding would involve only 2 nodes (pass and set nodes) in each graph, and very small graph sizes seem to yield poor GNN performance. However, we can include information from the hit and block nodes of the previous round (if there is a previous round in that rally). From further baseline testing, this additional information does not noticeably affect performance in any baseline model, so any performance changes with this task will be solely from the graph encoding. Therefore, for this task each graph involves the previous round's hit node (if it exists), the previous round's block node (if it exists), the current round's pass node, and the current round's set node only including information from before the setter contacts the ball--such as where the setter will set the ball from.
### Hit Type Prediction Task
Similar to the set location prediction task, the baseline hit type prediction task in the VREN paper only uses the information in a round up to when a hitter is about to hit the ball. For this task, we decided to add in the previous round's block node to reach a consistent graph size of 4 nodes for better GNN performance. As with the set prediction task, this additional information yielded no difference in baseline performance. So for this task, each graph involves the previous round's block node (if it exists), the current round's pass node, the current round's set node, and the current round's hit node only including information from before the hitter contacts the ball--such as the location the hitter will be hitting from.
## Methods
With the graph encodings set up, next we turn to the models we will test. To keep comparison consistent with the baseline, we will test a GCN to compare with CNN, a Graph GRU to compare with LSTM, and a Graph Transformer to compare with Transformer. To implement all of these models, we will use Spektral, a GNN package built on TensorFlow and Keras. Since this package does not include a Graph Transformer Convolution layer, we implemented one modeled on the Graph Transformer architecture introduced by Shi et al. (2021), which has shown excellent results for graph-based learning tasks. We used TensorFlow and Keras to build this Graph Transformer Convolution layer on top of the base MessagePassing layer available in Spektral. This Graph Transformer architecture performs self-attention on graph edges with queries embedded from the node features for the origin of the edge and keys and values embedded from the node features for the terminal of the edge. This architecture also includes gated residual connections between layers, a key factor making this architecture a Transformer.
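Since the exact layer implementation is not reproduced here, the core edge-wise attention it describes can be sketched framework-free as below (a simplified reading of the description above: multi-head attention and the gated residual connections of the full Shi et al. (2021) layer are omitted, and the aggregation direction is an illustrative choice):

```python
import tensorflow as tf

def edge_attention(x, edges, wq, wk, wv):
    """x: [n, d] node features; edges: [e, 2] int tensor of (src, dst) pairs.
    Query comes from the edge origin, key/value from the edge terminal;
    messages are softmax-normalised over the edges sharing an origin node."""
    src, dst = edges[:, 0], edges[:, 1]
    q = tf.gather(x @ wq, src)                          # query from the origin
    k = tf.gather(x @ wk, dst)                          # key from the terminal
    v = tf.gather(x @ wv, dst)                          # value from the terminal
    d = tf.cast(tf.shape(wq)[-1], tf.float32)
    score = tf.reduce_sum(q * k, axis=-1) / tf.sqrt(d)  # one logit per edge
    n = tf.shape(x)[0]
    m = tf.math.unsorted_segment_max(score, src, n)     # per-origin softmax
    e = tf.exp(score - tf.gather(m, src))
    z = tf.math.unsorted_segment_sum(e, src, n)
    alpha = e / tf.gather(z, src)
    return tf.math.unsorted_segment_sum(alpha[:, None] * v, src, n)  # [n, d']
```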
Figure 1: Our framework for taking raw round sequence data, encoding into a Graph structure, then making a rally outcome prediction
### Models Tested
For the rally outcome prediction task, we will analyze all three of these GNN structure's performances just like the baseline, but for the other two tasks, we will analyze the Transformer and CNN/GCN architectures only since the RNNs tend to struggle on these volleyball tasks for a number of reasons. The GCN architecture we used for all tasks involved one Graph Convolution Layer, a graph global pooling layer to get a graph level value, then 3 dense layers. The Graph GRU architecture we used involved a Gated Graph Convolution Layer with a GRU Gate, a graph global pooling layer, then 2 dense layers. Lastly the Graph Transformer architecture we used for all tasks involved one custom Graph Transformer Convolution Layer, a graph global pooling layer, then 2 dense layers. The rally outcome models had a single float output with a sigmoid activation function on the final output layer to yield a probability value between 0 and 1; these models used MSE as their loss function for training, and MAE, binary accuracy, AUC, and Brier Score as metrics for validation and testing. The set location models had an array of 9 values as an output with a softmax activation function on the final output layer to yield probability predictions for each set location that sum to 1; these models used the cross-entropy loss function for training and categorical accuracy as metrics for validation and testing. The hit type models had an array of 8 values as an output with a softmax activation function on the final output layer to yield probability predictions that sum to 1; these models also used the cross-entropy loss function for training and categorical accuracy as metrics for validation and testing.
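As a rough illustration, the GCN variant for the rally outcome task could be assembled in Spektral/Keras along these lines (layer widths are our guesses rather than the exact configuration, and the adjacency is assumed to be GCN-normalised, dense, and in batch mode):

```python
import tensorflow as tf
from spektral.layers import GCNConv, GlobalAvgPool

n_nodes, n_feats = 4, 8                              # one node per contact
x_in = tf.keras.Input(shape=(n_nodes, n_feats))
a_in = tf.keras.Input(shape=(n_nodes, n_nodes))      # normalised adjacency
h = GCNConv(32, activation="relu")([x_in, a_in])     # one graph convolution
h = GlobalAvgPool()(h)                               # graph-level representation
h = tf.keras.layers.Dense(32, activation="relu")(h)
h = tf.keras.layers.Dense(16, activation="relu")(h)
out = tf.keras.layers.Dense(1, activation="sigmoid")(h)  # P(team wins rally)
model = tf.keras.Model([x_in, a_in], out)
model.compile(optimizer="adam", loss="mse",
              metrics=["mae", "binary_accuracy", tf.keras.metrics.AUC()])
```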
### Hit Type Prediction Task Modification
We decided to modify the original Hit Type Prediction task to make it even more useful for tactical analysis. To do this, we decided to ignore blocked hits. First, this will make it easier to predict hit types as the other hit types are relatively predictable given the set rating, locations, and other information in this dataset, but whether the ball is blocked or not depends on much more than the information in the dataset. This means that including the "blocked" hit type will end up confusing the models since they are missing the necessary information to predict if a player will be blocked. Second, removing blocked hits will focus the task on more important predictions. Predicting if a hitter will be blocked provides essentially no use in tactical analysis, as strategy on both the offensive and defensive sides will not change if you know that a hitter will or will not be blocked. The offensive team will see no tactical gain in not covering their hitter when they know they won't be blocked and the defensive team will see no tactical gain in allowing their non-blocking players to relax on the court because they know the ball will be blocked. The other hitting types are much more tactically useful to be able to predict, so we will focus the Hit Prediction Task on those hitting types and seek to improve performance there. We ran the same baseline models on our modified version of this task for comparison with our graph-based approach.
## Results and Analysis
We found that the graph encodings provided noticeable improvements in most cases and consistently standardized models' performances between the NCAA and Professional games. In the baseline models, the models would consistently perform much better or worse on either the NCAA or Pro game for every task. This was due to differences in the level of play and consistency between the NCAA and Professional games in the dataset. For example, the baseline models were better at predicting rally outcomes and hit types in the professional games because they are less random and more deterministic given the better, more consistent, and more mentally strong play by professional players; additionally, the baseline models were worse at predicting the set locations in the professional games because professional setters are more skilled and make more randomized sets to confuse the opposing team. With the graph encodings, however, the difference in model performance between NCAA and Pro games was small or negligible in almost all cases. This decrease in the performance gap is because the graph encoding makes the underlying volleyball relationships in the data much clearer to the models. In some tasks, these relationships are more or less clear to the baseline models in NCAA or Professional play, but by manually explaining these relationships to the models, the relationships become equally clear and thus the models' performances become more similar. For most cases, this improved understanding of the underlying relationships between contacts in the game can boost performance, and for others--where contact-by-contact information may not be as useful or where models had difficulty analyzing the additional information--it struggled to improve performance.
Next, we present an in-depth analysis of each task's results.
### Rally Outcome Prediction Results
Overall, the use of our graph encodings and GNNs yielded significant improvements for the rally outcome prediction task as shown in Table 1 below. GCN and Graph GRU both yielded huge improvements for both the NCAA and Pro testing games over baseline in all metrics but brier score. Graph Transformer yielded a large improvement in NCAA game performance and a slight but noticeable boost in Pro game performance in all metrics except for brier score (which performed slightly better in the Pro game and slightly worse in the NCAA game). These improvements suggest that the graph encoding gives the model a much more detailed picture of what is happening in the rally allowing for more detailed--and thus better performing--predictions. Since a Transformer does an excellent job at analyzing the relationships between different variables with its attention mechanism, it does not benefit as much from the graph encoding. The Graph Transformer did perform noticeably better on the NCAA game, however, so this again suggests that the lower level of play in the NCAA game made it harder to find these underlying relationships as compared to the Pro game, but the graph encoding was able to make these relationships more clear and thus standardized performance between the two levels of play.
### Set Location Prediction Results
The set location prediction task saw noticeable improvements when using our graph encoding, as shown in Table 2. Not only did a base CNN perform better than the baseline Transformer (the only model tested in VREN (Xia et al. 2022)), but the GCN improved upon the CNN's performance even further. Additionally, the Graph Transformer saw a similar performance boost over the baseline Transformer. With our addition of the baseline CNN and the improved performance with the graph encodings, we gained a much better understanding of what information goes into a setter's decision to set the ball. First, given the fact that setters try to be as random as they can be when making set location choices, a simpler model seems to do better. The simple CNN model gives a 2-3% performance boost over the baseline Transformer, which is pretty noticeable in a hard-to-predict task, and GCN yields a similar performance boost over Graph Transformer. Secondly, it seems that the graph encoding and additional information from the previous round do help improve set prediction performance, but the improvement is not as significant as in the previous task. This result, combined with the better performance of the convolution models over the transformer models, would suggest that the location a setter will set to depends less on contact-by-contact information than the outcome of the rally does. Instead, the location a setter will set to likely depends more on simpler information (such as the setter's location on the court, pass rating, etc.), which would allow a simpler convolution model to extract this information better. Additionally, this theory would match up with volleyball experts' opinions on what influences the set location during a rally. Lastly, there is likely other useful information for predicting where a setter will set the ball that is not included in this dataset, such as how rushed the setter is and how high the pass is.
### Hit Type Prediction Results
For the hit prediction task, the graph encodings and additional information from the previous round yielded mixed results, but for the most part not much change, as shown in Table 3. The graph encoding noticeably improved the performance of the Graph Transformer over the baseline for the NCAA games and was relatively the same for the Pro games, and the GCN performed slightly worse than the baseline CNN for both sets of games. These results would suggest that the encoded contact-by-contact information is not as important for this task; the additional information may slightly improve models that can use this information well--such as a Transformer--but may harm simpler models that cannot analyze this information as well. Overall, this would point to the hit type a hitter uses being heavily influenced by individual variables, but with some small influence from other contact-by-contact information. When including the "blocked" hit type, the simpler base CNN model outperformed the baseline Transformer model, but when this hit type is excluded they performed identically. This would suggest that a player getting blocked may depend more heavily on simple information (such as the location of the hitter or number of blockers), but the hit type a hitter chooses to use may depend slightly more on the contact-by-contact information. Additionally, removing the "blocked"
Table 1: Rally outcome prediction task performance of each model on college-level and professional games. There are significant improvements for all three graph models compared to the baseline performance. The Graph Transformer gives the best result.

| Level of game | Model | Binary Accuracy (%) | AUC | Brier Score | Mean Absolute Error |
| --- | --- | --- | --- | --- | --- |
| college | Transformer* | 74.38 | 0.82 | 0.18 | 0.34 |
| college | CNN* | 69.06 | 0.75 | 0.20 | 0.40 |
| college | LSTM* | 65.91 | 0.75 | 0.21 | 0.41 |
| college | Graph Transformer | **81.15** | **0.87** | **0.15** | **0.27** |
| college | Graph GRU | 77.81 | 0.86 | 0.33 | 0.31 |
| college | GCN | 78.20 | 0.86 | 0.20 | 0.30 |
| professional | Transformer* | 80.00 | 0.85 | **0.16** | 0.32 |
| professional | CNN* | 71.59 | 0.76 | 0.20 | 0.39 |
| professional | LSTM* | 70.06 | 0.75 | 0.20 | 0.40 |
| professional | Graph Transformer | **81.15** | **0.87** | **0.16** | **0.27** |
| professional | Graph GRU | 77.83 | 0.86 | 0.29 | 0.31 |
| professional | GCN | 78.20 | 0.86 | 0.17 | 0.30 |

\* model prediction results from VREN (Xia et al. 2022).
Table 2: Categorical accuracy for set location prediction in both professional and college-level games. All three models improve on the baseline result.

| Level of game | Model | Categorical Accuracy (%) |
| --- | --- | --- |
| college | Transformer* | 54.65 |
| college | GCN | **59.10** |
| college | CNN | 57.43 |
| college | Graph Transformer | 56.57 |
| professional | Transformer* | 51.65 |
| professional | GCN | **59.10** |
| professional | CNN | 53.30 |
| professional | Graph Transformer | 56.57 |

\* model prediction results from VREN (Xia et al. 2022).
hit type yielded significant improvements in performance in all cases. Just like the set prediction task, this task would very likely benefit from having additional information, such as how tight to the net the set is and where the ball is located in relation to the hitter's body.
### Analysis
From these results, we can gather that predictions for outcomes that are largely influenced by contact-level information (like the rally outcome) will largely benefit from using a graph encoding to bolster the contact-level information inputted into a model. And similarly, outcomes that depend on contact-level information less (like the type of hit a hitter uses) will benefit less or see mixed results. Additionally, the use of our graph encoding allows the models to analyze the game in a much more advanced and human way. The baseline models view the variables in the round globally and then they try to analyze the connections between them. This is similar to how a beginner of the sport would view a volleyball game: they view a whole round and understand general connections between big events that factor into certain outcomes--like which team wins the rally. To truly understand the flow of a rally, however, it's more important to look at how consecutive contacts are influencing each other. For example, it's more important to understand how serving to a certain player may affect the rating and location of the pass, which may limit the players a setter can set to, which can significantly decrease a team's chance of winning that rally. This sort of analysis is already used by coaches, analysts, and fans who are more knowledgeable of the sport to evaluate how a game is going and how to change strategy to improve. GNNs primarily focus their analysis on graph edges, and in our encoding specifically these are the relationships between consecutive contacts. Thus, the use of GNNs with our graph encodings yields a much more advanced analysis of the data, which noticeably improved our results overall. Another takeaway is that these baseline tasks can be significantly improved--both in prediction ability and in tactical usefulness--with simple adjustments. Removing blocked hits greatly improved prediction results at the same time as making the predictions more tactically focused and applicable.
## Conclusion and Future Work
In this paper, we introduce a novel graph encoding to add additional contact-by-contact volleyball context to an already available volleyball dataset without any additional data gathering. This graph encoding can yield large improvements in prediction tasks that depend heavily on this information, but may not yield much benefit for tasks that do not. Ultimately, encodings are specialized tools that will not work in all situations. Overall these results show the potential strengths and weaknesses of using graph encodings in sports data analytics and hopefully will inspire future improvements in machine learning strategies across sports and applications by using graph based encodings. Additionally, we demonstrate the importance of choosing a model architecture that will better extract the important information for a certain task--in some sports analysis tasks a much simpler model will perform noticeably better on the given data. Lastly, we were also able to gain a much better understanding of the underlying relationships in a volleyball rally from these results; for a coach or player, this is extremely useful for making more informed game decisions.
In future studies, we hope to gather more sophisticated data than included in the dataset we explored in this study and analyze other encoding formats to see if that can improve prediction results.
|
2305.18824 | Graph Neural Networks for Contextual ASR with the Tree-Constrained
Pointer Generator | The incorporation of biasing words obtained through contextual knowledge is
of paramount importance in automatic speech recognition (ASR) applications.
This paper proposes an innovative method for achieving end-to-end contextual
ASR using graph neural network (GNN) encodings based on the tree-constrained
pointer generator method. GNN node encodings facilitate lookahead for future
word pieces in the process of ASR decoding at each tree node by incorporating
information about all word pieces on the tree branches rooted from it. This
results in a more precise prediction of the generation probability of the
biasing words. The study explores three GNN encoding techniques, namely tree
recursive neural networks, graph convolutional network (GCN), and GraphSAGE,
along with different combinations of the complementary GCN and GraphSAGE
structures. The performance of the systems was evaluated using the Librispeech
and AMI corpus, following the visual-grounded contextual ASR pipeline. The
findings indicate that using GNN encodings achieved consistent and significant
reductions in word error rate (WER), particularly for words that are rare or
have not been seen during the training process. Notably, the most effective
combination of GNN encodings obtained more than 60% WER reduction for rare and
unseen words compared to standard end-to-end systems. | Guangzhi Sun, Chao Zhang, Phil Woodland | 2023-05-30T08:20:58Z | http://arxiv.org/abs/2305.18824v1 | # Graph Neural Networks for Contextual ASR with the Tree-Constrained Pointer Generator
###### Abstract
The incorporation of biasing words obtained through contextual knowledge is of paramount importance in automatic speech recognition (ASR) applications. This paper proposes an innovative method for achieving end-to-end contextual ASR using graph neural network (GNN) encodings based on the tree-constrained pointer generator method. GNN node encodings facilitate lookahead for future word pieces in the process of ASR decoding at each tree node by incorporating information about all word pieces on the tree branches rooted from it. This results in a more precise prediction of the generation probability of the biasing words. The study explores three GNN encoding techniques, namely tree recursive neural networks, graph convolutional network (GCN), and GraphSAGE, along with different combinations of the complementary GCN and GraphSAGE structures. The performance of the systems was evaluated using the Librispeech and AMI corpus, following the visual-grounded contextual ASR pipeline. The findings indicate that using GNN encodings achieved consistent and significant reductions in word error rate (WER), particularly for words that are rare or have not been seen during the training process. Notably, the most effective combination of GNN encodings obtained more than 60% WER reduction for rare and unseen words compared to standard end-to-end systems.
pointer generator, contextual speech recognition, end-to-end, graph neural networks, audio-visual
## I Introduction
End-to-end ASR systems often struggle with the accurate recognition of rare "long-tail" words that were not present in the training data. To combat this issue, contextual biasing, which applies external contextual knowledge to the ASR system during inference, becomes a crucial factor in addressing the long-tail word problem in various applications [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. Contextual knowledge is often represented as a list (referred to as a _biasing list_) of words or phrases (referred to as _biasing words_) that are likely to appear in a given context. Biasing lists can be sourced from various resources, such as a user's contact book or playlist, recently visited websites, and visual information from presentation slides, _etc._ Despite their infrequent occurrence and thus their limited influence on the overall word error rate (WER), biasing words significantly impact understanding of the content, as biasing words are mostly content words such as nouns or proper nouns, which are crucial for downstream tasks. The inclusion of a word in a biasing list increases its likelihood of being correctly recognised, making contextual biasing a crucial component for the accurate recognition of rare content words.
End-to-end trainable ASR systems [15, 20] are designed to encapsulate all necessary knowledge within a single, static model, making it difficult to integrate dynamic context-specific knowledge at test-time. To overcome this challenge, specialised contextual biasing methods have been proposed, such as shallow fusion (SF) with a weighted finite-state transducer (WFST) or an adapted language model (LM) that incorporates contextual knowledge [1, 2, 3, 11, 12, 14], attention-based deep context approaches [4, 5, 6, 7, 8], as well as deep biasing (DB) with a prefix tree for improved efficiency when dealing with large biasing lists [6, 9, 10]. More recently, contextual biasing components with a pointer generator mechanism [46, 47, 48] that directly modifies the output distribution have been proposed [34, 38], which can be jointly optimised with ASR systems. In particular, the tree-constrained pointer generator (TCPGen) component proposed in [34] builds a neural shortcut by directly interpolating the original model distribution with the TCPGen distribution estimated from contextual knowledge structured as a prefix-tree, based on a dynamic interpolation weight predicted by the TCPGen component. TCPGen performance was further boosted by encoding the prefix-tree using a tree recursive neural network (tree-RNN) [35].
This paper substantially extends the work in [35] and proposes to use three types of graph neural networks (GNN) for prefix-tree encoding in TCPGen 1. These include tree-RNN, graph convolutional network (GCN) [43] with its variant GCNII [42], and GraphSAGE with the max-pooling aggregator [44]. While tree-RNN is a representative GNN model with a single recursive layer, GCN and GraphSAGE are two popular and effective multi-layer GNN designs. Specifically, GCN encodes the tree by utilising spectral representation, while GraphSAGE, as a spatial method, directly explores the graph topology [28]. To further enhance the performance of GNN tree encodings, this paper proposes attentive and bilinear combination approaches to exploit the complementarity between GCN and GraphSAGE. Additionally, this paper introduces an effective parameter-tying scheme for both GCN and GraphSAGE to improve their performance with deeper structures.
Footnote 1: The main code is at [https://github.com/BriansIDP/espnet/tree/GNN](https://github.com/BriansIDP/espnet/tree/GNN).
GNN encodings provide more powerful node representations in the prefix tree of TCPGen, allowing for "lookahead" functionality where each node contains not only its own word piece information but also information about its child branches. This improved node representation in TCPGen leads to more accurate generation probability predictions for biasing words, enabling better contextual biasing by incorporating information about future word pieces during each ASR decoding
step. TCPGen with GNN encodings, as a generic component for end-to-end ASR, is integrated into both attention-based encoder-decoder (AED) [15, 16, 17, 18, 19, 26] and neural transducer architecture (N-T) [20, 21, 22, 23, 24].
Experiments were conducted with two different setups: (1) a simulated contextual ASR task using LibriSpeech audiobook data, and (2) an audio-visual speech recognition pipeline [35] with the AMI meeting data. In addition to the consistent and large reductions in error rates achieved by TCPGen with tree-RNN in [35], the proposed GNN encodings, especially the combined ones, achieved further significant improvements in the word error rate (WER) of rare content words.
The remainder of this paper is organised as follows. Section II reviews related work. Section III introduces the TCPGen component. Section IV describes the details of applying the proposed GNNs. Sections V and VI present the experimental setup and results. Section VII gives conclusions.
## II Related work
### _End-to-end contextual speech recognition_
Recently, various contextual biasing algorithms have been developed for end-to-end ASR. One prominent research stream focuses on representing biasing lists as external weighted finite-state transducers (WFSTs), which are integrated into a class-based language model (LM) via shallow fusion (SF) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. These methods often depend on context prefixes such as "call" or "play", limiting their ability to handle the diverse grammar in natural speech. On the other hand, deep context approaches, often using attention mechanisms, have also been proposed, which encode the biasing list into a vector to use as input for the end-to-end ASR models [4, 5, 6, 7, 8]. While deep context approaches eliminate the dependence on syntactic prefixes seen in SF methods, they require more memory and are less effective for handling large biasing lists.
The study in [9] combined the use of deep context and shallow fusion of a WFST in an N-T, leading to improved efficiency by limiting the biasing vector extraction to a subset of word pieces determined by a prefix tree representation of the biasing list, which is referred to as deep biasing (DB). The prefix-tree-based method was further expanded in [10] to include RNN LMs for shallow fusion, resulting in improved recognition of biasing words. Prior research only analysed industry datasets; however, [10] proposed and validated a simulation of contextual biasing on open-source data by incorporating a large number of distractors into the list of biasing words in the utterance. More recently, [38] and [34] simultaneously proposed a neural shortcut between the biasing list and the final model output distribution. In contrast to [38], TCPGen [34] adopted a structured prefix-tree representation for biasing lists, which also enabled the subsequent development of tree-RNN encodings [35].
### _GNN for speech and language processing_
GNNs have been extensively employed in a multitude of speech and language tasks. In language-related applications, such as sentence-level text classification or word-level sequence labelling, GNNs are often utilized to capture the syntactic dependencies or semantic relations among words in a sentence. Furthermore, the encoding of subword-unit-based tree structures using GNNs [39, 40] has been explored for the purpose of generating more effective word representations. GNNs have also been applied to named-entity recognition [52] and neural machine translation [53, 54].
GNNs also have numerous applications in speech processing, such as text-to-speech synthesis, where GNNs model the syntactic and semantic relationships in the text. In GraphTTS [45], the authors structured the sentence into a hierarchical tree by dividing the utterance into words and then further into characters. This allowed the system to capture prosodic relationships among different parts of the input. Additionally, GNNs are used in paralinguistic tasks as the syntactic encoder, including sentiment classification and hate speech detection tasks [49, 50, 51]. In [35], a tree-RNN structure was used to encode a word piece prefix-tree in the TCPGen component for contextual biasing.
## III Tree-constrained pointer generator
TCPGen is a neural network-based module that employs a combination of symbolic prefix-tree search and a neural pointer generator for contextual biasing, allowing for end-to-end optimisation. The biasing list is structured as a prefix tree at the word piece-level. At each output stage, TCPGen computes a probability distribution over a set of valid word pieces that are constrained by the prefix tree. TCPGen also predicts a dynamic generation probability, signifying the amount of contextual biasing required at the specific output step. The final output distribution is obtained by taking a weighted summation of the TCPGen distribution and the original AED or N-T output distribution (see Figure 1).
The key symbolic representation of the external contextual knowledge in TCPGen is the prefix tree. For simplicity, examples and equations in this section are presented for a specific search path, which can be generalised easily to beam-search with multiple paths. In the example prefix tree with biasing words (turner, vignette and turn) shown in Fig. 2, if the previously decoded word piece is Tur, word pieces n_ and n form the set of valid word pieces \(\mathcal{Y}_{i}^{\text{tree}}\). Denoting \(\mathbf{x}_{1:T}\) and \(y_{i}\) as input acoustic features and output word piece, \(\mathbf{q}_{i}\) as the query vector carrying the decoding history and acoustic information, \(\mathbf{K}=[...,\mathbf{k}_{j},...]\) as the key vectors, scaled dot-product attention is performed between
\(\mathbf{q}_{i}\) and \(\mathbf{K}\) to compute the TCPGen distribution \(P^{\text{ptr}}\) and the output vector \(\mathbf{h}_{i}^{\text{ptr}}\) as shown in Eqns. (1) and (2).
\[P^{\text{ptr}}(y_{i}|y_{1:i-1},\mathbf{x}_{1:T})=\text{Softmax}( \text{Mask}(\mathbf{q}_{i}\mathbf{K}^{\text{T}}/\sqrt{d})), \tag{1}\] \[\mathbf{h}_{i}^{\text{ptr}}=\sum_{j}P^{\text{ptr}}(y_{i}=j|y_{1:i- 1},\mathbf{x}_{1:T})\,\mathbf{v}_{j}^{\text{T}}, \tag{2}\]
where \(d\) is the size of \(\mathbf{q}_{i}\) (see [33]), \(\text{Mask}(\cdot)\) sets the probabilities of word pieces that are not in \(\mathcal{Y}_{i}^{\text{tree}}\) to zero, and \(\mathbf{v}_{j}\) is the value vector corresponding to word piece \(j\).
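As an illustration of this symbolic bookkeeping, a minimal prefix-tree sketch is given below; the word-piece segmentation of the biasing words is a hypothetical assumption for the example, and the class and function names are not from the released code.

```python
# Minimal sketch of the word-piece prefix tree used by TCPGen (illustrative).
# Each node stores one word piece and its children; the valid set at a step
# is the set of children of the node reached by the decoded word pieces.

class TreeNode:
    def __init__(self, piece):
        self.piece = piece      # word piece on this node
        self.children = {}      # word piece -> TreeNode

def build_prefix_tree(biasing_words):
    """biasing_words: word-piece sequences, e.g. [["Tur", "n_"], ...]."""
    root = TreeNode("<root>")
    for pieces in biasing_words:
        node = root
        for p in pieces:
            node = node.children.setdefault(p, TreeNode(p))
    return root

# Hypothetical segmentation of turner, vignette and turn ("_" marks word end).
tree = build_prefix_tree([["Tur", "n", "er_"], ["Vi", "g", "nette_"], ["Tur", "n_"]])
print(sorted(tree.children["Tur"].children))   # ['n', 'n_'] -> Y_i^tree
```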
TCPGen can be employed in both AED and N-T. In AED, the query is the combination of the context vector and the embedding of the preceding word piece, while the keys and values are derived from the decoder word piece embedding using a shared projection matrix. The generation probability in AED is computed from the concatenation of decoder hidden states and TCPGen output vectors \(\mathbf{h}_{i}^{\text{ptr}}\), followed by Sigmoid activation function to be constrained to \((0,1)\). In N-T, the pointer generator is applied to each combination of the encoder and the predictor step, where the TCPGen distribution is calculated using the concatenation of the corresponding encoder and predictor hidden states as the query. Keys and values in N-T are computed from the predictor word piece embeddings. The generation probability for N-T is derived from the joint network output and the TCPGen output vector \(\mathbf{h}_{i}^{\text{ptr}}\). To ensure that the probability of the null symbol in N-T is unchanged, \(P_{i}^{\text{ptr}}(\varnothing|\mathbf{x}_{1:T},y_{1:i-1})\) is set to 0 and the generation probability is scaled by \(1-P_{i}^{\text{mdl}}(\varnothing|\mathbf{x}_{1:T},y_{1:i-1})\) where \(P_{i}^{\text{mdl}}\) is the original model distribution before interpolation.
For better flexibility, an _out-of-list_ (OOL) token is included in \(\mathcal{Y}_{i}^{\text{tree}}\) indicating that no suitable word piece can be found in the set of valid word pieces. To ensure that the final distribution sums to 1, the generation probability, \(P_{i}^{\text{gen}}\), is scaled as \(\hat{P}_{i}^{\text{gen}}=P_{i}^{\text{gen}}(1-P^{\text{ptr}}(\text{OOL}))\), and the final output can be calculated as shown in Eqn. (3).
\[P(y_{i})=P^{\text{mdl}}(y_{i})(1-\hat{P}_{i}^{\text{gen}})+P^{\text{ptr}}(y_{i })P_{i}^{\text{gen}}, \tag{3}\]
where dependencies, \(y_{1:i-1},\mathbf{x}_{1:T}\), are omitted for clarity. \(P^{\text{mdl}}(y_{i})\) represents the output distribution from the standard end-to-end model.
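As a concrete illustration of Eqns. (1)-(3), the sketch below implements one TCPGen output step with NumPy; the shapes, the appended OOL index, and all variable names are illustrative assumptions rather than the released implementation.

```python
# One TCPGen output step (Eqns. (1)-(3)), sketched with NumPy.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def tcpgen_step(q, K, V, valid_ids, p_model, p_gen, vocab_size):
    # valid_ids indexes a (vocab_size + 1)-dim space whose last entry is the
    # OOL token (valid_ids must include it); K and V hold one key/value row
    # per entry of valid_ids; p_model is the (vocab_size,) model distribution.
    d = q.shape[0]
    ool_id = vocab_size
    scores = np.full(vocab_size + 1, -np.inf)   # Mask(.): invalid -> zero prob
    scores[valid_ids] = K @ q / np.sqrt(d)      # Eqn. (1)
    p_ptr = softmax(scores)
    h_ptr = p_ptr[valid_ids] @ V                # Eqn. (2)
    p_gen_hat = p_gen * (1.0 - p_ptr[ool_id])   # scale by 1 - P_ptr(OOL)
    p_out = p_model * (1 - p_gen_hat) + p_ptr[:vocab_size] * p_gen  # Eqn. (3)
    return p_out, h_ptr
```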
### _Biasing-driven LM discounting (BLMD) for TCPGen_
Log-linear interpolation is often used as a technique to incorporate an external LM via SF. Define the source domain data as the text of the training data for the end-to-end model, and the target domain data as the data used to train an external LM such that it generates better probability estimates for the test data. The LM discounting is defined as Eqn. (4).
\[P^{\text{sf}}(y_{i})=P^{\text{mdl}}(y_{i})\frac{P^{\text{tgt}}(y_{i})^{\alpha}}{P^{\text{src}}(y_{i})^{\beta}}, \tag{4}\]

where \(P^{\text{mdl}}(y_{i})\) is the probability from the end-to-end system, \(P^{\text{src}}(y_{i})\) is the probability of the source domain LM and \(P^{\text{tgt}}(y_{i})\) is the target domain LM probability. Extending this idea to contextual biasing with TCPGen, BLMD can be applied as shown in Eqn. (5).

\[P^{\text{sf}}(y_{i})=(1-P^{\text{gen}})P^{\text{mdl}}(y_{i})\frac{P^{\text{tgt}}(y_{i})^{\alpha_{1}}}{P^{\text{src}}(y_{i})^{\beta_{1}}}+P^{\text{gen}}P^{\text{ptr}}(y_{i})\frac{P^{\text{tgt}}(y_{i})^{\alpha_{2}}}{P^{\text{src}}(y_{i})^{\beta_{2}}}, \tag{5}\]

where \(P^{\text{ptr}}\) is the TCPGen distribution and \(P^{\text{tgt}}\) is the target domain LM; the same source and target LMs are used for both terms, but with different sets of hyper-parameters \(\alpha_{1},\beta_{1}\) and \(\alpha_{2},\beta_{2}\) tuned on the validation set.
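A minimal sketch of Eqn. (5) is shown below; the hyper-parameter values are placeholders (in practice tuned on the dev set), and treating the result as an unnormalised beam-search score is an assumption of the sketch.

```python
# Biasing-driven LM discounting (Eqn. (5)), sketched with NumPy. All inputs
# except the scalar p_gen are vocabulary-sized probability vectors.
import numpy as np

def blmd_score(p_mdl, p_ptr, p_tgt, p_src, p_gen,
               alpha1=0.3, beta1=0.3, alpha2=0.3, beta2=0.3):
    model_term = (1.0 - p_gen) * p_mdl * p_tgt**alpha1 / p_src**beta1
    tcpgen_term = p_gen * p_ptr * p_tgt**alpha2 / p_src**beta2
    return model_term + tcpgen_term   # used directly as a beam-search score
```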
## IV GNN Tree Encodings for TCPGen
While looking ahead into future branches of the paths being searched on the prefix tree is greatly beneficial to the correct prediction of the generation probability, node representations in standard TCPGen only contain information about the word piece on that node. To achieve lookahead functionality, GNNs are used to encapsulate future branch information into each node representation. The pipeline of applying GNN encodings in TCPGen is shown in Fig. 3.
The word piece prefix-tree is first encoded with a GNN to obtain encodings associated with each node. Then, the tree with GNN encodings is used by TCPGen, where the key and value for the TCPGen distribution are computed using the encoding of nodes in the set of valid word pieces, in place of word piece embeddings as shown in Eqn. (6).
\[\mathbf{k}_{j}=W^{\text{K}}\mathbf{h}_{n_{j}}^{\text{gnn}}\qquad\mathbf{v}_{j}=W^{\text{V}}\mathbf{h}_{n_{j}}^{\text{gnn}}, \tag{6}\]

where \(W^{\text{V}}\) and \(W^{\text{K}}\) are parameter matrices, and \(\mathbf{h}^{\text{gnn}}\) is the GNN node encoding obtained using different types of GNN. This paper explores three different types of GNNs, namely the tree-RNN, GCN (including its variant, GCNII) and GraphSAGE with max pooling, together with combinations of GCN and GraphSAGE as two complementary types of GNN. Details of GNN structures applied in TCPGen together with modifications are described in the following sections.
### _Tree-RNN_
Tree-RNN recursively encodes the tree from leaf nodes to the root using an RNN structure. Specifically, at node \(n_{j}\) which contains child nodes \(n_{1},...,n_{k},...,n_{K}\), the vector representation of \(n_{j}\) can be written as Eqn. (7).
\[\mathbf{h}_{n_{j}}^{\text{trnn}}=\text{ReLU}(W_{1}\mathbf{y}_{j}+\sum_{k=1}^{K}W_{2}\mathbf{h}_{n_{k}}^{\text{trnn}}), \tag{7}\]

where \(\mathbf{h}_{n_{k}}^{\text{trnn}}\) is the vector representation of node \(n_{k}\), and \(\mathbf{y}_{j}\) is the embedding vector of the word piece of node \(n_{j}\). \(W_{1}\) and \(W_{2}\) are parameter matrices jointly optimised with the ASR
Fig. 2: An example of prefix tree search and attention in TCPGen. With previous output Tur, n_ and n are two valid word pieces on which attention will be performed. A word end unit is denoted by _.
system by allowing gradient back-propagation through \(\mathbf{h}_{n_{k}}^{\text{trnn}}\). In this way, each node recursively encodes information from its child nodes, such that the information of the entire branch rooted from it can be incorporated in the node encoding \(\mathbf{h}_{n_{j}}^{\text{trnn}}\).
Before the forward pass of the main ASR model, the encoding of each node is obtained by applying Eqn. (7) recursively from leaf to root. Then, for the same example shown in Fig. 2, at the node of Tur, node encodings \(\mathbf{h}_{\text{n\_}}^{\text{trnn}}\) and \(\mathbf{h}_{\text{n}}^{\text{trnn}}\) are used to calculate the TCPGen distribution and \(\mathbf{h}_{i}^{\text{ptr}}\). Therefore, if Turner appears in the utterance, TCPGen is aware of this entire word as early as in the encoding of Tur. Such lookahead functionality achieves a more accurate prediction of the generation probability to determine when contextual biasing is needed.
Although Tree-RNN achieves the lookahead functionality, it uses a rather simple RNN structure to encode the information of all succeeding nodes on that branch into a single vector representation. Therefore, more powerful and flexible GNN encodings are explored in order to improve performance.
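A minimal sketch of the recursion in Eqn. (7) is given below; the node structure, dimensions and random initialisation are illustrative assumptions.

```python
# Bottom-up tree-RNN encoding (Eqn. (7)), sketched with NumPy.
import numpy as np

class Node:
    def __init__(self, piece_id, children=()):
        self.piece_id = piece_id        # integer word-piece id (assumed)
        self.children = list(children)

def tree_rnn_encode(node, emb, W1, W2):
    """Return the Eqn. (7) encoding of `node`, recursing over its children."""
    child_sum = np.zeros(W2.shape[0])
    for child in node.children:
        child_sum += W2 @ tree_rnn_encode(child, emb, W1, W2)
    return np.maximum(0.0, W1 @ emb[node.piece_id] + child_sum)   # ReLU

# Tiny example: a root with two leaf children and 8-dim encodings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))            # word-piece embedding table
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
root = Node(0, [Node(1), Node(2)])
print(tree_rnn_encode(root, emb, W1, W2).shape)   # (8,)
```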
### _Graph Convolutional Network (GCN)_
As an alternative method to Tree-RNN, GCN is applied for tree encodings to achieve better node representations with a controllable lookahead distance. GCN is a multi-layer network where each layer computes the encoding of a node as a function of its neighbours based on the graph Laplacian matrix. Each layer of GCN conducts one message passing from immediate neighbouring nodes. For a GCN with \(L\) layers, the encoding of a node covers information from nodes up to \(L\) hops ahead on branches rooted from it.
Specifically, define \(H^{\text{gcn}}(l)=[\mathbf{h}_{n_{1}}^{\text{gcn}}(l),...,\mathbf{h}_{n_{N}}^{\text{gcn}}(l)]\) as the node encoding matrix of layer \(l\), whose rows \(\mathbf{h}_{n_{j}}^{\text{gcn}}(l)\) are the encodings of nodes \(n_{j}\); the GCN layer computation is
\[H^{\text{gcn}}(l+1)=f(\hat{P}H^{\text{gcn}}(l)W(l)), \tag{8}\]
where \(W(l)\) is the parameter matrix of layer \(l\), \(f(\cdot)\) is the activation function, \(\hat{A}=A+I_{N}\) is the adjacency matrix with self-loops to enable the information of the current node to be included in the node representation, and \(\hat{D}\) is the degree matrix of \(\hat{A}\). A specific form of normalised graph Laplacian [43],
\[\hat{P}=\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}, \tag{9}\]
is used to address the vanishing/exploding gradient problem. Note that as only future branch information is needed in TCPGen, \(\hat{A}\) only contains edges that lead to child nodes, and hence \(\hat{D}\) is computed based on this modified \(\hat{A}\). TCPGen then takes the node encodings of the final layer, \(H^{\text{gcn}}(L)\), to compute key and value vectors in the same way as tree-RNN. As a practical consideration for deep networks in general, residual connections and layer normalisation are added for any GNNs with multiple layers.
Although lookahead with a configurable scope can be achieved by varying the number of layers \(L\), recent research found that the performance of GCN starts to degrade with more than three layers. Apart from the fact that deeper networks are more difficult to train, it was pointed out that the representations of the nodes in GCN are inclined to converge to a certain value and hence become indistinguishable, which is referred to as the _over-smoothing_ problem. One promising method to address this problem is to build a shortcut directly linking to the first GCN layer to ensure a certain fraction of the final representation comes from the current node itself. Thus, GCNII is also investigated in this paper with each layer computed in Eqn (10):
\[H^{\text{gcnii}}(l+1)=f\Big([(1-\alpha)\hat{P}H^{\text{gcnii}}(l)+\alpha H^{\text{gcnii}}(0)]W_{\beta}(l)\Big) \tag{10}\]
where \(H^{\text{gcnii}}(l)\) is the \(l\)-th layer output of GCNII, hyper-parameter \(\alpha\) scales the shortcut to the first layer, and \(W_{\beta}(l)\) is the parameter matrix defined as:
\[W_{\beta}(l)=(1-\beta_{l})I_{N}+\beta_{l}W(l) \tag{11}\]
where \(\beta_{l}\) is a layer-dependent hyper-parameter, set to \(\ln(1/l+1)\) in this paper.
Although GCNII has shown improved performance with deeper GCN of more than 4 layers, the maximum length in our biasing list is usually less than 10. With this depth for tree structures, the over-smoothing problem is less of a concern compared to the best structure of GCNII with 64 layers, and the network complexity is more problematic. Therefore, a simple parameter-sharing scheme is also proposed in this paper for deep GNNs to reduce network complexity. Specifically, the parameter matrices in the first \(K\) layers are shared:
\[W(1)=W(2)=...=W(K) \tag{12}\]
In contrast to other complex graph structures, the message-passing operations from child nodes to the root in a tree are similar across different layers. Therefore, having the same weight matrix represent this process effectively reduces the model complexity to a degree that is adequate for tree encoding. In particular, \(K=L-1\) for GCN in this paper, so that there are effectively two sets of layer parameters to be trained while maintaining the depth of the network. The first \(K\) layers act as universal message passing and the last layer performs a final information aggregation from neighbouring nodes.

Fig. 3: Pipeline of encoding the prefix-tree with a GNN for TCPGen. The prefix-tree is first encoded by a GNN, and the GNN-encoded tree is used by TCPGen to generate the TCPGen distribution, where key and value vectors are GNN-based node encodings. The braces {} denote the lookahead content of each node encoding for a 2-layer GCN.
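The following sketch puts Eqns. (8), (9) and (12) together for the child-direction adjacency described above; the shapes, the ReLU choice and the omission of layer normalisation are simplifying assumptions.

```python
# GCN tree encoder with tied parameters (Eqns. (8), (9) and (12)).
import numpy as np

def gcn_encode(X, A, W_tied, W_last, num_layers):
    """X: (N, d) word-piece embeddings; A[i, j] = 1 iff n_j is a child of n_i."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    P_hat = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]   # Eqn. (9)
    H = X
    for l in range(num_layers):
        W = W_tied if l < num_layers - 1 else W_last            # Eqn. (12)
        H = np.maximum(0.0, P_hat @ H @ W) + H     # Eqn. (8) plus residual
    return H
```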
### _GraphSAGE with Max Pooling_
Node encodings for Tree-RNN and GCN are based on summation, whereas previous research [56, 57] has found that using max pooling also achieves competitive performance for word representation based on subword units. Hence, as an alternative GNN structure, GraphSAGE with a max pooling aggregator function is studied in this paper. GraphSAGE is a multi-layer GNN with each layer performing an information aggregation over a sampled set of child nodes followed by an update to the representation of the current node. Although one of the innovations in GraphSAGE is fixed-size sampling, as the training time biasing list is already a sampled subset from the full biasing list, the sampling of GraphSAGE is omitted in this paper. The computation of each layer is
\[\mathbf{h}_{\mathcal{N}_{j}}(l+1)=\max(\{\sigma(W_{1}(l)\mathbf{h}_{n_{k}}^{\text{sage}}(l)+\mathbf{b}(l)),\forall n_{k}\in\mathcal{N}_{j}\})\] \[\mathbf{h}_{n_{j}}^{\text{sage}}(l+1)=\sigma(W_{2}(l)\text{Concat}(\mathbf{h}_{\mathcal{N}_{j}}(l+1);\mathbf{h}_{n_{j}}^{\text{sage}}(l))) \tag{13}\]
where \(\max(\cdot)\) and \(\text{Concat}(\cdot)\) denote the element-wise max pooling and concatenation operators, \(\mathcal{N}_{j}\) is the set of child nodes of node \(n_{j}\). Although slightly better than GCN, GraphSAGE also degrades when adding more layers. Therefore, it is proposed in this paper to apply parameter-sharing for GraphSAGE, where both \(W_{1}=W_{1}(1)=W_{1}(2)=...=W_{1}(L)\) and \(W_{2}=W_{2}(1)=W_{2}(2)=...=W_{2}(L)\) are separately shared across all layers respectively.
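A sketch of one such layer is given below; the choice of ReLU for \(\sigma\), the zero vector at leaf nodes, and all shapes are illustrative assumptions.

```python
# One GraphSAGE layer with the max-pooling aggregator (Eqn. (13)).
import numpy as np

def sage_layer(H, children, W1, W2, b):
    """H: (N, d) node encodings; children[j]: list of child indices of n_j."""
    relu = lambda x: np.maximum(0.0, x)
    H_new = np.empty_like(H)
    for j in range(H.shape[0]):
        if children[j]:
            pooled = np.max(relu(H[children[j]] @ W1.T + b), axis=0)
        else:
            pooled = np.zeros(W1.shape[0])   # leaf node: nothing to aggregate
        H_new[j] = relu(W2 @ np.concatenate([pooled, H[j]]))   # update step
    return H_new
```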
TCPGen with GNN encodings still achieves high efficiency in handling large biasing lists. In training, with a large biasing list of 1000 words, TCPGen with a tree-RNN was 3.5 times slower than the standard AED or N-T model, with a negligible increase in space complexity. Among the three GNNs, GCN achieved the highest efficiency for training as its computation can be parallelised most, whereas the recursive computation in tree-RNN and the max-pooling in GraphSAGE hinder their training speed respectively. As a result, GCN in training is 2.5 times slower than the standard AED model, while GraphSAGE is 3 times slower. Moreover, by generating GNN encodings offline before decoding once the biasing list is available, the time and space complexity during inference is close to the standard AED or N-T for biasing lists of thousands of words.
### _Combination of GNN Encodings_
Combinations [41, 29] of GCN and GraphSAGE are explored in this paper for tree encodings, as they are conceptually complementary. GCN adopts a spectral approach where the graph Laplacian is used to aggregate information, while GraphSAGE directly exploits the graph structure and performs a max-pooling aggregation. To exploit the complementarity between the two GNNs, both additive and multiplicative combination methods are investigated here.
Additive combination performs a weighted sum of node encodings from GCN and GraphSAGE as the final node encodings before being processed by TCPGen (see Eqn. (14)).
\[\mathbf{h}_{n_{j}}^{\text{comb}}=\alpha^{\text{gcn}}U_{1}\mathbf{h}_{n_{j}}^{\text{gcn}}+\alpha^{\text{sage}}U_{2}\mathbf{h}_{n_{j}}^{\text{sage}}\,. \tag{14}\]
\(U_{1}\) and \(U_{2}\) are two parameter matrices to rearrange the orders of the elements in each GNN encoding, as they may not encode information in the same order [41]. Note that \(\alpha^{\text{gcn}}+\alpha^{\text{sage}}=1\) are weights that are either fixed or predicted via attention. The attention calculation is performed on each node \(n\) separately, as shown in Eqn. (15).
\[[\alpha_{i,n}^{\text{gcn}},\alpha_{i,n}^{\text{sage}}]=\text{Softmax}(\mathbf{q}_{i}^{T}[\mathbf{h}_{n_{j}}^{\text{gcn}},\mathbf{h}_{n_{j}}^{\text{sage}}]) \tag{15}\]
where \(\mathbf{q}_{i}\) is the same query vector used to calculate the TCPGen distribution. In this way, different sets of weights are assigned to different nodes at different decoder steps.
The multiplicative combination is performed via a low-rank approximation of the bilinear pooling method. The combination is shown in Eqn. (16)
\[\hat{\mathbf{h}}_{n_{j}}^{\text{comb}}=U_{3}(\tanh{(U_{1}\mathbf{h}_{n_{j}}^{\text{gcn}})}\odot\tanh{(U_{2}\mathbf{h}_{n_{j}}^{\text{sage}})}) \tag{16}\]
where \(U_{1}\), \(U_{2}\) and \(U_{3}\) are parameter matrices, and \(\odot\) is the element-wise product between two vectors. Following [41], a shortcut connection from each individual GNN encoding was provided to form the final combined encoding for TCPGen, as shown in Eqn. (17).
\[\mathbf{h}_{n_{j}}^{\text{comb}}=\hat{\mathbf{h}}_{n_{j}}^{\text{comb}}+U_{4}\mathbf{h}_{n_{j}}^{\text{gcn}}+U_{5}\mathbf{h}_{n_{j}}^{\text{sage}} \tag{17}\]
where \(U_{4}\) and \(U_{5}\) are another two parameter matrices.
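A sketch of the attentive additive combination (Eqns. (14)-(15)) is shown below; the bilinear variant of Eqns. (16)-(17) follows the same pattern, and all shapes are illustrative.

```python
# Attentive additive combination of GCN and GraphSAGE encodings at one node.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_combine(h_gcn, h_sage, q, U1, U2):
    a_gcn, a_sage = softmax(np.array([q @ h_gcn, q @ h_sage]))   # Eqn. (15)
    return a_gcn * (U1 @ h_gcn) + a_sage * (U2 @ h_sage)         # Eqn. (14)
```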
## V Experimental setup
### _Data_
Experiments were conducted on two distinct datasets, namely the LibriSpeech audiobook corpus and the AMI meeting data, where the latter followed the audio-visual speech recognition pipeline. The LibriSpeech corpus [32], consisting of 960 hours of read English from audiobooks, was used for evaluation, with the dev-clean and dev-other sets used for validation, while test-clean and test-other were employed for evaluation. To investigate the impact of critical hyper-parameters, small-scale experiments were carried out using the train-clean-100 subset as the training set. Moreover, models trained on the LibriSpeech dataset were fine-tuned and evaluated on the AMI dataset in accordance with the approach proposed in [35].
The AMI meeting corpus [25] consists of 100 hours of meeting recordings involving 4-5 individuals, which were divided into the train, dev, and eval sets. To demonstrate the effectiveness of contextual biasing on data from another domain with limited training resources, a subset comprising 10% of the utterances from the AMI training set, corresponding to 8 hours of audio, was used to fine-tune the models previously trained on the LibriSpeech 960-hour data. The 14 meetings from the dev set and 8 meetings from the eval set that were accompanied by slides were selected to form the new test set for the audio-visual contextual ASR pipeline.
The 80-dim FBANK features at a 10 ms frame rate concatenated with 3-dim pitch features were used as the model input. SpecAugment [30] with the setting \((W,F,m_{F},T,p,m_{T})=(40,27,2,40,1.0,2)\) was used without any other data augmentation or speaker adaptation.
### _Biasing list selection_
To simulate real-world scenarios in LibriSpeech, the complete list of rare words comprising 200,000 distinct words as suggested in [10] was used. The full rare word list consisted of over 60% out-of-vocabulary (OOV) words that were absent from the LibriSpeech speech training set. Consistent with the method proposed in [10], the biasing lists were extracted by identifying words from the full rare word list that appeared in the reference transcription of each utterance, followed by the addition of a specific number of distractors. During inference, 10.3% of the word tokens in the test sets belonged to the full rare word list.
Fig. 4 shows the visual-grounded contextual ASR pipeline for AMI that utilises optical character recognition (OCR) output for slides. The Tesseract 4 OCR engine, equipped with LSTM models2, was first applied to the slides of each meeting series (e.g., ES2011[a-d]). Subsequently, distinct word tokens were extracted from the OCR output text files, and words in the full rare word list, which also occurred fewer than 100 times in the AMI training set, were selected to form the biasing list for that particular meeting series. These meeting-specific biasing lists were then used for the recognition of all utterances in that meeting series. The sizes of the biasing lists vary between 175 to 576, and the total number of word tokens covered by these lists was 1,751 out of 112,110 word tokens (1.5%). As shown in Fig. 4, these words mainly consisted of highly valuable content words whose accurate recognition was crucial for comprehending the utterance. Therefore, although the biasing lists had a minor impact on the overall word error rate (WER), they were essential for improving the recognition performance of critical words. Details of the meetings with slides and the extraction pipeline are available3.
Footnote 2: OCR implementation at [https://github.com/tesseract-ocr/tesseract](https://github.com/tesseract-ocr/tesseract)
Footnote 3: [https://github.com/the-anonymous-bs/AMfslides_biasing](https://github.com/the-anonymous-bs/AMfslides_biasing)
### _Model specification_
The ESPnet toolkit [31] was used for developing the systems. A unigram word piece model comprising 600 unique word pieces was created on the LibriSpeech data and was applied directly to the AMI data. Both the AED and N-T models employed a Conformer [27] encoder, which comprised 16 Conformer blocks, each with 4 attention heads of size 512. The AED used a single-layer LSTM decoder of size 1024 and a location-sensitive attention mechanism featuring 4 heads of size 1024. The N-T, on the other hand, employed a 1024-dimensional predictor and a joint network consisting of a single fully-connected layer of size 1024. The LibriSpeech train-clean-100 experiments used 256-dim GNN encodings, whereas the full-scale LibriSpeech experiments used 1024-dim GNN encoders.
LM shallow fusion and BLMD were implemented using a two-layer LSTM-LM with 2048 hidden units trained on the 800 million-word text training corpus of LibriSpeech as the target domain LM for the LibriSpeech experiments. Each source domain LM, trained on the text of the audio training data, used a single-layer LSTM with 1024 hidden units. It is worth noting that each LM had the same word pieces as the corresponding ASR system.
### _Training specifications_
During training, biasing lists with 1000 distractors were used for the experiments conducted on the LibriSpeech dataset, while 100 distractors were used for the AMI data. To create these lists, biasing words were selected from the reference transcription, and additional distractors were added. To prevent the AED model from becoming overly confident about TCPGen outputs, a dropout-inspired technique was employed during training, as described in [10]. Specifically, biasing words that were presented in the reference transcription had a 30% probability of being removed from the biasing list. The Conformer was optimised using the Noam optimiser [33]. Additionally, the hyper-parameters for the BLMD model were determined based on the respective dev sets for each dataset.
### _Evaluation metrics_
In addition to WER, the rare word error rate (R-WER) was used to evaluate the system performance on biasing words that were "rare" in the training data for that system. R-WER is the total number of _error_ word tokens that belong to the biasing list divided by the total number of word tokens in the test set that belong to the biasing list. Insertion errors were counted in R-WER if the inserted word belonged to the biasing list, in contrast to [38]. In addition, OOV WER was also computed in the same way as R-WER but for OOV words in the biasing list. There are altogether 443 such words in LibriSpeech test-clean and test-other sets. Moreover, the slides' rare word error rate (R\({}_{\text{s}}\)-WER) is reported for the AMI experiments calculated in the same way as R-WER, but for the rare words in slides. Insertions of slides biasing words were included in R\({}_{\text{s}}\)-WER.
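For concreteness, the R-WER bookkeeping described above can be sketched as follows; the alignment format (pairs of reference and hypothesis words, with None marking insertions and deletions) is an assumption of the sketch.

```python
# Rare word error rate over an edit-distance alignment, as defined above.
def rare_wer(alignment, biasing_list):
    """alignment: list of (ref_word_or_None, hyp_word_or_None) pairs."""
    errors, total = 0, 0
    for ref, hyp in alignment:
        if ref is not None and ref in biasing_list:
            total += 1
            if hyp != ref:          # substitution or deletion of a rare word
                errors += 1
        elif ref is None and hyp in biasing_list:
            errors += 1             # insertion of a biasing-list word
    return errors / max(total, 1)
```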
As rare words are scarce in the dataset, significance tests were performed to ensure that the improvements found by using GNN encodings were statistically significant. Specifically,
Fig. 4: Illustration of the visual-grounded contextual ASR pipeline for the meeting series ES2011 containing meetings ES2011a to ES2011d.
independence was assumed at the book level for LibriSpeech and at the speaker level for AMI. The alternative hypothesis was defined as the GNN system performing better than the standard TCPGen (i.e. one-tailed sign test).
## VI Results
### _LibriSpeech train-clean-100 Results_
Experiments were first performed on the train-clean-100 set to find the best-performing GNN setups. The first investigation was on the number of GNN layers which determines how much lookahead is needed. The plots of WER and R-WER against the number of layers for N-T are shown in Fig. 5, and those for AED are shown in Fig. 6. Note that parameter tying only had effects when the layer number exceeded two.
For N-T, both GCN and GraphSAGE had a clear trend that the performance tended to degrade when the number of layers was increased beyond three and degraded significantly when it reached twelve layers. GraphSAGE inherently suffered less from this problem than GCN as the max pooling operator enabled the gradient for salient nodes to remain at its original value instead of being over-smoothed [42]. This degradation was mitigated by either using the GCNII structure or the parameter-tying scheme proposed in this paper. The best performance was achieved by 6-layer systems using WER as the selection criterion, where GCN and GraphSAGE with tied parameters achieved better performance than GCNII.
Similar observations were found with AED, where GCN and GraphSAGE with tied parameters achieved slightly better performance than GCNII. However, the best performance was achieved using 2-layer GCN and 3-layer GraphSAGE models. This difference is mainly caused by the label-synchronous nature of AED, where the model required knowledge for predicting only the next token, and information further into the future is less useful for this prediction in contrast to N-T.
The best-performing GCN and GraphSAGE with tied parameters were used for model combination. For additive
Fig. 5: Plot of WER (%) and R-WER (%) against the number of GNN layers for N-T on LibriSpeech test-clean data. Systems were trained on train-clean-100 for 120 epochs. Biasing lists with 1000 distractors were used. “Tied” refers to the parameter-tying scheme.

Fig. 6: Plot of WER (%) and R-WER (%) against the number of GNN layers for AED on LibriSpeech test-clean data. Systems were trained on train-clean-100 for 120 epochs. Biasing lists with 1000 distractors were used. “Tied” refers to the parameter-tying scheme.
Fig. 7: Variation of WER and R-WER against the model combination weight \(\alpha^{\text{sage}}\); “dynamic” refers to using the attention mechanism for weight calculation.
combination, different fixed combination weights gave the results shown in Fig. 7, together with dynamic combination weights. As a result, dynamic combination performed better than most fixed-weight combinations, whereas the best performance was still obtained by the fixed-weight combination for both AED and N-T, with \(\alpha^{\text{sage}}=0.2\). By examining the predicted dynamic weights, it was found that, unless at the root node of the tree, the dynamic weight almost completely ignored the GraphSAGE encodings and suffered from mode collapse, so the GraphSAGE encodings were not properly trained. In fact, since GCN is usually better at handling near-future information, the model learnt to rely only on GCN.
### _LibriSpeech 960-hour Results_
The main results for the LibriSpeech full-set experiments are summarised in Table I for AED and in Table II for N-T respectively. Compared to the standard TCPGen, all three types of GNNs achieved significantly better WER and R-WER (at p-values smaller than 0.01) on both test sets for both AED and N-T. In particular, using multi-layer GNNs, such as GCN and GraphSAGE, achieved clearly better performance than tree-RNN on AED, whereas the performance difference among those three GNN types was less obvious on N-T. The best-performing GNN structure on both AED and N-T was GCN with tied parameters. For AED, GCN achieved 16% relative WER reduction with a 20% R-WER reduction on the test-clean set and 13% relative WER reduction with 19% relative R-WER reduction on the test-other set (comparing row 4 to row 2 in Table I). For N-T, GCN achieved a 9% relative WER reduction with 21% relative R-WER reduction on the test-clean set, and a 7% WER reduction with a 17% relative R-WER reduction on the test-other set (comparing row 4 to row 2 in Table II). With similar levels of R-WER reduction, AED achieved a higher reduction in WER. As analysed in [36], TCPGen produced a much more confident prediction of \(P^{\text{gen}}\) with AED than N-T, where the main reductions in overall WER were attributed to the reduction in R-WER. The improvements using GNN indicated that the GNN encoding improved the prediction of \(P^{\text{gen}}\), which was more beneficial for the overall WER in AED.
Both additive and bilinear combinations of GNN encodings achieved superior performance to individual GNN encodings
with both AED and N-T models. For the AED model, the best performance was achieved by the additive combination with the best set of fixed weights previously found on train-clean-100 experiments, while the bilinear combination achieved very similar performance to the additive one. This led to a total of 31% relative R-WER reduction compared to the standard TCPGen (comparing row 7 to row 2 in Table I), and a total of 56% relative R-WER reduction compared to the baseline Conformer AED model. The performance improvement with the GNN combination was smaller on N-T, with the best results achieved by the bilinear pooling combination. This resulted in a 30% relative R-WER reduction compared to the standard TCPGen (comparing row 9 to row 2 in Table II), and an overall 56% relative R-WER reduction compared to the baseline Conformer N-T model.
Finally, selected systems were evaluated with BLMD, where TCPGen with GNN encodings achieved further performance improvements. The best-performing system for AED was TCPGen with the additive combination of GNN encodings, which achieved an overall 66% relative R-WER reduction on the test-clean set and 58% reduction on the test-other compared to the baseline. For N-T, an overall 57% relative R-WER reduction was achieved on test-clean, and 54% was achieved on test-other. Moreover, the OOV-WER had the same reduction pattern as R-WER for both AED and N-T. The best AED and N-T systems with combined GNN encodings for TCPGen reduced the OOV-WER by over 60%. Notably, BLMD was particularly beneficial for GNN encodings in AED systems, reducing the OOV-WER to 1/3 of the baseline value. This confirmed that even though GNN required more parameters to encode the prefix-tree, TCPGen still generalises well to unseen branches (i.e. OOV words) on the tree.
### _AMI Audio-visual Contextual ASR experiments_
The performance of various GNN encodings for TCPGen was further evaluated in the audio-visual contextual ASR pipeline, and the results are shown in Table III. In general, reductions in R-WER had a much smaller influence on the overall WER than with LibriSpeech, as rare words occupied a much smaller portion of the word tokens. The findings were consistent with LibriSpeech, where the best-performing combination methods for AED and N-T were additive and bilinear respectively. However, the relative R-WER improvement was smaller compared to that in the LibriSpeech experiments, as the best-performing system here already had an R-WER that was very close to the overall WER, whereas the R-WER in LibriSpeech was still twice as high as the overall WER. Compared to the baseline standard systems, TCPGen in AED achieved over 35% relative R-WER reduction using the best combined GNN encodings, while TCPGen with the best combined GNN encodings in N-T achieved over 30% relative R-WER reduction both with and without BLMD.
## VII Conclusion
This paper proposes three different types of GNN encodings in TCPGen for end-to-end contextual ASR, including tree-RNN, GCN and GraphSAGE. GNN encodings gave a lookahead functionality by incorporating future information on branches starting from the current node. Combination methods that take advantage of the complementarity between GCN and GraphSAGE were also explored. Experiments on LibriSpeech and AMI following an audio-visual contextual ASR pipeline showed consistent and significant WER and R-WER improvement for both AED and N-T systems using GNN encodings. The best combined GNN encodings achieved over 60% R-WER and OOV-WER reductions compared to the baseline standard systems.
|
2306.03954 | Recognition of Handwritten Japanese Characters Using Ensemble of
Convolutional Neural Networks | The Japanese writing system is complex, with three character types of
Hiragana, Katakana, and Kanji. Kanji consists of thousands of unique
characters, further adding to the complexity of character identification and
literature understanding. Being able to translate handwritten Japanese
characters into digital text is useful for data analysis, translation, learning
and cultural preservation. In this study, a machine learning approach to
analyzing and recognizing handwritten Japanese characters (Kanji) is proposed.
The study used an ensemble of three convolutional neural networks (CNNs) for
recognizing handwritten Kanji characters and utilized four datasets of MNIST,
K-MNIST, Kuzushiji-49 (K49) and the top 150 represented classes in the
Kuzushiji-Kanji (K-Kanji) dataset for its performance evaluation. The results
indicate the feasibility of using the proposed CNN-ensemble architecture for
recognizing handwritten characters, achieving 99.4%, 96.4%, 95.0% and 96.4%
classification accuracy on MNIST, K-MNIST, K49, and K-Kanji datasets
respectively. | Angel I. Solis, Justin Zarkovacki, John Ly, Adham Atyabi | 2023-06-06T18:30:51Z | http://arxiv.org/abs/2306.03954v1 | # Recognition of Handwritten Japanese Characters Using Ensemble of Convolutional Neural Networks
###### Abstract
The Japanese writing system is complex, with three character types of Hiragana, Katakana, and Kanji. Kanji consists of thousands of unique characters, further adding to the complexity of character identification and literature understanding. Being able to translate handwritten Japanese characters into digital text is useful for data analysis, translation, learning and cultural preservation. In this study, a machine learning approach to analyzing and recognizing handwritten Japanese characters (Kanji) is proposed. The study used an ensemble of three convolutional neural networks (CNNs) for recognizing handwritten Kanji characters and utilized four datasets of MNIST, K-MNIST, Kuzushiji-49 (K49) and the top 150 represented classes in the Kuzushiji-Kanji (K-Kanji) dataset for its performance evaluation. The results indicate the feasibility of using the proposed CNN-ensemble architecture for recognizing handwritten characters, achieving 99.4%, 96.4%, 95.0% and 96.4% classification accuracy on MNIST, K-MNIST, K49, and K-Kanji datasets respectively.
Computer Vision, Convolutional Neural Networks, Ensemble Models, Transfer Learning, Handwriting Recognition, Japanese Handwriting
## I Introduction
The Japanese writing system is unique compared to the English writing system, as it has three different character types of _Hiragana_, _Katakana_, and _Kanji_, each serving a different purpose. Hiragana is primarily employed for native Japanese function words, Katakana is commonly utilized for words originating in other languages, and Kanji is used to represent characters of Chinese origin that have been adapted to the Japanese language. Hiragana and Katakana both contain 46 basic characters, or 71 characters including their diacritics. Compared to Kanji, Hiragana and Katakana are relatively simple, as each of their characters maps to a fixed set of English characters for the sake of translation and pronunciation.
Kanji consists of Chinese "Han" characters. These characters originated as pictograms of the words they represent. Over time, Kanji became more simplified until they became the characters they are today. While there are several thousand Kanji characters, roughly 2000 of them are commonly used.
_Kuzushiji_ is the cursive form of Kanji, and despite being used for over 1000 years, most people fluent in Japanese and/or native to Japan cannot read Kuzushiji documents anymore. This is partly due to the modernization efforts of the Meiji restoration in 1868, the era in which the Japanese education system was reformed and Kuzushiji was removed from the school curriculum. This has left a substantial body of Japanese literature that can only be worked on when a language expert is available. An estimated three million Japanese pre-modern books and over one billion records and documents use Kuzushiji [1], and this signifies the importance of developing modern approaches to read, analyze, and translate Kuzushiji documents. Aiming to address the need for tools to read and translate this large corpus of knowledge and information tied to the history of Japanese culture, the research community has identified Machine Learning (ML) and Artificial Intelligence (AI) as one of the potential mechanisms to develop such tools.
### _Problem statement_
As Kuzushiji continues to be phased out of the school curriculum, scientists and researchers must develop reliable systems to aid in the research and cultural preservation of this literature. Given the complex nature of reading Japanese handwriting, even for AI and ML systems, it is necessary to study mechanisms for automatically reading these handwritten Japanese characters. In the absence of strong benchmarks on Japanese handwritten character recognition (JHCR), it is important to develop such benchmarks on existing publicly available datasets, especially for Kuzushiji-Kanji.
The Kuzushiji-Kanji dataset is one of, if not the, most developed datasets aimed at assisting the development of these AI-empowered automated Japanese handwriting recognition systems. In this project, we provide an initial benchmark performance measure to be used as a stepping stone for future algorithms and methodologies aimed at this dataset, while also assessing the performance of the proposed CNN-ensemble on other publicly available handwriting datasets.
### _Contribution, Novelty and Outline_
The contribution of this work is the development of an ensemble classifier composed of three Convolutional Neural Networks (CNNs). The feasibility of the proposed CNN-Ensemble method is assessed using four publicly available datasets (see Section III for more information).
The outline of this study is as follows: a brief literature review of the most recent Japanese and Chinese character recognition methods is presented in Section II, and descriptions of the datasets used for CNN-Ensemble evaluation are detailed in Section III. Section IV introduces the research methodology, followed by results of evaluations on four publicly available
datasets in Section V. The discussion of the results and the study conclusions are presented in Sections VI and VII respectively.
## II Related Work
Japanese Handwritten character recognition (JHCR) is an active research area with challenges associated to complexity and variability of Japanese characters. Multiple mechanisms, including Template-based recognition, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are considered by the research community to address the task of accurately recognizing handwritten Japanese characters.
Template-based recognition, an approach traditionally used for the JHCR task, is mainly focused on identifying the best match to a given character from a collection of reference templates gathered in a database. The inability to adequately deal with differences in brush stroke order and shape is the main limitation of this approach, which convinced researchers to explore and consider the use of more advanced computer-vision methods based on machine learning and deep learning. Convolutional neural networks (CNNs) have become a popular method for the JHCR task owing to their ability to automatically learn and extract deep features from images, addressing limitations of template-based recognition approaches such as stroke order and shape variations. Das et al. [2] reported success in the character recognition of Japan's Hiragana character set, which is similar to Kanji. Rather than examining characters as a whole, in their approach each character is passed through a pre-processing and normalization stage and is encircled and split into quadrants. In this method, the Center of Gravity (CoG) is found for the whole character and for each quadrant. The proposed algorithm produced feature vectors by searching each quadrant for conjunction points (points where multiple strokes intersect) and endpoints, and the Euclidean distance is measured between the CoG and the significant features of each quadrant. The proposed method achieved 94.1% recognition rates. Using the Japanese cursive Kuzushiji character image dataset (Kuzushiji-MNIST or K-MNIST) [3] and a CNN, Ueki and Kojima achieved 73.1% classification accuracy on the JHCR task [4].
Yang et al. [5] used the CASIA-OLHWDB 1.0 (DB 1.0) and CASIA-OLHWDB 1.1 (DB 1.1) datasets [6], curated by the Institute of Automation, Chinese Academy of Sciences, for their Handwritten Chinese Character Recognition (HCCR) study. The DB 1.0 dataset is composed of 3740 Chinese characters; each class has 420 examples, and a train-test split of 336 to 84 is used. The DB 1.1 dataset is composed of 3755 characters; each class has 300 examples, and a train-test split of 240 to 60 was used. The authors proposed using a combination of a training sampling algorithm based on Leitner's learning box and the integration of a domain-specific knowledge layer in order to improve Chinese character recognition. Leitner's learning box principle is based on the _"principle of spaced repetition for learning"_, which simply states that things that are harder to learn should appear more frequently. The authors used a CNN for their HCCR task, where training samples are assigned a value that helps to place them in one of three bins: well-recognized (these can be sparsely sampled), confusing (these should be sampled more often), and heavily noisy or mislabeled samples (these should rarely, if ever, be sampled). The proposed approach outperformed the state-of-the-art in [7, 8], and [9] by 2.5% on the DB 1.0 dataset, achieving classification accuracy of 97.33%, and by 1.4% on the DB 1.1 dataset, achieving 97.06% classification accuracy.
Shi et al. [10] used the HITPU handwritten Chinese character database collected by the Harbin Institute of Technology and Hong Kong Polytechnic University in their HCCR study. The dataset comprises 3755 characters from 200 different writers and a total of 751,000 images. Authors used active shape modeling with landmark labeling/recognition and GA for radical recognition. The radical is a fundamental part of Chinese and Japanese character, which significantly aids in recognizing the overall character. The proposed solution achieved a correct radical matching rate of 97.4%, outperforming state-of-the-art in HCCR ( [11] and [12] ) by roughly 5.9%.
## III Datasets
This study uses the datasets curated by [3]: **Kuzushiji-MNIST**, **Kuzushiji-49** and **Kuzushiji-Kanji**, referenced as K-MNIST, K-49 and K-Kanji from this point forward, for evaluation and assessment of its proposed CNN-ensemble for the JHCR task.
The K-MNIST dataset is designed to serve as a replacement for the MNIST dataset, containing 10 classes and following the MNIST sample representation format; the K-MNIST samples are represented as \(28\times 28\) grayscale images, a total of 70,000 images with a train-test split of 6 to 1 images per class.
The K-49 dataset is much larger than K-MNIST, although it is imbalanced. It contains 48 Hiragana characters and one Hiragana iteration mark, a total of 49 classes, with samples represented as \(28\times 28\) grayscale images and a total of 270,920 sample images. The train-test split in the K-49 dataset is 86 to 14, and the distribution is set in such a way as to reflect the class imbalance of actual Japanese literature.
The K-Kanji dataset is an imbalanced dataset with a total of 140,426 samples represented as \(64\times 64\) grayscale images across 3,832 classes. The sample distribution ranges from 1,766 examples in one class to only a single example in another, with some classes having up to 3 different representations due to the original Kanji they are derived from, which further
Fig. 1: Sub-sample of K-MNIST, K-49 and K-Kanji dataset.
increases the complexity of this dataset. In this study, only the top 150 most populated classes in the K-Kanji dataset are used.
Figure 1 illustrates samples from 5 different classes of each of these three datasets, and Table I presents details about the datasets, their sample sizes, and their sample distributions in the training and testing sets.
## IV Methodology
The ensemble model developed in this study consists of three CNN's as follows:
1. CNN-1 consists of a convolutional layer, followed by an average pooling layer, a convolutional layer, an average pooling layer, a flatten layer, and a densely connected layer. This model is considered in order to attain a more general understanding of input sample shapes. The reduction in sample dimension size from the two average pooling layers makes each sample more generalizable. However, this would be ineffective without the convolutional layers, which help create enough of a distinction between samples for the model to be effective on its own.
2. CNN-2 consists of two convolutional layers, followed by an average pooling layer, a convolutional layer, a flatten layer, and a densely connected layer. The architecture is slightly modified for the Kanji dataset by adding a convolutional layer before the flatten layer. This model uses the three convolutional layers to focus its learning on the specific shape of each sample, while the pooling layer is used to reduce computational complexity.
3. CNN-3 is the transfer learning model. This model's architecture utilizes a base model that is pre-trained on another dataset. The base model is then used as the head of the transfer learning model before additional layers are appended to it. The source dataset used for transfer learning in the CNN-3 model depends on the dataset being used for evaluation (i.e., K-MNIST, K-49, or K-Kanji). That is, when the CNN-3 model is evaluated on the K-MNIST dataset, the base model is pre-trained using the MNIST dataset; when K-49 is used for evaluation, the base model is pre-trained using the K-MNIST dataset; and when the Kanji dataset is used for evaluation, the base model is pre-trained using samples of the K-49 dataset. A minimal sketch of this setup is given after this list. 1. Base Model architecture includes a convolutional layer, followed by a dropout layer, a convolutional layer, a flatten layer, and a densely connected layer. The architecture is modified for the Kanji dataset by adding an average pooling layer before the flatten layer. This modification is considered because samples of the Kanji dataset are much larger than those of the other datasets (\(64\times 64\) pixels in K-Kanji versus \(28\times 28\) pixels in K-MNIST and K-49). The extra average pooling layer accommodates the image size difference, reducing computational cost while increasing accuracy. 2. Additional layers: In CNN-3, the layers of the base model are followed by additional layers of a convolutional layer, a dropout layer, a convolutional layer, an average pooling layer, a flatten layer, and a densely connected layer. Pre-training the base model allows the model to recognize general image fundamentals. Freezing the weights in the base model helps to retain this knowledge of image fundamentals while the new layers are tuned on the new target domain (training-set samples from the evaluation dataset). Starting with basic knowledge generally allows transfer learning models to be faster and more accurate than regular models.
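A minimal Keras sketch of this transfer-learning setup is given below; the filter counts, dropout rates, optimiser and class count are illustrative assumptions, and only the layer ordering follows the description above.

```python
# CNN-3 transfer-learning sketch (illustrative sizes, Keras functional API).
from tensorflow.keras import layers, models

def build_base(input_shape, num_classes):
    # Base model: conv -> dropout -> conv -> flatten -> dense.
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Dropout(0.25)(x)
    feat = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    out = layers.Dense(num_classes, activation="softmax")(layers.Flatten()(feat))
    return models.Model(inp, out), models.Model(inp, feat)

base, base_features = build_base((28, 28, 1), num_classes=10)
# base.fit(x_source, y_source, ...)   # pre-train on the source dataset
base_features.trainable = False       # freeze the pre-trained weights

# Additional layers: conv -> dropout -> conv -> avg pool -> flatten -> dense.
inp = layers.Input(shape=(28, 28, 1))
x = base_features(inp)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.Dropout(0.25)(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.AveragePooling2D()(x)
out = layers.Dense(49, activation="softmax")(layers.Flatten()(x))
cnn3 = models.Model(inp, out)
cnn3.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```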
Each CNN model (CNN-1, CNN-2, and CNN-3) in the CNN-Ensemble architecture is connected to a final output layer to allow the accuracy of its output to be evaluated. An illustration of the CNN-ensemble architecture is presented in Figure 2.
In this study, the proposed ensemble model is a conglomeration (rather than a combination) of CNN models. As expected of ensemble members, the single CNN architectures are individually less accurate in their predictions, and conglomerating them mitigates single-model weaknesses through group voting. The proposed ensemble architecture utilizes the strengths of three different CNN models to offset incorrect individual predictions and increase overall validation accuracy on each dataset.
The performance of the proposed CNN-Ensemble architecture is assessed on all four datasets used in this study (MNIST, K-MNIST, K-49 and K-Kanji), one dataset at a time. Each CNN model in the proposed CNN-Ensemble is trained and validated individually (using the same training and validation sets for all three CNN models on each dataset), and a voting mechanism, based on aggregating the votes of all three CNN models into a single vector and taking the maximum of the summed vector, is used to generate the overall ensemble prediction on the validation set of each dataset, as sketched below.
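A minimal NumPy sketch of this max-sum vote is shown below, assuming the member models expose a Keras-style `predict` returning per-class probability vectors; the `cnn1`, `cnn2`, `cnn3` variables are the hypothetical models from the earlier sketch.

```python
import numpy as np

def ensemble_predict(models, x_val):
    """Aggregate the per-class scores of all member CNNs into a single
    vector per sample and pick the class with the maximum summed score."""
    votes = np.sum([m.predict(x_val) for m in models], axis=0)
    return np.argmax(votes, axis=1)

# y_pred = ensemble_predict([cnn1, cnn2, cnn3], x_val)
# accuracy = np.mean(y_pred == y_val)
```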
The results of the individual models and the CNN-Ensemble model are presented in Table II and further elaborated in Section V.
## V Results
### _MNIST Results_
The three CNN models were first tested on the MNIST dataset to establish a performance baseline.
* CNN-1 achieved a 99.00% accuracy on the validation data.
* CNN-2 achieved a 97.30% accuracy on the validation data.
* CNN-3 achieved a 99.03% accuracy on the validation data. Since MNIST serves as the benchmark for the other datasets, no transfer learning was used here.
* CNN-Ensemble produced a 99.35% accuracy on the validation data. The average classification accuracy across the CNN models was 98.35%, indicating that using an ensemble model improved the average model accuracy by 1.0%.
These results can also be found in Table II.
### _K-MNIST Results_
The three CNN models were then evaluated on the K-MNIST dataset to compare the performance of the ensemble model against the individual CNN models. The results are as follows:
* CNN-1 achieved a 94.44% accuracy on the validation data.
* CNN-2 achieved a 95.11% accuracy on the validation data.
* CNN-3 achieved a 95.15% accuracy on the validation data. Samples of the MNIST dataset were used to pre-train the base model in CNN-3, although only a marginal performance improvement is observed compared to CNN-1 and CNN-2.
* CNN-Ensemble achieved a classification accuracy of 96.37% on the validation data, an improvement of 1.47% over the average performance of the three CNN models (94.90%), while also outperforming each individual CNN model.
These results are also reported in Table II.
### _K-49 Results_
All three CNN models were also tested on the K-49 dataset, with CNN-3 using transfer learning from the K-MNIST dataset. The results are as follows:
* CNN-1 achieved a 91.80% accuracy on the validation data.
* CNN-2 achieved a 93.01% accuracy on the validation data.
* CNN-3 achieved a 93.09% accuracy on the validation data, only marginally outperforming CNN-1 and CNN-2 on the K-49 dataset despite the transfer learning from K-MNIST. However, the average training time of the transfer learning model was reduced from 1900 seconds to 987 seconds, i.e., by 48%.
* CNN-Ensemble achieved a classification accuracy of 95.04% on the validation data, outperforming both the average performance of the CNN models (92.63%, a 2.41% improvement) and each individual CNN model.
Fig. 2: Example CNN ensemble architecture
These results can additionally be found in Table II.
### _K-Kanji Results_
The K-Kanji dataset is considered the true target of this study since, to the best of our knowledge, its performance has never been evaluated in the literature. The dataset covers a total of 3832 Kanji characters as 64\(\times\)64 grayscale images of Kanji handwriting. It comprises a comparatively small set of 140,426 images and is highly imbalanced, with the number of examples per class ranging from 1,766 down to a single example. To evaluate the dataset and assess the feasibility of the proposed CNN-Ensemble architecture, only the 150 most populated character classes are considered in this study.
* CNN-1 achieved 92.83% classification accuracy on the validation set.
* CNN-2 achieved 93.49% classification accuracy on the validation set.
* CNN-3 achieved 95.01% classification accuracy on the validation set. The base model in CNN-3 is pre-trained using samples from the K-49 dataset. The results indicate a substantial performance improvement on the K-Kanji dataset compared to the CNN-1 and CNN-2 models. This is likely due to the larger sample size of the K-49 dataset (266,407 samples, see Table I for more details), which better supported transfer learning by allowing the base model to learn more about the images.
* CNN-Ensemble achieved 96.43% classification accuracy on the validation data, against an average classification accuracy of 93.77% across the three CNN models. This is an improvement of 2.65% over the average model, and the ensemble outperforms all three CNN models. The gain is likely related to the increased image size compared to the K-MNIST and K-49 datasets: the base model for Kanji included an additional pooling layer because of this larger input, and this extra layer is likely a contributing factor to the strong performance of the Kanji transfer learning model.
A summary of these results is presented in Table II.
## VI Discussion
In Table III, no state-of-the-art performance is provided for the K-Kanji dataset since, to the best of our knowledge, this dataset has never been considered by the research community, possibly due to its complexity, especially its imbalanced nature, which makes it difficult to effectively train models on sparsely sampled character classes.
Table III is adapted from the study in [16] to show how a variety of algorithms perform on the MNIST, K-MNIST and K-49 datasets. The table has been updated with the results of the proposed ensemble architecture on the same datasets, with the addition of the K-Kanji dataset.
To better understand the performance variations across the 150 classes of the K-Kanji dataset, the test-set F1-scores for each class are reported in Table IV, in descending order from 1 to 0.49. It is noticeable that the majority of classes lack an adequate number of samples for training and tuning the CNN models, which speaks to the difficulty of using the K-Kanji dataset.
A SHapley Additive exPlanations (SHAP) analysis was performed on each CNN model used in the CNN-Ensemble to better understand the results achieved. SHAP analysis is a methodology used to quantify the contribution of each feature of a given sample to the prediction made by a machine learning model [17]. SHAP values, ranging from -1 (commonly represented in blue in SHAP images) to 1 (commonly represented in red), indicate the impact of each feature on the prediction, with higher SHAP values indicating a greater contribution. SHAP values are calculated from the difference between the predictions obtained by the model for a given sample with and without the feature.
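The analysis can be reproduced along the lines of the following sketch, which assumes the `shap.DeepExplainer` interface for Keras models and the hypothetical `cnn1`, `cnn2`, `cnn3`, `x_train` and `x_val` variables from the earlier sketches.

```python
import numpy as np
import shap

# Background set used to estimate expected activations; 100 random
# training images is an illustrative choice.
rng = np.random.default_rng(0)
background = x_train[rng.choice(len(x_train), 100, replace=False)]
samples = x_val[:3]  # three randomly chosen samples

for model in (cnn1, cnn2, cnn3):
    explainer = shap.DeepExplainer(model, background)
    shap_values = explainer.shap_values(samples)
    shap.image_plot(shap_values, samples)  # red: positive, blue: negative
```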
Figure 3 illustrates the results of the SHAP analysis on the three CNN models used in this study, applied to three randomly chosen samples from each dataset considered (i.e., K-MNIST, K-49, and K-Kanji). Brighter red pixels indicate a positive contribution to the classification, while blue pixels indicate a negative contribution to the model prediction. The results highlight little to no discernible difference between the models, which is consistent with the observed inability of the CNN-Ensemble model to achieve a substantial gain in classification performance over the individual CNN models, i.e., CNN-1, CNN-2, and CNN-3.
## VII Conclusion
This study discussed the development of a machine learning approach to identify and translate handwritten Japanese characters (Kanji) into their corresponding English translation. It is imperative to develop reliable systems to aid the research and cultural preservation of Kuzushiji literature, given that very few individuals are able to translate these documents, with millions of them still in the process of being translated and prepared for use by researchers interested in the historical context contained within them.
This study employed an ensemble of three convolutional neural networks (CNNs) and evaluated its performance. One of the aims and main contributions of this study is to establish benchmarks for Japanese handwritten character recognition using publicly available datasets. To this end, four publicly available datasets (MNIST, Kuzushiji-MNIST, Kuzushiji-49 and Kuzushiji-Kanji, the latter three referred to as K-MNIST, K-49 and K-Kanji) are used to evaluate the performance of the proposed CNN-Ensemble architecture. The results indicate the feasibility of the proposed CNN-Ensemble architecture in correctly predicting various Japanese handwritten characters, achieving an average accuracy of 96% across the datasets used, with individual CNN models showing a range of performance from 91.8% (CNN-1 on K-49) to 95.15% (CNN-3 on K-MNIST).
Fig. 3: SHAP analysis output of the CNN-1, CNN-2 and CNN-3 models on three samples each from the K-MNIST, K-49 and K-Kanji datasets.
|
2303.16211 | Combinatorial Convolutional Neural Networks for Words | The paper discusses the limitations of deep learning models in identifying
and utilizing features that remain invariant under a bijective transformation
on the data entries, which we refer to as combinatorial patterns. We argue that
the identification of such patterns may be important for certain applications
and suggest providing neural networks with information that fully describes the
combinatorial patterns of input entries and allows the network to determine
what is relevant for prediction. To demonstrate the feasibility of this
approach, we present a combinatorial convolutional neural network for word
classification. | Karen Sargsyan | 2023-03-28T07:49:06Z | http://arxiv.org/abs/2303.16211v1 | # Combinatorial Convolutional Neural Networks for Words
###### Abstract
The paper discusses the limitations of deep learning models in identifying and utilizing features that remain invariant under a bijective transformation on the data entries, which we refer to as combinatorial patterns. We argue that the identification of such patterns may be important for certain applications and suggest providing neural networks with information that fully describes the combinatorial patterns of input entries and allows the network to determine what is relevant for prediction. To demonstrate the feasibility of this approach, we present a combinatorial convolutional neural network for word classification.
Keywords: words, combinatorics, deep learning, CNN
## 1 Introduction
The standard description of how deep learning works involves feeding raw data into the input layer of a neural network and using backpropagation to update the weights of the network such that it learns to extract meaningful features from the data. This contrasts with 'traditional' machine learning methods, which often rely on domain experts to manually design relevant features for the model to effectively learn and make predictions [1]. As an example, one can train a convolutional neural network to classify pictures of cats and dogs by feeding it raw pixel data from the images. On this task, the network achieves \(>\)90% accuracy on a validation dataset [2].
An interesting question is to what degree a deep learning model, when trained for a specific task, is able to identify and utilize features that remain invariant under a bijective transformation of the data entries. As an example, we may change the color of each pixel of a cat picture, defined by a triplet of natural numbers (R, G, B), into some other value (R1, G1, B1) using the same fixed bijection on every pixel, which defines one of the possible transformations of a cat picture. It so happens that in our cats/dogs example, the model's confidence in classifying the new image as a cat drops significantly, while a human observer has no difficulty in doing so (Fig. 1). In this case, the neural network prioritizes features that include particular combinations and arrangements of colored pixels and, to a lesser extent, combinatorial patterns, which we here understand as features that are invariant under a bijective transformation of an input. As another example, we trained a convolutional neural network to classify papers into four categories (such as 'Sport') based on their text content, which was fed to the network as raw character data [3]. Our network achieved \(>\)88% accuracy on a validation dataset. Next, we transformed the text in the validation dataset by replacing each letter with its corresponding letter from a randomly generated alphabet permutation, the same for all texts in the dataset. The accuracy dropped to 24%.
The two examples given above demonstrate that, in general, neural networks do not select features based on combinatorial patterns. One possible reason for this is that the data used to train neural networks is often asymmetric and may not include all possible transformations. For instance, a transformation where each letter in a text is encoded with a possibly different letter may result in gibberish that naturally does not occur in readable texts. In the cat/dog example, the dataset is biased towards natural colors and scenes commonly found in photographs.
Figure 1: An example of the same one-to-one pixel color transformation applied to a photo of a cat, where the human eye clearly recognizes the cat while the deep learning model is confused.
While deep learning in general does not learn from combinatorial patterns and leans towards encoding trends that are over-represented in the training data, it still enjoys enormous success in applications. Why, then, care about prediction based on combinatorial patterns? Several possible applications motivate its utility. First, some prediction tasks may involve processes that rely heavily on combinatorial patterns. For example, it has been shown that viruses and their hosts have similar nucleotide patterns in their genomes, beyond encoding similar amino acids. This similarity indicates that the genetic sequences are similarly optimized for the host ribosomes that translate them into proteins, a factor important enough to use for predicting potential new hosts for viruses or estimating their potential danger to the human population [4, 5]. As an additional example, the distribution of a word in a text may provide clues about whether that word is a keyword of the text [6]. Similarly, the distribution of nucleotide words in RNA may signal that a nucleotide word composes an important 3D motif of the RNA structure [7].
In certain cases, we may not know beforehand to what extent combinatorial patterns are relevant to our prediction task. Having two models - one that has learned to predict based on combinatorial patterns and another that has not - can help us determine the degree to which these patterns contribute to our ability to make predictions and to understand the processes we are studying. Second, unexpected changes in input type for some applications, such as object detection in photographs, may require retraining a neural network with new data. This process may be time-consuming and unacceptable for mission-critical, time-constrained applications. However, if a successful model has been trained using combinatorial patterns, it is inherently more robust and may require less retraining. Additionally, one can potentially combine the two types of learning in a single model to overcome the limitations of a skewed training dataset. For instance, the recording conditions when following a cat with a video camera may degrade, resulting in different video quality (Fig. 1). This could hinder the system's ability to react promptly to different cat situations. It is easy to imagine similar scenarios involving military drones instead.
Accepting the utility of prediction based on combinatorial patterns, one may propose extending the dataset by transforming its entries in all possible ways. However, the number of possible transformations, even for our example of a cat picture, makes the task of covering all or most variants computationally and space-prohibitive. Alternatively, one may use deep learning as a guide, feeding all the available data into the model and expecting it to select appropriate features. We suggest feeding the neural network information
that fully describes the combinatorial patterns of the input entries, stripping away all other information, and letting the network decide what is important for prediction. Successfully tackling the combinatorial aspects of the input objects is a prerequisite for this task, and in this work we concentrate on words as inputs and on combinatorial patterns for words. In the next chapter we introduce the relevant mathematical concepts, borrowed from [8, 9, 10], that let us fully describe combinatorial patterns. We then present examples of neural networks learning to classify words as palindromes or non-palindromes and to estimate the strength of passwords using combinatorial patterns as input, and compare their performance with a model trained on raw character-based data. Our main goal is to demonstrate the feasibility of this approach and highlight possible applications of mathematical results from the field of abstract algebra.
## 2 Words and Combinatorics
Let us first introduce some notation from [8]. Let \(\alpha\) be a word over some alphabet, and let \(\Omega(\alpha)\) denote the set of distinct letters appearing in \(\alpha\). We define \(S(\alpha)\) as the set of all subwords (i.e., contiguous substrings) of \(\alpha\). By adding an extra subword/letter \(\varepsilon\) to \(S(\alpha)\) we construct a new set \(W(\alpha)=\{\varepsilon\}\cup S(\alpha)\). One may consider \(\varepsilon\) to be an empty subword of \(\alpha\).
For any two subwords of \(\alpha\), \(\lambda=Y_{1}Y_{2}...Y_{s}\) and \(\mu=Z_{1}Z_{2}...Z_{t}\), we form the matrix \((m_{ij})\), where \(m_{ij}=Y_{i}\) if \(Y_{i}=Z_{j}\) and \(m_{ij}=\varepsilon\) otherwise. Using this matrix we construct the associated graph, whose vertices are \((i,j)\) for all \(1\leqslant i\leqslant s\) and \(1\leqslant j\leqslant t\), with edges \((i,j)\rightarrow(k,l)\) if \(k=i+1,l=j+1,m_{ij}\neq\varepsilon,m_{kl}\neq\varepsilon\). For every connected component of the graph, one may produce the associated subword of \(\alpha\) using the values of \((m_{ij})\) and a path on the graph starting from the vertex with the lowest index values in that component. Therefore, each subword of \(\alpha\), including the empty subword \(\varepsilon\), might potentially be produced by a connected component of the graph; different connected components may produce different or identical subwords. It is natural to ask how often a given subword \(\nu\) is produced by the connected components of the graph generated by two other subwords \(\lambda\) and \(\mu\). Following [8], where the reader may find a more detailed description, we call this count \(M(\alpha)_{\nu}(\lambda,\mu)\) the combinatorics for \(\alpha\). With an extra set of rules:
\[M(\alpha)_{\nu}(\lambda,\varepsilon)=\delta_{\nu,\varepsilon}*s,M(\alpha)_{\nu }(\varepsilon,\mu)=\delta_{\nu,\varepsilon}*t,M(\alpha)_{\nu}(\varepsilon, \varepsilon)=\delta_{\nu,\varepsilon}, \tag{1}\]
we obtain a map:
\[M(\alpha):W(\alpha)\times W(\alpha)\times W(\alpha)\rightarrow\mathrm{Z}_{ \geqslant 0},(\lambda,\mu,\nu)\longmapsto M(\alpha)_{\nu}(\lambda,\mu). \tag{2}\]
Our goal is to be able to tell whether two words \(a\) and \(b\) exhibit the same patterns, meaning that there is a bijection \(\varphi\) from \(\Omega(a)\) to \(\Omega(b)\) such that \(a=X_{1}X_{2}...X_{r}\) and \(b=\varphi(X_{1})\varphi(X_{2})...\varphi(X_{r})\). Based on theorems proven in [8], we need not search for such a bijection directly, but may instead compare the combinatorics of the words of interest, as the \(M(\alpha)_{\nu}(\lambda,\mu)\) combinatorics contain exactly the information we are after.
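As a concrete illustration, the following Python sketch computes \(M_{\nu}(\lambda,\mu)\) for a pair of subwords. It is an illustrative reconstruction of the definitions above, not code from the paper's repository; it exploits the fact that edges only join diagonally adjacent matching cells, so each connected component is either a maximal diagonal run of matches or an isolated \(\varepsilon\) cell.

```python
from collections import defaultdict

def subwords(word):
    """W(alpha): all contiguous substrings of `word`, plus the empty
    subword, represented here by the empty string ''."""
    return {""} | {word[i:j] for i in range(len(word))
                   for j in range(i + 1, len(word) + 1)}

def combinatorics(lam, mu):
    """Return {nu: M_nu(lam, mu)} by counting the connected components
    of the match graph of (lam, mu)."""
    if lam == "" or mu == "":                      # boundary rules, Eq. (1)
        return {"": max(len(lam), len(mu), 1)}
    counts, seen = defaultdict(int), set()
    for i in range(len(lam)):
        for j in range(len(mu)):
            if lam[i] != mu[j]:
                counts[""] += 1                    # isolated epsilon vertex
            elif (i, j) not in seen:
                k = 0                              # walk the diagonal run
                while (i + k < len(lam) and j + k < len(mu)
                       and lam[i + k] == mu[j + k]):
                    seen.add((i + k, j + k))
                    k += 1
                counts[lam[i:i + k]] += 1          # word spelled by the run
    return dict(counts)
```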
## 3 Combinatorial Convolutional Neural Networks for Words
Combinatorics, as defined in the previous chapter, allows us to provide comprehensive information on the patterns in words to neural networks as raw input. Training a neural network on the combinatorics of a given word may enable it to select appropriate combinatorial features for a given classification task. One popular neural network architecture that has shown great success in text and image classification is the Convolutional Neural Network (CNN). In a CNN, for the case of an image, a special convolutional layer presents the data as a three-dimensional tensor (a 3D 'matrix'), where two dimensions (the number of elements along each axis) correspond to the positions of each pixel, and the third dimension corresponds to so-called filters, representing the (R, G, B) colors of the input picture. In the case of a \(200\times 200\) pixel colored image, we get a \(200\times 200\times 3\) tensor. Convolutional layers following the input layer tend to decrease the first two dimensions and increase the number of filters. The filters of a CNN are designed to extract different features from the input data, such as edges, curves, and textures in the case of image processing. By stacking multiple convolutional layers with different filters, the network can learn hierarchical representations of the input data, where lower layers extract simple features and higher layers capture more complex and abstract features. With combinatorics, we may utilize this approach and use \(M(\alpha)_{\nu}(\lambda,\mu)\) as filters over the dimensions \(\lambda\times\mu\) in the input 3D tensor for a given word \(\alpha\), as sketched below. One particular property in this case is that the number of elements along each direction of the input tensor is the same and is determined by the number of subwords of the word.
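Building on the `subwords` and `combinatorics` helpers of the previous sketch, the tensor construction can be written as follows; the subword ordering is an arbitrary but fixed convention of this sketch.

```python
import numpy as np

def word_tensor(word):
    """3D input tensor for a combinatorial CNN: the two spatial axes
    index the subwords lambda and mu, the filter axis indexes nu, and
    entry [l, m, n] stores M(word)_nu(lambda, mu)."""
    subs = sorted(subwords(word))        # arbitrary but fixed ordering
    idx = {w: n for n, w in enumerate(subs)}
    T = np.zeros((len(subs),) * 3, dtype=np.float32)
    for l, lam in enumerate(subs):
        for m, mu in enumerate(subs):
            for nu, c in combinatorics(lam, mu).items():
                T[l, m, idx[nu]] = c
    return T

# word_tensor("abba") has shape (9, 9, 9): the 8 distinct non-empty
# subwords of "abba" plus the empty subword.
```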
## 4 Case of Word Classification into Palindromes/Non- Palindromes
To demonstrate the feasibility of such an approach, we present a toy example of a classification task where only combinatorial patterns matter. Our goal is to classify 20-letter words into palindromes and non-palindromes. We generated 1000 palindromes and 1000 non-palindromes for the training dataset, as well as 500 examples of each class for both the validation and testing sets. Non-palindromes were randomly generated from an alphabet and checked to conform to the definition. Palindromes were generated from randomly generated words by 'palindromising' them 1.
Footnote 1: full code and datasets to verify the paper’s results will be deposited here [https://github.com/karsar/wordsCCNN](https://github.com/karsar/wordsCCNN).
Our combinatorial CNN for classifying 20-letter words into two classes is presented in Fig. 2. It consists of convolutional layers interspersed with max-pooling layers, a standard first choice. The layout is common for CNNs, with one notable difference: the number of filters decreases from layer to layer, whereas it usually grows. We trained the model using batches of 32 words, with each training epoch containing 30 batches. After the first epoch, the accuracy on the training and validation sets was 92.6% and 96%, respectively. The final accuracy after 48 epochs was 100% on the training set and 99.17% on the validation set. Considering both how well this first attempt at building combinatorial convolutional neural networks went and our goal of demonstrating feasibility, we did not pursue further improvements or optimizations of the network; an illustrative layout is sketched below.
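For reference, a hypothetical Keras layout following this description is shown below. The filter counts and the final pooling choice are assumptions, not the exact architecture of Fig. 2, and a real implementation would pad the tensors of different words to a common number of subwords.

```python
from tensorflow import keras
from tensorflow.keras import layers

def combinatorial_cnn(n_subwords):
    """Convolutions interspersed with max-pooling, with the number of
    filters decreasing from layer to layer (filter counts are assumed)."""
    return keras.Sequential([
        keras.Input(shape=(n_subwords, n_subwords, n_subwords)),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(2, activation="softmax"),  # palindrome / non-palindrome
    ])

model = combinatorial_cnn(n_subwords=211)  # upper bound for 20-letter words
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=32, steps_per_epoch=30, ...)
```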
As the next step, we took the previously mentioned convolutional neural network used to classify papers [3] and applied it to the task of classifying 20-letter words as palindromes/non-palindromes. It is not surprising that it achieved similar performance on this particular task, as only combinatorial patterns were relevant: the conventional CNN was forced to learn only combinatorial patterns, which is not typically the case in other applications. Additionally, we did not modify the network, which was designed to classify longer texts; it therefore has more layers and a greater capacity to learn than the one we used for the combinatorial case.
## 5 Case of Password Classification into Strong and Weak
Another example of a classification task where only combinatorial patterns matter is password classification into weak and strong. To test whether our approach of using combinatorial CNNs works, we randomly generated 2000 passwords of length 15 for the training dataset and 1000 passwords of the same length for each of the validation and test datasets. In each dataset, half of the passwords are weak and the other half strong. We define a password as strong if its complexity, a value in [0,1] calculated by the Python package _password_strength_, is greater than 0.7.
Figure 2: The neural network layout for the 20-letter word classification task.
For this task, we use exactly the same combinatorial CNN as in the previous chapter. There is no need to adjust the layer parameters, as the words we train on are shorter than in the previous example. Again, we do not search for the most optimal solution, but stop at the first one that works well. Our training parameters are exactly the same as for the palindrome dataset (batch size 32, 30 steps per epoch). The accuracy achieved on the training dataset after 19 epochs is 100%, while the accuracy on the validation dataset is 99.75%. A sketch of the dataset labelling is given below.
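A minimal sketch of the labelling step, assuming the `PasswordStats` interface of the _password_strength_ package and an illustrative candidate alphabet (the paper's exact generation and class-balancing procedure may differ):

```python
import random
import string
from password_strength import PasswordStats

def labelled_passwords(n, length=15, threshold=0.7):
    """Random passwords labelled strong (1) when their complexity,
    as computed by password_strength, exceeds the threshold."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    data = []
    while len(data) < n:
        pw = "".join(random.choices(alphabet, k=length))
        data.append((pw, int(PasswordStats(pw).strength() > threshold)))
    return data
```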
## 6 Discussion
Having demonstrated the feasibility of using combinatorial convolutional networks for word classification, we now discuss some limitations of our work, as well as future improvements and research directions.
Our main limitation lies in our focus on one-dimensional objects, specifically short words. This simplifies our work in two ways. First, we can rely on the clear combinatorial description of words outlined in [8]. Second, we can perform computational experiments without needing to optimize the code: data generation, combinatorial calculation, and neural network training can all be completed in a single day on a machine without a GPU. There is no reason to expect the combinatorial CNN to fail upon an increase in word length, as long as its layer sizes are appropriately adjusted.
It is worth pointing out that if a classification task is based solely on combinatorial patterns, as in palindrome detection, conventional neural networks will learn to do exactly that very efficiently. However, if the best results rely on combining combinatorial patterns with other features that depend on particular choices of letters or pixels, conventional neural networks may prefer such a mix, and the value of our approach will lie in ablation studies, e.g., determining to what extent combinatorial patterns matter in a given situation. In cases where there is no clear preference between mixed combinatorial patterns/other features and purely combinatorial patterns, one may choose our approach to build a more robust system.
Last but not least, this work demonstrates how results of 'pure' mathematics find their way into applications. The almost immediate applicability of results stemming from the study of objects in abstract algebra, published in mathematical journals dedicated to the subject, underscores the importance of such research for practical applications. We expect that extending the one-dimensional results to higher dimensions and investigating the role of different symmetries in deep learning may constitute another step forward.
#### 6.0.1 Acknowledgments.
The author would like to express gratitude to Jun Morita for pointing out works on the combinatorial description of words and for providing valuable clarifications.
2301.12168 | Anticipate, Ensemble and Prune: Improving Convolutional Neural Networks via Aggregated Early Exits | Simone Sarti, Eugenio Lomurno, Matteo Matteucci | 2023-01-28T11:45:11Z | http://arxiv.org/abs/2301.12168v1

# Anticipate, Ensemble and Prune: Improving Convolutional Neural Networks via Aggregated Early Exits
###### Abstract
Today, artificial neural networks are the state of the art for solving a variety of complex tasks, especially in image classification. Such architectures consist of a sequence of stacked layers with the aim of extracting useful information and having it processed by a classifier to make accurate predictions. However, intermediate information within such models is often left unused. In other cases, such as in edge computing contexts, these architectures are divided into multiple partitions that are made functional by including early exits, i.e. intermediate classifiers, with the goal of reducing the computational and temporal load without extremely compromising the accuracy of the classifications. In this paper, we present Anticipate, Ensemble and Prune (AEP), a new training technique based on weighted ensembles of early exits, which aims at exploiting the information in the structure of networks to maximise their performance. Through a comprehensive set of experiments, we show how the use of this approach can yield average accuracy improvements of up to 15% over traditional training. In its hybrid-weighted configuration, AEP's internal pruning operation also allows reducing the number of parameters by up to 41%, lowering the number of multiplications and additions by 18% and the latency time to make inference by 16%. By using AEP, it is also possible to learn weights that allow early exits to achieve better accuracy values than those obtained from single-output reference models.
Early Exits, Ensemble, Pruning, AEP, Image Classification, Convolutional Neural Networks
## I Introduction
Over the last decade, deep learning has emerged as one of the dominant disciplines in computer science, thanks to the research conducted and the remarkable results obtained. Among the main fields of application, that of visual recognition, and in particular that of image classification, has undoubtedly been the catalyst for a veritable revolution, rooted in the development and refinement of architectures known as convolutional neural networks (ConvNets). Such artificial neural networks involve numerous convolutional layers that extract the information contained in input images and process it to obtain high-level features.
The development of the first ConvNet, called AlexNet [1], was made possible by the increasing production and storage of data and, in particular, by the public release of the ImageNet benchmark [2]. From then on, the succession of discoveries of increasingly high-performance models accelerated. Among the major milestones, architectures such as VGG [3], Inception [4], ResNet [5], DenseNet [6], MobileNet [7], EfficientNet [8], and recently ConvNeXt [9] excelled in setting new standards in terms of accuracy, scalability, efficiency and design quality.
Recently, the study of neural networks with early exits has gained importance. An early exit of a neural network is a classifier placed at an intermediate level between the input layer and the traditional single output layer. The objectives of such a design pattern are many and varied, including exploiting the information contained in the intermediate layers of the models, streamlining their overall weight by cutting them, or for purposes related to distributed systems and edge computing [10]. The optimal number of branches with early exits and their positioning represent an important choice in this context, especially for very deep ConvNets or for architectures that are not strictly sequential. Another fundamental step lies in the choice of updating the weights of such architectures with multiple outputs, their individual or joint use, and the management of outputs with degraded performance.
This work analyses the behaviour of the main ConvNets in the literature when modified with the proposed early-exit technique, named Anticipate, Ensemble and Prune (AEP). In particular, AEP is presented as a weighted ensemble of early exits. Output-aggregation strategies for both the loss function and inference are discussed, in order to understand which conditions favour an improvement, and which lead to a loss, in classification performance compared to the basic single-output version of the neural networks examined. Finally, through the adoption of a pruning step, we show how it is possible to reduce the number of parameters, operations and network latency while further increasing accuracy through the extraction of the optimal sub-ensemble network. The experiments are conducted on a large set of ConvNets and datasets, both in a traditional training context and through the tuning of pre-trained architectures. Unlike the main reference works in the literature, which aim to reduce latency as much as possible without sacrificing model accuracy or to develop techniques for edge computing [11, 12, 13], this work aims to quantify and maximise the accuracy gain that the ensemble of early exits can provide.
The document is organised into five further sections. Section II summarises related work proposed in the literature that is useful for understanding the rest of the document. Section III describes the steps that make up the AEP technique. Section IV describes the details of the experiments and the configurations with which they were performed. Section V presents and discusses the results of the experiments. Finally, Section VI concludes the work and suggests some possible research directions.
## II Related Works
Early exits is a deep learning technique that aims to improve the efficiency of neural network models by allowing them to make predictions before the input data is processed by all the available layers. This is done by training multiple sub-models within the backbone model, each with a different level of complexity. One of the first works in which the early exits technique is used was carried out by Panda _et al._ and applied to convolutional neural networks (ConvNets) [14]. In particular, the aim of this study was to create an algorithm capable of identifying the optimal depth within the classification network under examination, so that the computational expense could be dynamically adjusted without losing accuracy. During the same period, Teerapittayanon _et al._ presented their work on early exits aimed at demonstrating the predictive properties of classifiers placed in the intermediate layers of ConvNets, such that the easiest samples to predict are processed by fewer hidden layers, while the most difficult ones traverse the entire architecture [15].
A more recent approach aimed at reducing the energy cost and complexity of single-output ConvNets has been proposed by Wang _et al._ achieving an extremely beneficial trade-off between accuracy and flops [16]. The technique implemented involves the use of early exits and weighted loss functions applied to architectures containing skip connections. Pacheco _et al._ demonstrated how the use of early exit architectures is incredibly beneficial in the context of edge computing [17]. In particular, they shown how the ability to classify non-anomalous samples at the shallow levels of a ConvNet allows not losing performance compared to a single-exit classification.
Having several available classifiers, some of them not necessarily high-performing, the most intuitive step is to identify an intelligent aggregation strategy to exploit their joint potential. Ensembling, largely seen as the natural solution in many classification problems, is a machine learning technique that consists of training several different models to solve the same task and then exploiting the knowledge derived from all of them at inference time to make the best choice. The ensemble technique works because the different models have different weaknesses that are compensated by the others' strength points. Ensembling of early exits consists in classifiers sharing part of their structure and parameters, but still working on different features given the same input data. This technique has been exploited by Wolczyk _et al._ to produce an early exit-based approach in which each prediction is reused by subsequent exits, combining previous results in an ensemble-like manner [11]. The goal of this work was to minimise the prediction latency without sacrificing the accuracy performance of the proposed models.
For what concerns the training of early exit networks, joint training is the most common approach: a single optimization problem is formulated whose loss depends on all branches; in particular, the network loss is often calculated as the weighted sum of the branch losses [15, 18, 19]. Strategies have also been devised to dynamically adjust the exit loss weights during training [16] and to improve the ensemble by combining the prediction loss with a diversity loss [12, 20]. When early exits are trained jointly with the backbone network, they favor the learning of more discriminative features at each layer and lead to faster convergence, while also acting as regularization [15, 21].
The output of an early exit ensemble can also be computed in a multitude of ways, e.g., as the arithmetic mean of the predictions [19, 20], via geometric ensembling [11] or through voting strategies [12]. The effectiveness of early exit ensembles is not limited to image classification: they have also recently been used for image captioning [22], natural language processing [12], uncertainty quantification and biosignal classification [13, 20], and to improve robustness against adversarial attacks [19]. Moreover, early exit ensembles have been employed to produce a teacher-free knowledge distillation technique by treating the aggregated predictions as the teacher predictions [23].
## III Method
The ensemble technique based on early exits proposed in this work, called Anticipate, Ensemble and Prune (AEP), has been fully tested in the computer vision field on image classification tasks, but can easily be extended to other types of data and purposes. Given a ConvNet, it is practically always possible to identify repeating blocks or stages stacked within it. Regardless of the architecture in question, it is therefore possible to replicate the classifier present at the end of the ConvNet, i.e., after the last stage, immediately after each intermediate stage, as can be seen in Figure 1.
The training of an AEP model proceeds as follows: for each batch of \(B\) images, given \(C\) target classes, the images flow through the network such that every classifier outputs a tensor of \(B\) elements, each representing one image and containing \(C\) values, one for each class \(c\). In this setting, the network loss is computed as the weighted sum of the categorical cross-entropy losses \(L_{i}\) of each exit \(i\in[1,N]\), in a joint training fashion, as in Equation 1.
\[L_{i}=L_{cee,i}=-\frac{1}{B}\sum_{b=1}^{B}\sum_{c=1}^{C}(p_{b,c}\log(y_{b,c})) \tag{1}\]
Unlike other approaches, in AEP the last exit is not treated differently from the intermediate ones, since its contribution to the final loss is weighted following the same weight assignment strategy as the others. Equation 2 gives the network loss, where \(\alpha_{i}\) is the weight assigned to the loss associated with exit \(i\).
\[L=\sum_{i=1}^{N}\alpha_{i}\cdot L_{i} \tag{2}\]
The predictions obtained after the ensemble, from now on referred to as \(\hat{y}\), are calculated as a weighted sum of the outputs \(O_{i}\) of the individual exits. Each classification metric is computed after applying the argmax operator to the vector thus obtained. Equation 3 gives the prediction step, where \(\beta_{i}\) is the weight assigned to the output of exit \(i\).
\[\hat{y}=\mathrm{argmax}(\sum_{i=1}^{N}\beta_{i}\cdot O_{i}) \tag{3}\]
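A minimal PyTorch sketch of Equations 2 and 3, assuming the network returns one logit tensor per exit; this is an illustration of the objective, not the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def aep_step(exit_logits, targets, alpha, beta):
    """`exit_logits` is a list of N tensors of shape (B, C), one per
    exit; `alpha` and `beta` are the loss and output weight vectors."""
    loss = sum(a * F.cross_entropy(o, targets)        # Eq. (2)
               for a, o in zip(alpha, exit_logits))
    ensemble = sum(b * o for b, o in zip(beta, exit_logits))
    preds = ensemble.argmax(dim=1)                    # Eq. (3)
    return loss, preds
```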
The strategy for selecting the weights \([\alpha_{1},...,\alpha_{N}]\) and \([\beta_{1},...,\beta_{N}]\) is a crucial step in AEP. To make the selection independent of the choice of backbone neural network and dataset, the weights of both the loss function and the prediction ensemble were tested in linearly increasing or decreasing form. Furthermore, the weights were chosen to be strictly positive and to sum to one. In this way, the desired properties are maintained through adaptive weight scales, regardless of the number of early exits within the neural network.
In the initial set of experiments, the same weights were used to compute both the network loss and the output of the ensemble. Specifically, the weights were either always increasing or always decreasing. To differentiate between these two cases, the experiments in which the weights were assigned in a descending order for both the losses and the outputs were referred to as "EEdesc", while those in which the weights were assigned in an ascending order for both the losses and the outputs were referred to as "EEasc". Ablation studies demonstrated that, while the use of decreasing weights generally led to better exits when considered individually, particularly for earlier exits, networks with ascending weights performed better in terms of ensemble prediction. To address this, the two sets of weights were decoupled, and "EEmix" networks were tested, in which the exits losses weights were assigned in descending order and the outputs weights were assigned in ascending order. Additionally, a uniform weight mode was tested, in which all exits were given the same weight. These experiments were referred to as "EEunif". A summary of the different weight modes can be found in Table I.
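The linear weight schemes of Table I can be generated as in the following sketch; the exact slope of the ramp is an assumption consistent with the constraints above (strictly positive weights summing to one):

```python
def exit_weights(n_exits, mode):
    """Strictly positive weights summing to one, per Table I."""
    if mode == "unif":
        raw = [1.0] * n_exits
    elif mode == "asc":
        raw = [float(i) for i in range(1, n_exits + 1)]
    elif mode == "desc":
        raw = [float(i) for i in range(n_exits, 0, -1)]
    else:
        raise ValueError(mode)
    total = sum(raw)
    return [w / total for w in raw]

# EEmix: descending weights for the losses, ascending for the outputs.
alpha = exit_weights(4, "desc")  # [0.4, 0.3, 0.2, 0.1]
beta = exit_weights(4, "asc")    # [0.1, 0.2, 0.3, 0.4]
```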
Upon completion of training the multi-output network, a pruning step is applied as a post-processing technique to optimize its performance. The resulting ConvNet is loaded and validated in all possible combinations of exit activation states, using the same exit weights used in training. The best sub-network is then extracted and evaluated on the test set. This selection process can aid in further improving accuracy in comparison to both the full ensemble and the single-exit network, while also reducing the number of parameters, operations, and latency as entire sections of the original network can be removed.
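The exhaustive search over exit activation states can be sketched as follows, where `validate` is a hypothetical callback returning the validation accuracy of the ensemble restricted to a subset of exits, with the training-time output weights reused:

```python
from itertools import combinations

def prune_exits(validate, n_exits):
    """Exhaustive post-training search over all non-empty combinations
    of exit activation states; returns the subset with the highest
    validation accuracy, to be evaluated on the test set afterwards."""
    candidates = [s for k in range(1, n_exits + 1)
                  for s in combinations(range(n_exits), k)]
    return max(candidates, key=validate)
```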
## IV Experiments Setup
The experiments conducted to evaluate the effectiveness of the AEP technique included several ConvNet architectures and many of the major image classification benchmarks. For an analysis characterized by high completeness, all experiments were evaluated with two different input scales, and each ConvNet was trained both through training from scratch and through fine-tuning from the weights learned on ImageNet.
### _Networks_
Fig. 1: Outline of the AEP technique. \(O_{i}\) represents the output of the i-th exit stage with linear activation. The loss blocks apply the categorical cross-entropy, while the final block applies the argmax function used to obtain the class prediction. \(\alpha_{i}\) and \(\beta_{i}\) represent the weights assigned to the loss and output of exit stage \(i\), respectively. The activation state of the outputs refers to the pruning step with which the proposed method terminates.
The AEP technique was tested on 5 well-known network architectures, thus encompassing many of the most relevant architectural trends and patterns of ConvNets. The goal was to span different network sizes, design patterns and inner components. An abstract and general architecture, observable in Figure 1, was constructed as a theoretical model for the various ConvNets considered. By identifying the recurring cells in these architectures, it was possible to identify the list of stages to which early exits should be attached. At this point, each network could be enriched with a function to extract a subnetwork by specifying which exits to preserve. Below is the list of networks used in the experiments:
* **ResNet50**[5]: In terms of size, it is in the middle of the ResNet family of architectures. ResNets are characterized by the use of skip/residual connections between layers, which should help in improving gradient flow through the network.
* **VGG16**[3]: It is a purely sequential ConvNet, and while smaller than other VGG models, it is still by far the most demanding in terms of operations required.
* **DenseNet169**[6]: It sits in the middle of the DenseNet family of architectures, which are characterized by a dense-block structure in which the features produced by a layer are not only passed to the successive layers but also concatenated with their outputs, so that the information in a block is better preserved and exploited.
* **MobileNetV3Small**[7]: Designed to work under mobile settings, it was a good candidate to see what would happen with tiny architectures. It is also characterized by the use of a particular type of block called the Residual Inverted Bottleneck, characterized by high memory efficiency.
* **EfficientNetB5**[8]: Part of the EfficientNet family, currently one of the best performing architecture families, this model was chosen because, with its more than 500 layers, it allowed us to study a huge architecture, in contrast with MobileNetV3Small.
For each of the 5 networks, the classifier present in the original single-exit architecture was extracted and replicated for each of the early exits. The only exit stage altered from the original structure was that of the VGG16 model, which by default is extremely large and over-parameterized. It was substituted with a simpler Global Average Pooling layer followed by a dense layer with a number of units equal to the number of classes.
The experiments considered each network both with the number of early exits fixed at 4, for comparison purposes, and with a number of early exits equal to the number of stages in the original model. Specifically, while the experiments on ResNet50 and DenseNet169 did not need to be repeated, since 4 already matched their number of stages, VGG16 and MobileNetV3Small also required a set of experiments with 5 exits, and EfficientNetB5 required tests with 7 exits. We refer to these 3 networks by adding the "full" keyword to their names: **VGG16full**, **MobileNetv3Smallfull** and **EfficientNetB5full**, bringing the total number of networks tested to 8.
### _Datasets and Metrics_
In this study, a thorough experimental evaluation of the AEP algorithm was conducted using six diverse image classification datasets. These datasets were chosen to cover a range of characteristics, including variations in class imbalance and image modality, in order to assess the generalizability and robustness of the proposed algorithm. The datasets used in this study spanned a range of class sizes, including datasets with tens to hundreds of classes, and are summarized in Table II.
The evaluation included comparing the performance of baseline and ensemble networks under different scenarios, including training from scratch and fine-tuning from ImageNet weights for all networks as well as using images of sizes 224x224 and 64x64 for all datasets. For each experiment, a set of performance metrics were collected, including Top1 accuracy, number of parameters, number of MACs (or Multiply-ACcumulate operations), latency, and training time. Additionally, for multi-output networks, the Top1 accuracy for each exit was also recorded.
### _Hyperparameters Setting_
The goal of this research was to evaluate the performance of single-exit networks in comparison to early-exit ensembles rather than matching or beating state of the art results. To accomplish this, a simple training approach was employed, using 100 training epochs, a batch size of 64, and a learning rate of \(10^{-4}\) with the Adam optimizer. The other training parameters were left at their default values in PyTorch. An early stopping technique was applied, with a patience of 12 epochs, based on the validation loss. No data augmentation or learning rate schedules were utilized. The only preprocessing applied to the images was resizing to 224 or 64 through bicubic interpolation and normalization. The small learning rate was chosen to ensure that the same hyperparameters could be used in fine-tuning, thereby ensuring comparable results. The models were implemented using PyTorch and executed on an NVIDIA Quadro RTX 6000 GPU.
## V Results and Discussion
This section describes the results of the experiments conducted in the present research. The analysis begins by comparing the variations in accuracy between single-output architectures and the proposed configurations. Next, the benefits of the pruning step are shown, in terms of accuracy gain, parameter reduction, optimisation of MACs and faster inference time. Finally, the accuracy performance of early exits without ensemble compared to single-output models is analysed. The symbol '*' indicates early exits experiments whose networks
have been subjected to pruning. In contrast, the absence of the symbol '*' represents full-ensemble networks.
### _Classification Accuracy_
Table III shows the results of the experiments performed in terms of the percentage change in Top1 accuracy compared to the single-output experiments, grouped by neural network in the left column and by dataset in the right column.
With regard to the experiments performed with 224x224 images and traditional training (TRAIN-224), excellent average improvements can first of all be appreciated regardless of the strategy with which AEP was applied. For each neural network
and each dataset, it is possible to identify a configuration that improves the accuracy obtained. In particular, as far as the datasets are concerned, the greatest advantages are found in CIFAR100 and TinyImageNet, which respectively obtain an average accuracy improvement of 16.23% with the EEmix* configuration and 13.94% with the EEdesc* configuration. With regard to network configurations, the improvement achieved by ResNet50 and both MobileNetV3-small versions is noteworthy.
The general considerations made for the TRAIN-224 scenario are confirmed and reinforced by the results of the experiments with 64x64 images and traditional training (TRAIN-64). In this case, with the same dataset and neural network, the problem to be solved is more complex due to the reduced amount of input information. In this context, AEP is beneficial overall, increasing the quality of the models regardless of their configuration and dataset; in some cases, the improvement is large enough to make the difference between a poorly performing model and a winning one. In this scenario, the EEdesc* technique is clearly the favourite, improving accuracy performance by 15% on average.
Moving from the training scenario to the fine-tuning one, and in particular to the one with 224x224 images (FINETUNE-224), a different behavior can be observed. Since these models are pre-trained on ImageNet images of similar size, the addition of early exits and ensembles seems to have a negative impact on average. This is reasonable given that the starting models already have an optimized set of weights that facilitates accurate predictions from the single output at the bottom of the network. This assumption is confirmed by the counterexample of the EEasc and EEasc* configurations: as Table III shows, the use of ascending weights in the loss and inference phases turns out to be more suitable, as it strongly favors the contribution of the outputs close to the single output of the original model.
Finally, looking at the results of the last pool of experiments, i.e., those with fine-tuning and 64x64 images (FINETUNE-64), a different behavior can be observed compared to the previous case. The change of spatial domain resulted in an overall improvement in the accuracy of the AEP-based models over their single-output counterparts. This is probably because the problem complexity grows as the input size shrinks, which allows better exploitation of classifiers trained on lower-level, less elaborate features; this is not the case with large images, which require more complex transformations. The use of ascending weights turns out to be the winning move here as well, improving model accuracy by an average of 4.79%.
In general, it can be argued that ensemble networks with early exits improve much more over the baseline when trained from scratch than when fine-tuned. This is reasonable because, when training from scratch by jointly optimising the exits, the network is able to improve the features of each layer to achieve a good classification result, producing better features in the layers closer to the network input. Fine-tuning, on the other hand, starts from the weights obtained after training single-output networks, which means that the features of the initial layers are not good classification features by themselves, but do produce good classification features in the final layers. Adding early exits therefore leads to contradictory behaviour, as if the optimisation were trying to override the initial weights to solve a different task. In fact, by assigning higher weights to exits closer to the input, one ends up assigning high classification importance to layers that should not be able to carry it, which can lead to worse performance than the baseline.
Regarding the effect of the pruning step, the same behaviour as described for full ensembles can be observed, with slight improvements in accuracy. Table IV collects and compares the accuracy performance of the pruned versions of the early exit ensemble networks with their full-ensemble counterparts. Among the different types of networks, EEdesc networks seem to benefit the most from the pruning phase, particularly in fine-tuning, the same learning context in which they perform the worst.
### _Parameters, Operations and Latency_
The average effect of applying AEP in terms of parameters is summarised in Table V. As can be seen from the first column, the addition of early exits has a minimal impact, increasing the parameter count by only 2.88% on average compared to single-exit architectures. The key aspect is the impact of the pruning step, which in general brings considerable benefits. Focusing on the TRAIN-64 scenario, the EEmix* configuration yields an average parameter reduction of 41.02% together with an average accuracy gain of 13.31%, as described in Table III.
With regard to the number of MACs, the average results of the experiments conducted are summarised in Table VI. For this metric too, the addition of early exits alone has a negative impact, albeit with an average computational increase of only 5%. On the contrary, it is possible to observe the beneficial effect of pruning, which in addition to providing the previously discussed benefits, reduces the number of operations by up to 17.95% in the EEmix* case.
A similar behaviour can be observed in the inference latency domain, as shown by the results of the experiments summarised in Table VII. Adding more outputs and computing their weighted sum is clearly disadvantageous from a timing point of view. In percentage terms, this phenomenon is more pronounced in architectures with 64x64 inputs. The removal of unnecessary outputs performed by the pruning operation inevitably benefits inference times, as the input image will on average have to travel through only a portion of the architecture rather than its entirety. Fine-tuning scenarios tend to benefit less from the pruning step precisely because, as can be deduced from Table V and Table VI, on average only the very last outputs, or none at all, are cut out. Consequently, although mitigated, the addition of early exits is only partially beneficial, as opposed to scenarios characterised by training from scratch.
### _Early Exits Analysis_
An interesting aspect is certainly the quality of the classifiers obtained prior to the pruning operation. The proposed training, whether from scratch or by fine-tuning, has the purpose of optimising the ensemble to be pruned, but it has the secondary effect of computing weights that can be used directly by the classifiers attached to the early exits. For each of the scenarios considered, Table VIII presents the average percentage change in accuracy of the early exits common to all configurations compared to the average performance achieved by single-output networks.
The first notable result is the presence of early exits performing better than the single-output network. This occurs in every scenario except FINETUNE-224, with average accuracy improvements of up to 9.96%. This confirms not only the effectiveness of AEP as an ensemble-based training technique, but also opens up the possibility of realising shallower and at the same time more performant architectures, which is particularly useful in mobile, edge, or distributed setups. Focusing on the TRAIN-64 and TRAIN-224 scenarios, it is evident that both the third and fourth outputs represent an advantageous alternative to the traditional single output regardless of the AEP configuration used. This result is perhaps the most significant. It is worth reflecting on the fact that good human intuitions, such as increasing the complexity of a model, do not necessarily translate into improvements in performance. On the contrary, better results can be achieved by optimising the learning process and exploiting various levels of abstraction of the input data, as is done through the use of multiple outputs.
## VI Conclusion
This paper presented Anticipate, Ensemble and Prune (AEP), an early-exit and ensemble-based technique for improving the performance of single-output artificial neural networks. An extensive series of experiments demonstrated how this approach can be applied to image classification, achieving remarkable improvements in accuracy, parametric complexity, number of operations and inference time. The experiments were also replicated in the context of adapting pre-trained networks via fine-tuning, with excellent results, especially in the absence of high-resolution input images. It was also shown that AEP can optimise the performance of individual early exits, exceeding the accuracy of single-exit models even without using the ensemble technique. We believe this can be an important step towards optimising the efficiency of neural architectures, especially for mobile and edge scenarios. In a follow-up study, AEP will include an automatic optimisation process to find the best loss weights and a search process to find the best output weights, which is likely to lead to further improvements and potentially reveal undiscovered design patterns.
## Acknowledgment
The European Commission has partially funded this work under the H2020 grant N. 101016577 AI-SPRINT: AI in Secure Privacy-pReserving computINg conTinuum.
|
2305.11196 | Non-volatile Reconfigurable Digital Optical Diffractive Neural Network
Based on Phase Change Material | Optical diffractive neural networks have triggered extensive research with
their low power consumption and high speed in image processing. In this work,
we propose a reconfigurable digital all-optical diffractive neural network
(R-ODNN) structure. The optical neurons are built with Sb2Se3 phase-change
material, making our network reconfigurable, digital, and non-volatile. Using
three digital diffractive layers with 14,400 neurons on each and 10
photodetectors connected to a resistor network, our model achieves 94.46%
accuracy for handwritten digit recognition. We also performed full-vector
simulations and discussed the impact of errors to demonstrate the feasibility
and robustness of the R-ODNN. | Chu Wu, Jingyu Zhao, Qiaomu Hu, Rui Zeng, Minming Zhang | 2023-05-18T14:04:37Z | http://arxiv.org/abs/2305.11196v1 | Non-volatile Reconfigurable Digital Optical Diffractive Neural Network Based on Phase Change Material
###### Abstract
Optical diffractive neural networks have triggered extensive research with their low power consumption and high speed in image processing. In this work, we propose a reconfigurable digital all-optical diffractive neural network (R-ODNN) structure. The optical neurons are built with Sb\({}_{2}\)Se\({}_{3}\) phase-change material, making our network reconfigurable, digital, and non-volatile. Using three digital diffractive layers with 14,400 neurons on each and 10 photodetectors connected to a resistor network, our model achieves 94.46% accuracy for handwritten digit recognition. We also performed full-vector simulations and discussed the impact of errors to demonstrate the feasibility and robustness of the R-ODNN.
Chu Wu,1,4 Jingyu Zhao, 1,4 Qiaomu Hu,1,2 Rui Zeng,1,2 and Minming Zhang1,2,3,* \({}^{*}\) [email protected]
## 1 Introduction
Deep learning is a machine learning method that enables computers to finish complex tasks by simulating an artificial neural network (ANN) [1]. It has recently had a drastic impact on data processing owing to its superior performance over traditional methods. It has widespread applications such as image recognition [2], automatic driving, signal processing, and natural language processing [3]. However, electronic neural network implementations face difficulties such as high energy consumption and long processing times [4, 5].
The diffractive deep neural network (D\({}^{2}\)NN) has attracted extensive research for its low-cost scalability and structural simplicity. Implementations with 3D-printed diffraction surfaces and terahertz sources have achieved satisfying results on tasks like image recognition [6, 7, 8]. Metasurface-based D\({}^{2}\)NNs [9, 10] further allow miniaturization and on-chip integration. After the fabrication of the model, only the illuminating light and the photodetectors consume power. Therefore, these designs are superior in power efficiency to traditional electronic neural networks [7, 10]. Since D\({}^{2}\)NNs use subwavelength neurons [11, 12], migrating the design to shorter wavelengths, such as the more accessible near-infrared or visible light, is a way to make the architecture more compact. However, the feature size of the neurons then becomes so small that manufacturing is a significant problem. For example, a 3D-printed D\({}^{2}\)NN that modulates the phase by surface height requires controlling the height of each neuron at the nanometer scale, which is nearly impossible for existing processes. Diffractive neural networks are also vulnerable to misalignment and manufacturing errors [13]. To make D\({}^{2}\)NNs reconfigurable, researchers build active diffractive networks with devices including spatial light modulators (SLM), digital micromirror devices (DMD) [14], or reprogrammable metasurfaces [11]. These designs have achieved outstanding reconfigurability and accuracy. However, the active layers modulating the light are volatile and consume much more power than passive designs [14, 15]. The SLM-based schemes also face difficulty in integration because of their large size.
In this work, we propose a phase-change-material-powered reconfigurable digital all-optical diffractive neural network (R-ODNN). Phase-change material (PCM) based schemes provide a robust platform with convenient regulation and non-volatile modulation [16, 17]. Our network is reprogrammable to cope with different image recognition tasks without remanufacturing its structure. The digital states of our optical neurons correspond to the crystalline and amorphous states of the phase-change material Sb\({}_{2}\)Se\({}_{3}\) [18]. To fit the characteristics of the material, the R-ODNN is digitalized [19]. Since digital devices cannot be trained directly with gradients, we use a straight-through estimator (STE) method to solve the derivative problem. Featuring three diffractive layers with 14,400 neurons on each layer and ten photodetectors as output, our model achieves 93.8% recognition accuracy on the MNIST handwritten digit dataset. The network's performance can be boosted to 94.46% with a correcting resistor network, which also dramatically improves the resilience of our network to misalignment errors.
## 2 Design
Figure 1(a) shows the structure of the R-ODNN. It consists of three digital diffractive layers, one output layer with ten photodetectors, and a correcting layer. Each layer is 120\(\upmu\)m\(\times\)120\(\upmu\)m in size and is separated by 50\(\upmu\)m from its neighboring layers. A diffractive layer consists of 120\(\times\)120 digital optical neurons. The wavelength of the incident light is 1.55\(\upmu\)m. At this wavelength, the refractive indices of the Sb\({}_{2}\)Se\({}_{3}\) phase-change material in the crystalline and amorphous states are 3.28 and 4.05, respectively [18, 20]. The correcting layer is a fully-connected layer with 10 inputs and 10 outputs. Figure 1(b) shows an equivalent neural network structure consisting of 3 hidden layers, a photodetector layer, and a fully-connected output layer. These layers represent the phase-change-material diffractive layers, the photodetectors, and the correcting layer, respectively.
Figure 2(b) shows the basic structure of the optical neuron. Optical phase-change material fills a rectangular etching in the silicon dioxide substrate. The etched hole is 1\(\upmu\)m deep and has a side length of 0.8\(\upmu\)m, placed on a 1\(\upmu\)m\(\times\)1\(\upmu\)m grid on the diffractive layer. We introduce the parameters \(\Theta\) and \(K\) to describe the neurons' behavior in the equivalent neural network. \(\Theta\) is the phase shift difference between the crystalline and amorphous states, which is \(\pi\) in our case. \(K\) is the ratio of the transmittances of the two neuron states and is set to 1.
Figure 1: Schematics of the R-ODNN. (a) The physical structure of the R-ODNN. (b) The design of the equivalent neural network.
Figure 2: Structure of the optical neurons on the diffractive layer. (a) Partial top view of the diffractive layer. (b) Single neuron structure and parameters.
The forward propagation process of the layers in the equivalent neural network is shown in Fig. 3. When the light passes through a neuron, it applies different amplitude and phase shifts to the injected light, according to Eq. 1 and 2.
\[n_{i}^{l}=m_{i}^{l}T\big{(}k_{i}^{l},\theta_{i}^{l}\big{)} \tag{1}\]
\[T\big{(}k_{i}^{l},\theta_{i}^{l}\big{)}=k_{i}^{l}\exp\big{(}j\theta_{i}^{l} \big{)} \tag{2}\]
\(T\big{(}k_{i}^{l},\theta_{i}^{l}\big{)}\) is the neuron transmission function of our model, where \(\theta_{i}^{l}\) stands for the phase shift of the \(i\)-th neuron on the \(l\)-th layer, \(k_{i}^{l}\) is the amplitude modulation of that neuron, and \(m_{i}^{l}\) is the optical input of the neuron. In our network, \(\theta_{i}^{l}\) is the trainable neuron weight and \(k_{i}^{l}\) is kept at 1. When the light propagates between layers, the diffraction pattern is calculated with angular spectrum diffraction theory [21, 22], and the input of the next layer is obtained as:
\[H\big{(}f_{x},f_{y}\big{)}=e^{jkz\sqrt{1-\big{(}\lambda f_{x}\big{)}^{2}-\big{(}\lambda f_{y}\big{)}^{2}}} \tag{3}\]
\[m_{i}^{l+1}(x,y)=n_{i}^{l}(x^{\prime},y^{\prime})*F^{-1}\big{[}H\big{(}f_{x},f _{y}\big{)}\big{]} \tag{4}\]
Here, \(H\big{(}f_{x},f_{y}\big{)}\) is the frequency-domain transmission function of diffraction over the propagation distance \(z\), \(k\) is the wavenumber, and \(F^{-1}\) is the inverse Fourier transform. This procedure calculates the input \(m_{i}^{l+1}\) of the next layer, which then enters the neuron transmission function given in Eq. 2. When the light reaches the photodetectors on the final layer, the detectors read the intensity of the diffraction pattern to obtain classification results.
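To make Eqs. 1-4 concrete, the following is a minimal numerical sketch, not the authors' code, of one binary-phase diffractive layer followed by free-space propagation with the angular spectrum method; the helper names and the plane-wave input are illustrative assumptions, and evanescent components are simply discarded.

```python
import numpy as np

def propagate(field, wavelength, pitch, z):
    # Angular spectrum propagation (Eqs. 3-4): multiply the spectrum by H, then invert.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def layer(field, theta):
    # Neuron transmission (Eqs. 1-2) with unit amplitude, k = 1.
    return field * np.exp(1j * theta)

# Illustrative values from the text: 120x120 neurons, 1 um pitch,
# 1.55 um wavelength, 50 um layer spacing, binary phases {0, pi}.
rng = np.random.default_rng(0)
theta = np.pi * rng.integers(0, 2, size=(120, 120))
field = np.ones((120, 120), dtype=complex)            # plane-wave input
out = propagate(layer(field, theta), 1.55e-6, 1e-6, 50e-6)
intensity = np.abs(out) ** 2                          # what the detectors read
print(intensity.sum())
```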
The cascaded correcting layer is a fully-connected layer that further boosts the accuracy and compensates for system errors. In the physical model, a resistor network or a digital signal processor may perform this task. The output of the optical section, a 10-element vector \(X\), is remapped through this layer into a vector of the same size. \(L\) is the final output of the entire network. Equation 5 describes the mapping process.
\[L=WX \tag{5}\]
Figure 3: Forward propagation and neuron transmission function in the equivalent neural network
Figure 4: The training process of the optical section of the equivalent neural network.
The training process of the optical network uses backpropagation, as Fig. 4 shows. The output of the optical section is a 10-element vector \(X\), which corresponds to the readings of the photodetectors. The results are normalized with the softmax function. We can then calculate the loss of the network with the cross-entropy function as follows.
\[Y_{i}=\frac{\exp\bigl{(}X_{i}\bigr{)}}{\sum_{j=1}^{10}\exp\bigl{(}X_{j}\bigr{)}} \tag{6}\]
\[L_{CELoss}=-\sum_{i=0}^{9}I_{i}\log_{2}Y_{i} \tag{7}\]
\(I_{i}\) is the picture's label in one-hot encoding, and \(Y_{i}\) is the output of the softmax function, where \(i\) indexes the \(i\)-th category of the label or result. The index of the photodetector with the highest output voltage gives the classification result.
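For reference, a short sketch of Eqs. 6-7, softmax normalization followed by cross-entropy; the detector readings below are made-up numbers, not measured values.

```python
import numpy as np

X = np.array([0.1, 0.3, 2.2, 0.0, 0.5, 0.4, 0.1, 0.2, 0.0, 0.3])  # detector readings
I = np.zeros(10); I[2] = 1.0                                       # one-hot label

Y = np.exp(X - X.max())            # shift for numerical stability
Y /= Y.sum()                       # softmax, Eq. 6
loss = -np.sum(I * np.log2(Y))     # cross-entropy, Eq. 7
pred = int(np.argmax(X))           # the brightest detector wins
print(pred, loss)
```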
Since the phase-change material has only two stable states, the transmission parameters of the neurons should also be binary. In our network, we do not binarize during forward propagation; instead, we use a penalty function to make the parameters gather around the desired binary values. The penalty function is the sum of the \(l_{2}\)-norms of the differences between a real-valued weight and the two target binary values, \(0\) and \(\Theta\).
\[P\bigl{(}\theta_{i}^{l}\bigr{)}=\Gamma\times\bigl{(}\bigl{\|}\theta_{i}^{l} \bigr{\|}+\bigl{\|}\theta_{i}^{l}-\Theta\bigr{\|}\bigr{)} \tag{8}\]
In our network, the function in Eq. 8 gives a larger penalty to input values far from 0 and \(\pi\). As Fig. 5 shows, the neuron transmission function returns a complex number with phase \(\theta_{i}^{l}\) and modulus 1. The penalty forces the phase \(\theta_{i}^{l}\) to move towards the two binarization targets, 0 and \(\pi\), as the green arrow in Fig. 5 shows.
\[L_{loss}=L_{CELoss}+\sum_{i}\sum_{l}P\big{(}\theta_{i}^{l}\big{)} \tag{9}\]
Eq. 9 is the final loss function. During the training process, the penalty function provides an additional force alongside the Adam optimizer, pushing the neuron weights towards or away from the binarization target values, 0 and \(\pi\). The penalty coefficient \(\Gamma\) controls the direction and magnitude of this force. To achieve minimum binarization loss, we first use a negative \(\Gamma\) value to gather the neuron weights around the binarization thresholds, \(\frac{\Theta}{2}\) and \(-\left(\pi-\frac{\Theta}{2}\right)\). This gives a better initial value for the subsequent training process. Later, we reset \(\Gamma\) to a small positive value and increase it during training. In this way, the weights are gradually forced to gather around the target values, 0 and \(\Theta\).
After training, we apply a binarizing function to all neuron weights. \(\theta_{i,b}^{l}\) and \(\theta_{i}^{l}\) are to represent the binary and real-valued transmission parameters, respectively. Their relationship is described in Eq. 10.
\[\theta_{i,b}^{l}=\begin{cases}0,-\left(\pi-\frac{\Theta}{2}\right)<\theta_{i} ^{l}\leq\frac{\Theta}{2}\\ \Theta,\qquad\text{else}\end{cases} \tag{10}\]
After the binarization process, the weights \(\theta_{i,b}^{l}\) correspond to the final states of the optical neurons.
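A minimal sketch of the penalty of Eq. 8 and the binarization rule of Eq. 10, assuming \(\Theta=\pi\) as in the text; the function names and sample weights are illustrative.

```python
import numpy as np

THETA = np.pi

def penalty(theta, gamma):
    # Eq. 8: sum of distances to the two binary targets 0 and THETA.
    return gamma * (np.abs(theta) + np.abs(theta - THETA))

def binarize(theta):
    # Eq. 10: snap each weight to 0 or THETA depending on the thresholds.
    lo, hi = -(np.pi - THETA / 2), THETA / 2
    return np.where((theta > lo) & (theta <= hi), 0.0, THETA)

theta = np.array([-0.2, 0.3, 1.4, 2.9, 3.3])  # example real-valued weights
print(penalty(theta, gamma=0.01))
print(binarize(theta))
```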
Figure 5: The \(l_{2}\)-norm-based penalty function. The weight \(\theta_{i}^{l}\) is a trainable parameter of the neuron. The penalty function forces \(\theta_{i}^{l}\) to move towards the target binary value 0 or \(\Theta\) at a rate defined by \(\Gamma\). Here, the binary target values are 0 and \(\pi\), respectively.
After setting the optical section, we cascade the corrective section onto the network and train it separately. As Fig. 6 shows, the output current of each photodetector is the input of the corrective section, and the output of the adders gives the final classification result of the neural network.
We also optimize this section of the network using the softmax normalization and cross-entropy loss function to calculate the loss and the Adam method to optimize the weights of the fully-connected layer.
To match our physical model, we first train the optical section and apply the result to the optical neurons. A cascaded fully-connected layer brings a significant performance uplift for the network, especially under different error types [13]; in our case, the corrective section takes this role. Fig. 7 shows the accuracy of the R-ODNN during the training of the optical section and the corrective section. The first 300 epochs show the accuracy variation as the penalty coefficient \(\Gamma\) varies. The subsequent 200 epochs are the final phase of training after \(\Gamma\) is set to its maximum. The epochs after Epoch 500 correspond to the training of the correcting section. The initial values of the correcting section are set randomly, so the accuracy drops during the first few epochs.
Figure 6: The physical structure and the schematics of the correcting resistor network. The fully-connected output layer determines the values of the resistors.
## 3 Results and discussion
To further verify the feasibility and accuracy of our neural network, we perform a 3D finite-difference time-domain (FDTD) full-vector simulation via commercial software (Lumerical FDTD Solutions) for a miniature version of the model. In the neural network, the phase shift of each optical neuron is substituted directly into the diffraction integral for a layer-by-layer calculation to obtain the output. For the vector simulations, the values of the optical neurons are transformed into the refractive index distribution of each diffractive layer, and the desired diffraction pattern is then obtained by FDTD simulation. We calculated the diffraction pattern of each layer using the FDTD method; Figure 8 gives an example.
Figure 8: The intensity of light and index distribution of each layer. The output diffraction pattern and the position of photodetectors are shown on the right.
Figure 7: Measured network accuracy with and without the correcting resistor network. The first 300 epochs use a dynamic penalty coefficient to achieve binarized weights.
With a similar method, we calculated several output diffraction patterns of the system. As shown in Figure 9, the diffraction patterns calculated by the two methods match very well in most situations. However, minor hot spots in the final diffraction patterns could interfere with the classification results. These problems can be mitigated by training the correcting network online.
According to Fig. 9, our scalar-diffraction-theory-based neural network simulates the actual situation well in general. Therefore, we can approximate the accuracy of our physical model with the neural network. The accuracy is shown in Fig. 10: the version with the correcting network reaches 94.5% in the end, while the all-optical model achieves an accuracy of 93.6%.
Figure 10: The confusion matrix of the neural network after 200 epochs of training. (a) accuracy without the correcting layer; (b) accuracy with the correcting layer.
Figure 9: The results of neural network calculation and FDTD vector simulation for handwritten digit classification. The energy distribution simulated by the two methods matches well in most situations.
We trained the model with layer distances set at 30, 40, 45, 50, 55, 60, 65, and 70\(\upmu\)m. As Fig. 11 shows, the classification accuracy does not change much as the layer distance varies. Considering the assembly process and the stability of accuracy, we choose a 50-55\(\upmu\)m layer distance for our model.
We apply errors of different magnitudes to the neuron, including its side length, thickness, and refractive index, to investigate the performance of our optical neuron. Figure 12 shows the variation of the phase difference and the ratio of transmittances between the two PCM states when the characteristics of the neuron change.
Figure 11: The accuracy of the system with different numbers of layers and layer distance.
Figure 12: The phase shift difference and the ratio of transmittance between two states of the neuron upon different types of error. The phase shift difference is the phase difference of the output light field between two states of the neuron. \(T_{c}\) and \(T_{a}\) are the transmittances of the neuron in the crystalline and amorphous states, respectively. The ideal value of the phase shift difference is \(\pi\), and the ratio of transmittances is 1. The green area indicates the \(\pm\)20% phase shift difference error around \(\pi\). (a) Measured neuron performance with different neuron thicknesses, from 0.95\(\upmu\)m to 1.05\(\upmu\)m. (b) Measured neuron performance with different side lengths of the neuron, from 0.7\(\upmu\)m to 0.9\(\upmu\)m. (c) Measured neuron performance with index shift errors. The refractive index of the PCMs may deviate from the theoretical value in practice. The index shift error is measured by the deviation from the ideal index of the neuron, from -0.05 to +0.05.
Moreover, we discuss two types of manufacturing errors, assembly errors and neuron errors, to further evaluate the robustness of our R-ODNN. Assembly errors are caused by the displacement of the diffractive layers. In the R-ODNN, the position of each diffractive layer in space is determined by the model's parameters, and the accuracy decreases when the diffractive layers are misspaced along the optical axis.
Figure 13 shows that the R-ODNN is sensitive to optical-axis misalignment, with a severe accuracy degradation occurring at a deviation of about 1\(\upmu\)m. However, the correcting layer introduced earlier can be trained online, so we can retrain it to compensate for errors after the optical structure of the R-ODNN has been fabricated. The R-ODNN can then maintain an accuracy above 85% over a large tolerance range.
The second manufacturing error we consider is the error in the optical characteristics of the neurons, including the transmittance and the phase shift difference. In our model, each neuron is described by the two parameters \(\Theta\) and \(K\). These characteristics are determined by the neuron's side length, thickness, and refractive index. Figure 12 shows the variation of the phase difference and the ratio of transmittances between the two PCM states when the characteristics of the neuron change, and Figure 14 shows the decrease in accuracy with the phase difference and transmittance errors.
Figure 13: The accuracy degradation of the R-ODNN under layer distance error. The orange line shows the restoration of performance brought by the correcting layer.
The simulated accuracy of the R-ODNN drops little when the error is within \(\pm 20\%\). Retraining the correcting layer online makes the degradation caused by errors even lower. According to Fig. 14(b), the network accuracy hardly drops when the ratio of transmittances varies from 0.8 to 1.2. After retraining the correcting network, the accuracy can be further stabilized above 85%. Altogether, our network has a high tolerance to optical neuron errors.
We ran an additional simulation on the Fashion-MNIST dataset to verify the reconfigurability of our R-ODNN. It achieved an accuracy of 84.5% for the setup including the corrective section, and 83.7% for the optical-only setup. The R-ODNN achieves this by simply reconfiguring the states of the optical neurons and the values of the correcting layer.
Figure 14: Degradation of network accuracy due to errors in the phase shift difference and the ratio of transmittances. The orange curve shows the accuracy after retraining the correcting layer. (a) Accuracy of the R-ODNN when the phase difference of the neurons shifts from \(0.8\pi\) to \(1.2\pi\). (b) Accuracy of the R-ODNN when the transmittances of the two neuron states differ. The orange dashed line is the 94.45% baseline classification accuracy.
In Fig. 15(b) and (c), we can see that the errors for labels 0, 4, and 6 are significantly higher. This is probably because the Fashion-MNIST image classification task is more complex than MNIST. Since the current optical network is relatively small, the accuracy can drop when facing complex images. Adding more optical layers or increasing the size of each layer would further improve the classification accuracy.
## 4 Conclusion
This paper proposed a reconfigurable optical diffractive neural network (R-ODNN) structure. The model includes three digital diffractive layers, each 1\(\upmu\)m thick, where digital neurons with a side length of 0.8\(\upmu\)m are placed on a 1\(\upmu\)m-by-1\(\upmu\)m grid. With the resistive correcting network, the R-ODNN achieves 94.45% classification accuracy on the MNIST handwritten digit dataset and remains above 90% under the following errors: optical-axis alignment error below 1\(\upmu\)m, layer spacing error below 2%, neuron edge length error below 5%, thickness error below 15%, or refractive index error of the phase-change material below 10%, proving that our model is tolerant of manufacturing errors.
Figure 15: The results of neural network calculation and FDTD vector simulation on the Fashion-MNIST dataset. (a) diffraction pattern of neural network calculation and FDTD simulation. (b) Confusion matrix of the complete R-ODNN setup trained with the Fashion-MNIST dataset. (c) Confusion matrix of the optical-section-only R-ODNN setup trained with the Fashion-MNIST dataset.
The novel binarized neural network discussed in this paper not only reveals that simple, linear structures can perform well in optical systems but also lays the principal basis for an easily reconfigurable, offline-trainable, on-chip-integrable all-optical neural network device based on phase-change materials. Introducing non-linear optical layers could bring significant accuracy improvements to our network; therefore, research on non-linearity will be the topic of our future work. We also look forward to obtaining explicit performance figures in subsequent physical fabrication experiments.
## Funding
National Natural Science Foundation of China (62175076; 61775069; 51911530159).
**Disclosures.** The authors declare no conflicts of interest.
**Data availability.** Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2307.06287 | Rational Neural Network Controllers | Neural networks have shown great success in many machine learning related
tasks, due to their ability to act as general function approximators. Recent
work has demonstrated the effectiveness of neural networks in control systems
(known as neural feedback loops), most notably by using a neural network as a
controller. However, one of the big challenges of this approach is that neural
networks have been shown to be sensitive to adversarial attacks. This means
that, unless they are designed properly, they are not an ideal candidate for
controllers due to issues with robustness and uncertainty, which are pivotal
aspects of control systems. There has been initial work on robustness to both
analyse and design dynamical systems with neural network controllers. However,
one prominent issue with these methods is that they use existing neural network
architectures tailored for traditional machine learning tasks. These structures
may not be appropriate for neural network controllers and it is important to
consider alternative architectures. This paper considers rational neural
networks and presents novel rational activation functions, which can be used
effectively in robustness problems for neural feedback loops. Rational
activation functions are replaced by a general rational neural network
structure, which is convex in the neural network's parameters. A method is
proposed to recover a stabilising controller from a Sum of Squares feasibility
test. This approach is then applied to a refined rational neural network which
is more compatible with Sum of Squares programming. Numerical examples show
that this method can successfully recover stabilising rational neural network
controllers for neural feedback loops with non-linear plants with noise and
parametric uncertainty. | Matthew Newton, Antonis Papachristodoulou | 2023-07-12T16:35:41Z | http://arxiv.org/abs/2307.06287v1 | # Rational Neural Network Controllers
###### Abstract
Neural networks have shown great success in many machine learning related tasks, due to their ability to act as general function approximators. Recent work has demonstrated the effectiveness of neural networks in control systems (known as neural feedback loops), most notably by using a neural network as a controller. However, one of the big challenges of this approach is that neural networks have been shown to be sensitive to adversarial attacks. This means that, unless they are designed properly, they are not an ideal candidate for controllers due to issues with robustness and uncertainty, which are pivotal aspects of control systems. There has been initial work on robustness to both analyse and design dynamical systems with neural network controllers. However, one prominent issue with these methods is that they use existing neural network architectures tailored for traditional machine learning tasks. These structures may not be appropriate for neural network controllers and it is important to consider alternative architectures. This paper considers rational neural networks and presents novel rational activation functions, which can be used effectively in robustness problems for neural feedback loops. Rational activation functions are replaced by a general rational neural network structure, which is convex in the neural network's parameters. A method is proposed to recover a stabilising controller from a Sum of Squares feasibility test. This approach is then applied to a refined rational neural network which is more compatible with Sum of Squares programming. Numerical examples show that this method can successfully recover stabilising rational neural network controllers for neural feedback loops with non-linear plants with noise and parametric uncertainty.
## 1 Introduction
Neural networks (NNs) have shown to be highly effective in numerous machine learning tasks. Examples of these include but are not limited to: image recognition, weather prediction, natural language processing, autonomous vehicle technology, medical imaging and social media algorithms [1, 2, 3, 4, 5, 6, 7]. There have been numerous advancements that have contributed to their success such as the development of modern NN architectures [8, 9], the increase in computational power available [10, 11] and the availability of big data.
More recently, there has been increased interest in using NNs in control systems. One reason for this is the emergence of the parallel field of reinforcement learning. By harnessing the power of NNs, deep reinforcement learning has been used to create decision-making agents that greatly outperform humans in many complex tasks. Examples include the board game Go [12] and the video game Dota 2 [13]. Despite their success, there are significant issues with these methods. The learnt policies can perform poorly when the learnt environment differs from the real environment [14, 15, 16, 17]. Additionally, bounds to quantify their safety do not sufficiently describe the performance of the algorithm and can be overly conservative [18]. However, with new advancements in robust control and the success of NNs in reinforcement learning, there is strong motivation for work at their intersection.
We refer to control systems that contain NNs as controllers as neural feedback loops (NFLs). Most research in this area has addressed the robustness analysis of NFLs, where the NN's parameters are given and the task is to quantify the system's safety or robustness properties. The more challenging task, however, is to obtain the NN controller's parameters whilst enforcing robustness guarantees. NNs are useful since they can act as general function approximators [19, 20, 21, 22], but they contain a large number of parameters, which means that optimising over all of them is often computationally expensive.
One method to design the NN controller is by learning an expert control law using input-output data and then checking the robustness guarantees using analysis methods, [23, 24, 25]. However, this may lead to poor robustness certificates because no relevant objective is being optimised while training the NN. It is also possible to use reinforcement learning methods. This often involves training the controller by simulating the system trajectories and then updating the controller's parameters subject to maximising a reward function. However, there are significant challenges with this process; it is very computationally intensive, requiring a large amount of hyperparameter tuning and sometimes leads to undesirable behaviour [26]. The parameterised NN controller can then be analysed to obtain robustness certificates in a defined operating region, however these guarantees can be poor. It can be difficult for these approaches to outperform traditional control laws. Furthermore, adding additional non-linearities into the model through the NN's activation functions can increase the complexity of the closed-loop system.
Despite these drawbacks, recent methods have focused on addressing these issues by designing NN controllers whilst ensuring robustness guarantees in the process. Methods that focus on developing reinforcement learning algorithms such as [27, 28] are able to create an NN policy which can be combined with robust control guarantees. However, these approaches still suffer from other reinforcement learning issues such as requiring significant computational time and hyperparameter tuning. These drawbacks can be mitigated by instead trying to obtain an NN controller by learning from an optimal model predictive control law. This allows the NN controller to be trained offline and when implemented it can be significantly less computationally expensive than the full model predictive control law. To achieve this an SDP framework that incorporates integral quadratic constraints is presented in [29]. An iterative algorithm is used to alternate between optimising the NN parameters to fit the control law and maximising the region of attraction. However, these approaches require a known model predictive control law to optimise over the controller's input-output data, which may not be easy to obtain. This approach relies on a loop transformation and Schur complement to ensure the optimisation problem is convex. Similar approaches have been used for different NN architectures such as recurrent NNs [30] and recurrent equilibrium network controllers [31]. These methods also include a projected policy gradient algorithm and reinforcement learning to synthesise the controller, instead of requiring an expert control law.
### Our Contribution
One shortfall of many recent approaches such as [29, 30, 31] is that large convex approximations must be made, as the original problem is non-convex. For example, the ReLU and tanh activation functions must be sector bounded in the region \([0,1]\) with no constraints obtained from pre-processing bounds such as Interval Bound Propagation [32]. These sector constraints are shown in Figure 1. This can lead to the acquired robustness certificates being conservative. Another issue with these methods is that they rely on iterative approaches, where the algorithm alternates between optimising the performance of the controller and the stability guarantees. This can make the problem computationally expensive and intractable for large scale systems. They also require that the plant model is linear subject to sector bounded non-linearities.
A root cause of the problems with recent approaches is that they use traditional NN architectures with ReLU, sigmoid and tanh activation functions. These structures have shown to be very effective in many machine learning tasks, however they may not be preferable for use in control systems. By considering a class of NNs that are better aligned with control system techniques, these large convex approximations could be mitigated. Since Sum of Squares (SOS) programming is effective for systems with polynomial and rational functions, we investigate what happens when the NN is built upon them.

Figure 1: Sector constraints that are used for common activation functions when designing neural network controllers.
* Motivated by the success of rational neural networks in machine learning tasks, we propose novel rational activation functions to approximate the traditional sigmoid and tanh activation functions. We then show their effectiveness in analysing neural feedback loops that contain these rational activation functions.
* We note that we could be falling short of the potential expressivity of the neural network by using fixed activation functions. Therefore we consider a general neural network structure which is built upon equations that are convex in the design parameters. We show that a neural network of this form can be expressed in a similar way to that of a feed forward neural network with rational activation functions.
* We consider the Lyapunov stability criteria for constrained dynamics systems. We then propose a novel convex Sum of Squares procedure to recover a stabilising controller for a non-linear system through solving a Sum of Squares feasibility test.
* This procedure is extended to the generalised rational neural network architecture, to recover a stabilising rational neural network in a convex way. We then adapt the rational neural network architecture to make it more compatible with Sum of Squares programming and allow the stabilising controller to be recovered.
* We show for numerous examples that our proposed procedure and neural network architecture are able to effectively recover stabilising controllers for unstable systems with non-linear plants, noise and parametric uncertainty.
## 2 Preliminaries
### Sum of Squares Programming and The Positivstellensatz
In this section we outline how to formulate and solve SOS optimisation problems by solving an equivalent Semidefinite Program (SDP). For more details on SOS programming the reader is referred to [33, 34, 35, 36].
The fundamental idea behind SOS is to replace a polynomial positivity condition with a condition that enforces that the polynomial is a Sum of Squares.
**Definition 1**.: _A polynomial \(p(x)\) is said to be a Sum of Squares (SOS) polynomial if it can be expressed as_
\[p(x)=\sum_{i=1}^{m}r_{i}^{2}(x),\]
_where \(r_{i}(x)\in\mathbb{R}[x]\) and \(m\) is finite. We denote the set of polynomials that admit this decomposition by \(\Sigma[x]\) and say '\(p(x)\) is SOS'. Note that \(\mathbb{R}[x_{1},\ldots,x_{n}]\) is defined as the set of polynomials in \(x_{1},\ldots,x_{n}\) with real coefficients and we denote \(x=(x_{1},\ldots,x_{n})\) for simplicity._
**Definition 2**.: _A monomial in \(x=(x_{1},\ldots x_{n})\) is_
\[x^{\beta}=x_{1}^{\beta_{1}}x_{2}^{\beta_{2}}\ldots x_{n}^{\beta_{n}},\]
_where the exponent and degree are denoted as \(\beta=(\beta_{1},\ldots,\beta_{n})\in\mathbb{N}^{n}\) and \(|\beta|=\beta_{1}+\cdots+\beta_{n}\) respectively._
The column vector of monomials with only certain exponents is expressed as \(x^{\mathbb{B}}=(x^{\beta})_{\beta\in\mathbb{B}}\), where \(\mathbb{B}\subset\mathbb{N}^{n}\) is the set of exponents that are used in the monomials. The summation operation on \(\mathbb{B}\) is defined as
\[\mathbb{B}+\mathbb{B}:=\{\beta+\gamma\;:\;\beta,\gamma\in\mathbb{B}\}.\]
A polynomial \(p(x)\) can be written as a linear combination of a set of monomials \(x^{\mathbb{B}}\) for a set of coefficients \(p_{\beta}\in\mathbb{R}\) such that
\[p=\sum_{\beta\in\mathbb{N}^{n}_{d}}p_{\beta}x^{\beta},\]
where \(\mathbb{N}^{n}_{d}=\{\beta\in\mathbb{N}^{n}\;:\;|\beta|\leq d\}\) is the set of all \(n\)-variate exponents of degree \(d\) or less.
**Theorem 1**.: _A polynomial \(p(x)\) is SOS if and only if it can be written in what is referred to as a Gram matrix representation such that_
\[p(x)=(x^{\mathbb{B}})^{T}Qx^{\mathbb{B}},\]
_where \(Q\in\mathbb{S}^{|\mathbb{B}|}_{+}\) is a positive semidefinite matrix._
The Gram matrix representation can be rewritten as
\[(x^{\mathbb{B}})^{T}Qx^{\mathbb{B}}=\langle Q,x^{\mathbb{B}}(x^{\mathbb{B}})^{T}\rangle=\sum_{\alpha\in\mathbb{B}+\mathbb{B}}\langle Q,A_{\alpha}\rangle x^{\alpha},\]
where the symmetric binary matrix \(A_{\alpha}\in\mathbb{S}^{|\mathbb{B}|}\) for each exponent \(\alpha\in\mathbb{B}+\mathbb{B}\) is
\[[A_{\alpha}]_{\beta,\gamma}:=\begin{cases}1,\;\beta+\gamma=\alpha,\\ 0,\;\mathrm{otherwise}.\end{cases}\]
The following is therefore true
\[p(x)\in\Sigma[x]\Leftrightarrow\exists Q\in\mathbb{S}^{|\mathbb{B}|}_{+}\; \text{where}\;\langle Q,A_{\alpha}\rangle=p_{\alpha},\;\forall\alpha\in\mathbb{ B}+\mathbb{B}.\]
This means that checking the SOS condition is equivalent to solving an SDP, which can be achieved with parsers such as SOSTOOLS [37] in MATLAB.
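As a small numerical illustration of Theorem 1, and not part of SOSTOOLS, one can verify a Gram matrix representation directly; the polynomial and the monomial basis below are chosen purely for the example.

```python
import numpy as np

# p(x) = x^4 + 2x^2 + 1 with monomial basis z = (1, x, x^2):
# p = z^T Q z for the Gram matrix Q below.
Q = np.diag([1.0, 2.0, 1.0])

# Q is positive semidefinite, so p is SOS (Theorem 1).
print(np.linalg.eigvalsh(Q))          # all eigenvalues >= 0

# A Cholesky factor Q = R^T R gives the explicit decomposition
# p = (Rz)_1^2 + (Rz)_2^2 + (Rz)_3^2 = 1 + 2x^2 + x^4.
R = np.linalg.cholesky(Q).T
print(R)

# Sanity check at a sample point.
x = 0.7
z = np.array([1.0, x, x**2])
print(z @ Q @ z, x**4 + 2 * x**2 + 1)
```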
A central theorem of real algebraic geometry is known as the Positivstellensatz (Psatz) [38], which will now be briefly outlined. The Psatz provides an equivalent relation between an algebraic condition and the emptiness of a semi-algebraic set.
We express the semi-algebraic set with notation
\[S=\{x\in\mathbb{R}^{n}\mid g_{i}(x)\geq 0,\;h_{j}(x)=0,\;\forall\;i=1,\ldots,q_ {1},\;j=1,\ldots,q_{2}\}, \tag{1}\]
where \(g_{i}\) and \(h_{j}\) are polynomial functions.
**Definition 3**.: _Given \(f_{1},\ldots,f_{r}\in\mathbb{R}[x]\), the multiplicative monoid generated by the \(f_{k}\)'s is the set of all finite products of \(f_{k}\)'s, including 1 (i.e. the empty product). It is denoted as \(\mathcal{M}(f_{1},\ldots,f_{r})\)._
**Definition 4**.: _Given \(g_{1},\ldots,g_{q_{1}}\in\mathbb{R}[x]\), the cone generated by the \(g_{i}\)'s is_
\[\mathrm{cone}\{g_{1},\ldots,g_{q_{1}}\}=\Bigg{\{}s_{0}+\sum_{i=1}^{q_{1}}s_{i} G_{i}\mid s_{i}\in\Sigma[x],\;G_{i}\in\mathcal{M}(g_{1},\ldots,g_{q_{1}}) \Bigg{\}}. \tag{2}\]
**Definition 5**.: _Given \(h_{1},\ldots,h_{q_{2}}\in\mathbb{R}[x]\), the ideal generated by the \(h_{k}\)'s is_
\[\mathrm{ideal}\{h_{1},\ldots,h_{q_{2}}\}=\Bigg{\{}\sum_{j=1}^{q_{2}}t_{j}h_{j} \mid t_{j}\in\mathbb{R}[x]\Bigg{\}}. \tag{3}\]
**Theorem 2**.: _(Positivstellensatz, [38]) Given the semi-algebraic set \(S\) in (1), the following are equivalent:_
1. _The set_ \(S\) _is empty._
2. _There exist_ \(s_{i}\in\Sigma[x]\) _in (_2_) and_ \(t_{j}\in\mathbb{R}[x]\) _in (_3_) such that_ \[\mathrm{cone}\{g_{1},\ldots,g_{q_{1}}\}+\mathrm{ideal}\{h_{1},\ldots,h_{q_{2}} \}=0.\]
Based on Theorem 2 one can create a convex test for computational purposes by using a representation of the function \(p(x)\).
**Proposition 1**.: _Consider the set \(S\) in (1). If_
\[p=1+\sum_{j=1}^{q_{2}}t_{j}h_{j}+s_{0}+\sum_{i=1}^{q_{1}}s_{i}g_{i}, \tag{4}\]
_where \(s_{i}\in\Sigma[x]\) and \(t_{j}\in\mathbb{R}[x]\), then \(p(x)>0,\;\forall x\in S\)._
To test if the polynomial \(p(x)\geq 0\), \(\forall x\in S\) using SOS programming we can rewrite (4) as
\[p+\sum_{j=1}^{q_{2}}t_{j}h_{j}-\sum_{i=1}^{q_{1}}s_{i}g_{i}\in\Sigma[x],\]
where \(s_{i}\in\Sigma[x]\) and \(t_{j}\in\mathbb{R}[x]\). By selecting higher degree multipliers \(s_{i}\), \(t_{j}\) etc. we can obtain a series of set emptiness tests with increasing complexity and non-decreasing accuracy.
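As a toy illustration of Theorem 2 and Proposition 1, consider the clearly empty set \(S=\{x\,:\,x-2\geq 0,\;1-x\geq 0\}\). Taking \(s_{0}=0\) and \(s_{1}=s_{2}=1\) in the cone gives the certificate \(1+(x-2)+(1-x)=0\); the short check below, our own example rather than one from [38], verifies this identity symbolically.

```python
import sympy as sp

x = sp.symbols('x')
g1, g2 = x - 2, 1 - x   # inequality generators of S

# Constant SOS multipliers s0 = 0, s1 = s2 = 1.
certificate = 1 + 0 + 1 * g1 + 1 * g2
print(sp.simplify(certificate))  # prints 0, so S must be empty by Theorem 2
```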
### Stability of Neural Feedback Loops using Sum of Squares
In this section we outline the methods proposed in [25], which present a method to determine whether the equilibrium of a closed-loop NFL is stable and then use this to compute an inner approximation of the region of attraction.
Consider a continuous-time system
\[\dot{z}(t)=f(z(t),u(t)), \tag{5}\]
where \(f\) is the plant model, and \(z(t)\in\mathbb{R}^{n_{z}}\) and \(u(t)\in\mathbb{R}^{n_{u}}\) are the system states and inputs, respectively. \(n_{z}\) and \(n_{u}\) are the numbers of system states and inputs. This system is a continuous-time NFL if the controller \(u\) is an NN. Consider a state feedback controller \(u(t)=\pi(z(t)):\mathbb{R}^{n_{z}}\rightarrow\mathbb{R}^{n_{u}}\) given by a feed-forward fully connected NN such that
\[\begin{split} x^{0}(t)&=z(t),\\ v^{k}(t)&=W^{k}x^{k}(t)+b^{k},\;\mathrm{for}\;k=0, \ldots,\ell-1,\\ x^{k+1}(t)&=\phi(v^{k}(t)),\,\mathrm{for}\;k=0, \ldots,\ell-1,\\ \pi(z(t))&=W^{\ell}x^{\ell}(t)+b^{\ell},\end{split} \tag{6}\]
where \(W^{k}\in\mathbb{R}^{n_{k+1}\times n_{k}}\), \(b^{k}\in\mathbb{R}^{n_{k+1}}\) are the weights matrix and biases of the \((k+1)^{\text{th}}\) layer respectively and \(z(t)=x^{0}(t)\in\mathbb{R}^{n_{z}}\) is the input into the NN. The activation function \(\phi\) is applied element-wise to the \(v^{k}(t)\) terms. The number of neurons in the \(k^{\text{th}}\) layer is denoted by \(n_{k}\).
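A minimal sketch of evaluating the feed-forward controller (6); the layer sizes, the random weights and the choice of tanh activations are placeholders rather than a trained controller.

```python
import numpy as np

def controller(z, weights, biases):
    # Forward pass of Eq. (6): affine maps with element-wise activations,
    # followed by a final affine layer producing the control input u = pi(z).
    x = z
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(W @ x + b)
    return weights[-1] @ x + biases[-1]

rng = np.random.default_rng(0)
sizes = [2, 5, 5, 1]  # n_z = 2 states, two hidden layers, n_u = 1 input
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]
print(controller(np.array([0.1, -0.2]), weights, biases))
```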
As shown in [25], an NFL can be viewed as a dynamical system with equality and inequality constraints that arise from the input-output description of the NN. These constraints can be described as the semi-algebraic set in (1). Further constraints can be added to the semi-algebraic set when only local asymptotic stability is being verified. The region
\[D^{z}=\left\{z\in\mathbb{R}^{n_{z}}\,|\,d_{k}(z)\geq 0,\;k=1,\ldots,n_{d}\right\}, \tag{7}\]
is considered where the stability conditions will need to be satisfied.
**Proposition 2**.: _([25]) Consider System (5) in feedback with an NN controller given by (6). Suppose the input-output properties of the NN are described by (1), and consider the region given by (7). Suppose there exists a polynomial function \(V(z)\) satisfying the following conditions_
\[\begin{split} V(z)-\rho(z)&\in\Sigma[z],\\ \rho(z)&>0,\\ -\frac{\partial V}{\partial z}(z)f(z,\pi(z))-\sum_{k=1}^{n_{d}} p_{k}(X)d_{k}(z)-\sum_{j=1}^{q_{2}}t_{j}(X)h_{j}(x)-\sum_{i=1}^{q_{1}}s_{i}(X)g _{i}(x)\in\Sigma[X],\\ p_{k}(X)&\in\Sigma[X],\;\forall k=1,\ldots,n_{d},\\ s_{i}(X)&\in\Sigma[X],\;\forall i=1,\ldots,q_{1},\\ t_{j}(X)&\in\mathbb{R}[X],\;\forall j=1,\ldots,q_{2}, \end{split} \tag{8}\]
_where \(X\) is a vector of all the system and NN states, i.e. \(X=(x,z)\). Then the equilibrium of the NFL is stable._
If a Lyapunov function is constructed using Proposition 2 then the region of attraction can be approximated. To achieve this we find the largest level set of the Lyapunov function \(V(z)\) that is contained within the region that the Lyapunov conditions are satisfied. This can be cast as an SOS program. Consider a Lyapunov function \(V(z)\), if the SOS optimisation problem
\[\begin{split}|z|^{k}(V(z)-\gamma)+p(z)d(z)&\in \Sigma[z],\\ p(z)&\in\Sigma[z],\end{split} \tag{9}\]
is feasible, where \(\gamma\) is a variable to be maximised and \(k\) is a positive integer, then \(V(z)\leq\gamma\) is an estimate of the region of attraction.
## 3 Rational Neural Networks
There has been little prior work investigating NNs with rational expressions for control. [20] showed the expressive power of rational activation functions and how they can be used to approximate commonly used activation functions such as ReLU. The effectiveness of rational NNs in machine learning tasks was shown in [39]. Polynomial activation functions have also been used in previous work [40, 41, 42, 43, 44]; however, they have not seen much investigation within the machine learning research community due to the vanishing and exploding gradients that can be exhibited when used with the backpropagation algorithm [45]. Another class of NNs that has seen recent interest is quadratic NNs. These can take many different forms, which are summarised in [46]. Quadratic NNs have been shown to be beneficial in many respects [47, 22, 48], such as being general universal function approximators. Recent work from [49] has explored the use of a two-layer NN controller with quadratic activation functions and how a convex formulation can be achieved using this structure.
One of the benefits of using polynomial or rational activation functions in the NN is that they can contain trainable parameters, which can improve the performance of the NN and reduce the number of neurons in the network. For example, the rational activation function used in [39] is expressed as
\[\phi(x)=\frac{P(x)}{Q(x)}=\frac{\sum_{i=0}^{r_{P}}a_{i}x^{i}}{\sum_{j=0}^{r_{Q}}b_{j}x^{j}}, \tag{10}\]
where \(r_{P}=\deg(P(x))\), \(r_{Q}=\deg(Q(x))\) and \(a_{i}\) and \(b_{j}\) are the trainable parameters within the activation function. Another proposed activation function based on rational functions is the Pade activation unit, which has shown to be useful when applied to image classification [50]. This activation function is expressed as
\[\phi(x)=\frac{P(x)}{Q(x)}=\frac{\sum_{i=0}^{r_{P}}a_{i}x^{i}}{1+|\sum_{j=0}^{r_{Q}}b_{j}x^{j}|}\]
and can be used to learn and approximate commonly used activation functions, whilst training the NN. The resulting NNs can have compact representations and can perform similarly to state-of-the-art NNs.
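A small sketch of evaluating the trainable rational activation of (10), with illustrative coefficients; a practical implementation would also need to guard against real zeros of \(Q(x)\). The coefficients below reproduce \(4x/(x^{2}+4)\), the Rtanh function introduced in the next subsection, as a special case.

```python
import numpy as np

def rational_activation(x, a, b):
    # Eq. (10): phi(x) = P(x)/Q(x) with trainable coefficients a_i, b_j.
    P = sum(ai * x**i for i, ai in enumerate(a))
    Q = sum(bj * x**j for j, bj in enumerate(b))
    return P / Q

a = [0.0, 4.0]        # P(x) = 4x
b = [4.0, 0.0, 1.0]   # Q(x) = 4 + x^2, which has no real zeros
x = np.linspace(-5, 5, 11)
print(rational_activation(x, a, b))
```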
### Rational Approximation of Tanh Activation Function
Despite their success in some machine learning applications, rational activation functions have yet to be applied to NN controllers. Most methods to train NNs in control systems use traditional activation functions such as ReLU, sigmoid and tanh. In addition, NNs used in control systems are often significantly smaller than those used for machine learning tasks such as image classification. The reason is that methods to train NNs for control often involve an SDP, meaning that a large number of parameters in the NN makes the problem intractable. However, whether an NN containing only a small number of neurons can learn a function that sufficiently controls a system has not been well explored. One justification is the low number of input and output dimensions required for small NFLs. However, if the number of system states increases, then larger networks may be required, which may be intractable to obtain with current methods.
For NNs to be effectively used as controllers, it would be of interest to investigate the use of alternative NN structures and activation functions that may give a sufficient level of expressivity in the network, whilst being easier to compute or ensure robustness guarantees. Motivated by this we propose a novel activation function defined by a simple rational expression. We name this function 'Rtanh' as it is an approximation of the tanh activation function
\[\tanh(x)=\phi(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}\]
and is defined as
\[\mathrm{Rtanh}(x)=\phi(x)=\frac{4x}{x^{2}+4}.\]
This function is shown in Figure 2(a) against the tanh function, and the error between these functions is shown in Figure 2(b).
The tanh function can be over-approximated by sector constraints, as demonstrated in [29, 30, 31]. These constraints can then be used to analyse the NFL by creating an optimisation problem through an SDP or SOS programming. However, these bounds are very conservative, as shown in Figure 1(b). A big advantage of the 'Rtanh' function is that it can be represented by the single equality constraint
\[\phi(x)(x^{2}+4)-4x=0. \tag{11}\]
If this activation function were to be used in an NN, then to test its robustness properties using the Psatz, the equality constraint (11) can be used directly. This is beneficial as no conservatism is introduced when abstracting the input-output properties of the NN using a semi-algebraic set.
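The claims above are easy to check numerically. The following short sketch, our own check rather than part of [25] or [51], verifies that Rtanh satisfies the equality constraint (11) up to floating-point error and measures its gap to tanh on an interval of our choosing.

```python
import numpy as np

rtanh = lambda x: 4 * x / (x**2 + 4)

x = np.linspace(-2, 2, 401)
residual = rtanh(x) * (x**2 + 4) - 4 * x       # left-hand side of Eq. (11)
print(np.max(np.abs(residual)))                # ~0 up to floating-point error

print(np.max(np.abs(rtanh(x) - np.tanh(x))))   # worst-case gap to tanh on [-2, 2]
```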
To demonstrate that the Rtanh activation function is useful when analysing NFLs, we take an existing NFL that uses tanh activation functions. We consider the inverted pendulum from [24], which uses a five layer NN with five nodes in each layer and tanh activation functions. The dynamics of this system are expressed as
\[\ddot{\theta}(t)=\frac{mgl\sin\theta(t)-\mu\dot{\theta}(t)+\text{sat}(u(t))}{ml^ {2}}, \tag{12}\]
where \(\text{sat}(\cdot)\) is the saturation function. The system is discretised with time step \(\Delta t=0.2\) and is parameterised by \(m=0.15\text{kg}\), \(l=0.5\text{m}\), \(\mu=0.5\text{Nms/rad}\), \(g=10\text{m/s}^{2}\) and \(u_{\text{max}}=1\), where \(u_{\text{max}}\) is the saturation limit of the saturation function. As shown in [51], we can use 'ReachSparsePsatz', an SOS optimisation technique, to approximate the reachable set at each time step. Using the tanh activation function as in the original NN controller, the reachable sets can be computed and are shown in Figure 3.
We then replace the tanh function with Rtanh and observe the system behaviour. We conduct the same reachability analysis as in [51] by computing the reachable sets for six time steps. In Figure 4 we can see that this activation function behaves similarly to the tanh function and that using the Psatz with sparse polynomial optimisation (ReachSparsePsatz) gives very tight approximations of the reachable sets.
We also compute the region of attraction using the approach outlined in Section 2.2, referred to as 'NNSOSStability'. The regions of attraction when using the tanh and Rtanh functions are shown in Figure 5 and Figure 6, respectively. We find that the region of attraction is increased significantly when using Rtanh over the tanh activation function: the areas of the regions of attraction on the phase plane are 58 and 0.94 square units for the Rtanh and tanh activation functions, respectively.

Figure 3: Reachable sets for the inverted pendulum in (12) with the tanh activation function. The reachable sets are computed using ReachSparsePsatz and are shown in black. The true reachable sets are shown in blue.

Figure 2: Plots showing a comparison between the \(\mathrm{Rtanh}(x)\) and \(\tanh(x)\) functions.
These results show that the Rtanh function can be useful in NNs and NFLs. However, this function is not compatible with most recent methods to learn NN controllers, since it is represented by an equality constraint that is a polynomial of degree three. Indeed, most methods require sector inequality constraints and the Schur complement, which is used to make the problem convex. Therefore, this function cannot be used in those formulations. This requires an alternative method to obtain an NN controller with these activation functions, which we investigate later in this paper.

Figure 4: Reachable sets for the inverted pendulum in (12) where the tanh activation function is replaced with the Rtanh function. The reachable sets are computed using ReachSparsePsatz and are shown in black. The true reachable sets are shown in blue.

Figure 5: Diagram showing the region of attraction for the inverted pendulum in (12) with tanh activation functions in the neural network controller. The region of attraction is computed using NNSOSStability (black) with a fourth order Lyapunov function. The system trajectories are shown in blue.
### Rational Approximation of Sigmoid Activation Function
It is also possible to approximate the sigmoid function as a rational function. We define the function
\[\mathrm{Rsig}(x)=\phi(x)=\frac{(x+4)^{2}}{2(x^{2}+16)}.\]
As shown in Figure 7(a), this is a good approximation to the sigmoid function. Figure 7(b) shows the error with respect to the sigmoid function. Rsig can be expressed as the equality constraint
\[2(x^{2}+16)\phi(x)-(x+4)^{2}=0.\]
Figure 6: Diagram showing the region of attraction for the inverted pendulum in (12) where the tanh activation function is replaced with the Rtanh function. The region of attraction is computed using NNSOSStability (black) with a fourth order Lyapunov function. The system trajectories are shown in blue.
Figure 7: Plots showing a comparison between the \(\mathrm{Rsig}(x)\) and \(\mathrm{sig}(x)\) functions.
### Irrational Approximation of ReLU Activation Function
Approximating the ReLU function with a rational function is more difficult. Here we consider the class of irrational activation functions; to demonstrate how they could be used we propose the following function
\[\mathrm{IReLU}(x)=\phi(x)=\sqrt{x^{2}+1}+x-1.\]
This function and the error with respect to the ReLU function are shown in Figure 8(a) and Figure 8(b) respectively. By making a substitution, the IReLU function can be expressed as a semi-algebraic set with two equality constraints and one inequality constraint
\[\phi(x)-y-x+1 =0,\] \[y^{2}-x^{2}-1 =0,\] \[y \geq 0.\]
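The quality of these approximations can be checked numerically. The following Python sketch evaluates Rsig and IReLU against the sigmoid and ReLU functions on an interval chosen for illustration, and also verifies the defining equality constraint for Rsig.

```python
import numpy as np

def rsig(x):   # rational approximation of the sigmoid from this section
    return (x + 4)**2 / (2 * (x**2 + 16))

def irelu(x):  # irrational approximation of ReLU
    return np.sqrt(x**2 + 1) + x - 1

sigmoid = lambda x: 1 / (1 + np.exp(-x))
relu = lambda x: np.maximum(x, 0)

x = np.linspace(-4, 4, 2001)   # illustrative interval
print("max |Rsig - sig|   on [-4,4]:", np.max(np.abs(rsig(x) - sigmoid(x))))
print("max |IReLU - ReLU| on [-4,4]:", np.max(np.abs(irelu(x) - relu(x))))
# the equality constraint 2(x^2+16)phi(x) - (x+4)^2 = 0 holds identically
print("equality residual:", np.max(np.abs(2*(x**2+16)*rsig(x) - (x+4)**2)))
```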
### General Rational Neural Networks
The rational approximations of the tanh and sigmoid functions proposed in Section 3 can be used as activation functions in an NN. However, if we were to use these structures we could be missing the potential expressivity that can be obtained from a general class of rational functions. As in (10), rational activation functions can contain training parameters. Therefore, instead of considering a rigid structure for the activation function and fixing the parameters, we can consider a general rational expression that contains the NN parameters.
If we were to consider a feed-forward fully connected NN with predefined rational activation functions, then the NN can be written as
\[x^{0} =u, \tag{13}\] \[v^{k} =W^{k}x^{k}+b^{k},\;\mathrm{for}\;k=0,\dots,\ell-1,\] \[x^{k+1} =\frac{p(v^{k})}{q(v^{k})},\;\mathrm{for}\;k=0,\dots,\ell-1,\] \[\pi(u) =W^{\ell}x^{\ell}+b^{\ell},\]
where \(p(v^{k})\) and \(q(v^{k})\) are polynomial functions with specified coefficients. However, if we substitute the preactivation value \(v^{k}_{j}\) into the polynomial expressions, we obtain a rational expression in \(x^{k}_{j}\), where the coefficients are parameterised by the values of the weight matrices \(W^{k}\) and biases vector \(b^{k}\). Any coefficients in the rational activation function will be multiplied by the weights and bias terms. Therefore, we can instead consider an NN with no affine transformation and parameters only in the rational activation function. We can then write the rational NN as
\[x^{0} =u, \tag{14}\] \[x^{k+1}_{i} =\frac{p_{i}(x^{k})}{q_{i}(x^{k})},\;\mathrm{for}\;i=0,\dots,n_{k},\;\mathrm{for}\;k=0,\dots,\ell-1,\] \[\pi_{i}(u) =\frac{p_{i}(x^{\ell})}{q_{i}(x^{\ell})},\;\mathrm{for}\;i=0, \dots,n_{\ell},\]
Figure 8: Plots showing a comparison between the \(\mathrm{IReLU}(x)\) and \(\mathrm{ReLU}(x)\) functions.
where \(p_{i}(x^{k})\) and \(q_{i}(x^{k})\) are general polynomial functions which can be written as
\[p_{i}(x) =\sum_{\alpha\in\mathbb{N}^{n}_{d_{p_{i}}}}\lambda_{\alpha}x^{\alpha},\] \[q_{i}(x) =\sum_{\beta\in\mathbb{N}^{n}_{d_{q_{i}}}}\gamma_{\beta}x^{\beta},\]
where \(d_{q_{i}}=\deg(q_{i})\), \(d_{p_{i}}=\deg(p_{i})\), \(\alpha\) and \(\beta\) are the exponents that are defined in Section 2.1 and \(\lambda_{\alpha}\) and \(\gamma_{\beta}\) are the coefficients of \(p_{i}(x)\) and \(q_{i}(x)\) respectively. To show the similarity between (13) and (14) we present the following simple example.
**Example 1**.: _Consider an NN with structure defined by (13) with two layers and two nodes in each layer. The rational activation functions with \(\deg(p)=\deg(q)=2\) can be written as_
\[p(v) =c_{1}v^{2}+c_{2}v+c_{3},\] \[q(v) =d_{1}v^{2}+d_{2}v+d_{3}.\]
_The first node in the second layer can be expressed as_
\[v_{1}^{1} =W_{1}^{1}x_{1}^{1}+W_{2}^{1}x_{2}^{1}+b_{1}^{1},\] \[x_{1}^{2} =\frac{p(v_{1}^{1})}{q(v_{1}^{1})}.\]
_We can substitute in the preactivation terms into the activation functions to obtain the polynomials_
\[p(v_{1}^{1}) =c_{1}(W_{1}^{1})^{2}(x_{1}^{1})^{2}+2c_{1}W_{1}^{1}W_{2}^{1}x_{1}^{1}x_{2}^{1}+c_{1}(W_{2}^{1})^{2}(x_{2}^{1})^{2}+\] \[(2c_{1}W_{1}^{1}b_{1}^{1}+c_{2}W_{1}^{1})x_{1}^{1}+(2c_{1}W_{2}^{1}b_{1}^{1}+c_{2}W_{2}^{1})x_{2}^{1}+(c_{1}(b_{1}^{1})^{2}+c_{2}b_{1}^{1}+c_{3}),\]

\[q(v_{1}^{1}) =d_{1}(W_{1}^{1})^{2}(x_{1}^{1})^{2}+2d_{1}W_{1}^{1}W_{2}^{1}x_{1}^{1}x_{2}^{1}+d_{1}(W_{2}^{1})^{2}(x_{2}^{1})^{2}+\] \[(2d_{1}W_{1}^{1}b_{1}^{1}+d_{2}W_{1}^{1})x_{1}^{1}+(2d_{1}W_{2}^{1}b_{1}^{1}+d_{2}W_{2}^{1})x_{2}^{1}+(d_{1}(b_{1}^{1})^{2}+d_{2}b_{1}^{1}+d_{3}),\]
_which form the rational expression for \(x_{1}^{2}\). However, we can instead write the rational expression as_
\[x_{1}^{2}=\frac{\lambda_{1}(x_{1}^{1})^{2}+\lambda_{2}x_{1}^{1}x_{2}^{1}+ \lambda_{3}(x_{2}^{1})^{2}+\lambda_{4}x_{1}^{1}+\lambda_{5}x_{2}^{1}+\lambda_ {6}}{\gamma_{1}(x_{1}^{1})^{2}+\gamma_{2}x_{1}^{1}x_{2}^{1}+\gamma_{3}(x_{2}^ {1})^{2}+\gamma_{4}x_{1}^{1}+\gamma_{5}x_{2}^{1}+\gamma_{6}},\]
_which is in the form of (14). This can be generalised to larger NNs with higher degree polynomials in the rational functions. This approach can reduce the number of parameters in the NN and each polynomial is convex in the decision variables. This allows the parameters in the rational expression to be tuned directly instead of simultaneously tuning the weights, biases and rational activation parameters._
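As a rough illustration of the structure in (14), the following Python sketch evaluates one rational layer whose numerator and denominator are general degree-two polynomials in the previous layer's outputs, as in the example above. The coefficient values are arbitrary placeholders; in practice they are the decision variables tuned by the optimisation.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomials(x, degree):
    """All monomials of x = (x_1, ..., x_n) up to the given total degree."""
    feats = [1.0]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(x)), d):
            feats.append(np.prod([x[i] for i in idx]))
    return np.array(feats)

def rational_layer(x, lambdas, gammas, degree=2):
    # each node i is a ratio of two polynomials in the previous layer's outputs
    phi = monomials(x, degree)
    return np.array([l @ phi for l in lambdas]) / np.array([g @ phi for g in gammas])

rng = np.random.default_rng(0)
n_mono = len(monomials(np.zeros(2), 2))    # 6 monomials for n = 2, degree 2
lam = rng.normal(size=(2, n_mono))         # placeholder numerator coefficients
gam = rng.normal(size=(2, n_mono))         # placeholder denominator coefficients
gam[:, 0] = 5.0                            # keep denominators away from zero

print("layer output:", rational_layer(np.array([0.3, -0.1]), lam, gam))
```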
## 4 Recovering Stabilising Controllers using Sum of Squares
In this section, we propose a novel procedure to obtain a stabilising controller for a non-linear polynomial system using SOS programming. To do this we leverage the Psatz and exploit its structure to generate a feasibility test for a stabilising controller.
**Proposition 3**.: _Consider the polynomial \(P(x)\in\mathbb{R}[x]\) and partition \(x=[y,z]\) so that \(y\in\mathbb{R}^{n},z\in\mathbb{R}\). Consider the set_
\[S=\{y\in\mathbb{R}^{n},\;z\in\mathbb{R}\;|\;p_{i}(y,z)-q_{i}(y)z=0\;\;\forall\; i=1,\ldots,m\},\]
_where \(p_{i}(y,z)\in\mathbb{R}[x]\), \(q_{i}(y)\in\mathbb{R}[y]\). If_
\[P(x)-\sum_{i=1}^{m}(p_{i}(y,z)-q_{i}(y)z)\in\Sigma[x],\;q_{i}(y)\neq 0,\; \forall\;i=1,\ldots,m, \tag{15}\]
_then \(P(x)\geq 0\) on the set_
\[T=\bigg{\{}y\in\mathbb{R}^{n},\;z\in\mathbb{R}\;\bigg{|}\;\frac{p_{i}(y,z)}{q _{i}(y)}-z=0,\;q_{i}(y)\neq 0\;\;\forall\;i=1,\ldots,m\bigg{\}}.\]
Proof.: Consider the set \(S\), if
\[P(x)-\sum_{i=1}^{m}t_{i}(p_{i}(y,z)-q_{i}(y)z)\in\Sigma[x],\]
where the multipliers are fixed to \(t_{i}=1\) (which is valid since \(t_{i}\in\mathbb{R}[x]\)), then \(P(x)\geq 0\) on \(S\). If we include the condition \(q_{i}(y)\neq 0,\;\forall\;i=1,\ldots,m\), then we can rewrite (15) as
\[P(x)-\sum_{i=1}^{m}q_{i}(y)\bigg{(}\frac{p_{i}(y,z)}{q_{i}(y)}-z\bigg{)}\in \Sigma[x],\;q_{i}(y)\neq 0,\;\forall\;i=1,\ldots,m,\]
since \(q_{i}(y)\in\mathbb{R}[x]\), then \(P(x)\geq 0\) on \(T\).
Proposition 3 considers the Psatz in a particular form to obtain a positivity certificate of a function over a set of rational functions. In essence, this formulation fixes the multipliers in the Psatz to unity and then searches over the polynomials to find a constraint set that satisfies the feasibility test. This is useful if we want to find a constraint set for the nonnegativity of a function, instead of testing that function over a known constraint set. This can be leveraged to find a controller that satisfies the Lyapunov stability conditions.
**Proposition 4**.: _Consider the non-linear system_
\[\dot{z} =f(z)+g(z)u, \tag{16}\] \[u =\frac{p(z)}{q(z)},\;q(z)\neq 0,\]
_where \(z\in\mathbb{R}^{n_{x}}\) are the system states, \(u\in\mathbb{R}^{n_{u}}\) is the controller input and \(f(z)\) and \(g(z)\) are polynomials. Suppose that there exists a Lyapunov function \(V(z)\) such that \(V(z)\) is positive definite in a neighbourhood of the origin and polynomials \(p(z),\;q(z)\) satisfying_
\[-\frac{\partial V}{\partial z}(f(z)+g(z)u)-(p(z)-q(z)u)\geq 0\;\forall z,u.\]
_Then the origin of the state space is a stable equilibrium of the system._
Proof.: We consider the stability of constrained dynamical systems as in [33]. Using the same argument as in Proposition 3, if
\[-\frac{\partial V}{\partial z}(f(z)+g(z)u),\]
is nonnegative on the set
\[\{z\in\mathbb{R}^{n_{z}},\;u\in\mathbb{R}^{n_{u}}\;|\;p(z)-q(z)u=0,\;q(z)\neq 0\},\]
then it is also nonnegative on the set
\[\bigg{\{}z\in\mathbb{R}^{n_{z}},\;u\in\mathbb{R}^{n_{u}}\;\bigg{|}\;\frac{p(z )}{q(z)}-u=0,\;q(z)\neq 0\bigg{\}},\]
which defines the controller in the closed-loop system.
To compute the Lyapunov function and polynomials that define the rational controller in Proposition 4 we can formulate an SOS program.
**Proposition 5**.: _Consider the dynamical system (16) in Proposition 4. Suppose there exists polynomial functions \(V(z),\;p(z),\;q(z)\), a positive definite function \(\rho(z)\) such that_
\[V(z)-\rho(z) \in\Sigma[z],\] \[-\frac{\partial V}{\partial z}(f(z)+g(z)u)-(p(z)-q(z)u) \in\Sigma[X],\] \[p(z) \in\mathbb{R}[z],\] \[q(z) \in\mathbb{R}[z],\] \[q(z) \neq 0,\]
_where \(X=(z,u)\) is a vector of all of the states. Then the origin of the system is a stable equilibrium._
Proposition 5 allows us to reconstruct a stabilising controller from a feasibility test using SOS programming. However, the structure of the controller is limited as it is a simple rational function. We therefore extend this approach to a more expressive class of functions through rational NNs.
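As a numerical illustration of Propositions 4 and 5, the sketch below checks the Lyapunov decrease condition by sampling, for the hypothetical choices \(V(z)=z^{2}\), \(p(z)=-2z\) and \(q(z)=1\) applied to the scalar plant \(\dot{z}=z+u\); an SOS solver would instead certify the inequality algebraically.

```python
import numpy as np

# Sampling-based sanity check of the decrease condition in Proposition 4,
# for hypothetical V, p, q; not a proof, just an illustration.
f = lambda z: z          # drift
g = lambda z: 1.0        # input map
V_z = lambda z: 2 * z    # dV/dz for V(z) = z^2
p = lambda z: -2 * z
q = lambda z: 1.0

zs = np.linspace(-10, 10, 1001)
u = p(zs) / q(zs)                      # the constraint set u = p(z)/q(z)
lie = V_z(zs) * (f(zs) + g(zs) * u)    # dV/dt along the closed loop = -2 z^2
print("max dV/dt over the sampled set:", lie.max())   # expect <= 0
```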
### Extension to Rational Neural Network Controllers
The technique outlined in the previous section can be used to recover a stabilising controller that is a rational function of the system states. However, we can expand this approach to consider an NN architecture that contains rational functions similar to the one proposed in (14). We consider a state feedback controller \(u(t)=\pi(z(t)):\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}^{n_{u}}\) as a rational NN such that
\[\begin{split} x^{0}(t)&=z(t),\\ x_{i}^{k+1}(t)&=\frac{p_{i}^{k}(x^{k}(t))}{q_{i}^{ k}(x^{k}(t))},\,\text{for}\ i=1,\ldots,n_{k+1},\ k=0,\ldots,\ell-1,\\ u_{i}(t)=\pi_{i}(z(t))&=\frac{p_{i}^{\ell}(x^{ \ell}(t))}{q_{i}^{\ell}(x^{\ell}(t))},\,\text{for}\ i=1,\ldots,n_{u},\end{split} \tag{17}\]
where \(p_{i}^{k}(x^{k}(t))\), \(q_{i}^{k}(x^{k}(t))\) are the polynomials that form the rational expression associated with the \(i^{\text{th}}\) node in the \((k+1)^{\text{th}}\) layer. The number of neurons in the \(k^{\text{th}}\) layer is denoted by \(n_{k}\). We will drop the time dependence notation throughout the rest of this paper for simplicity.
We can apply Proposition 5 and the theory of constrained dynamical systems as in [33] due to the well-defined structure of this controller. The following proposition shows how Lyapunov stability over constrained dynamical systems can be used to recover a controller of this form.
**Proposition 6**.: _Consider the non-linear system_
\[\begin{split}\dot{z}&=f(z)+g(z)u,\\ u&=\pi(z),\end{split}\]
_where \(z\in\mathbb{R}^{n_{x}}\) are the system states, \(u\in\mathbb{R}^{n_{u}}\) is the controller input and \(f(z)\) and \(g(z)\) are polynomials. Consider the controller structure \(\pi(z)\) defined in (17) and the region given by (7). Suppose there exist polynomial functions \(V(z)\), \(p_{i}^{k}(x^{k})\ \forall i=1,\ldots,n_{k+1},\ k=0,\ldots,\ell\), and \(q_{i}^{k}(x^{k})\ \forall i=1,\ldots,n_{k+1},\ k=0,\ldots,\ell\) satisfying the following conditions_
\[\begin{split} V(z)-\rho(z)&\in\Sigma[z],\\ \rho(z)&>0,\\ -\frac{\partial V}{\partial z}(z)(f(z)+g(z)u)&-\sum_ {k=1}^{n_{d}}s_{k}(X)d_{k}(z)\ldots\\ &-\sum_{k=1}^{\ell}\sum_{i=1}^{n_{k}}\left(p_{i}^{k}(x^{k})-q_{i} ^{k}(x^{k})x_{i}^{k+1}\right)\ldots\\ &-\sum_{i=1}^{n_{u}}\left(p_{i}^{\ell}(x^{\ell})-q_{i}^{\ell}(x^{ \ell})u_{i}\right)\in\Sigma[X],\\ s_{k}(X)&\in\Sigma[X],\,\forall k=1,\ldots,n_{d},\\ q_{i}^{k}(x^{k})&\neq 0,\,\forall\,i=1,\ldots,n_{k}, \ k=0,\ldots,\ell,\end{split}\]
_where \(X\) is a vector of all the system and NN states, i.e. \(X=(x,u,z)\). Then the equilibrium of the system is stable._
The above proposition can generate a feasible SOS program, however the rational NN controller may be difficult to recover. This is because the coefficients in the \(q_{i}^{k}(x^{k})\) terms will be set to very small values by the SOS program, due to each term cancelling with the adjacent layers. We therefore propose an alternative rational NN controller structure in the following section to mitigate this issue.
### Refined Rational Neural Network Controller
The Lyapunov condition in Proposition 6 may result in numerical issues when solving the SOS program due to the structure of the constraints, making the rational NN controller difficult to recover. To overcome this issue, we enrich the
NN structure by considering a state feedback controller \(u=\pi(z):\mathbb{R}^{n_{z}}\rightarrow\mathbb{R}^{n_{u}}\) as a rational NN such that
\[\begin{split} y^{0}&=z,\\ x_{i}^{k+1}&=\frac{\sum_{j=1}^{n_{k}}p_{i,j}^{k}(y^{k}) y_{j}^{k}}{q_{i}^{k}(z)},\,\text{for}\,i=1,\dots,n_{k+1},\,k=0,\dots,\ell-1,\\ y_{i}^{k+1}&=x_{i}^{k+1}+1,\,\text{for}\,i=1,\dots,n _{k+1},\,k=0,\dots,\ell-1,\\ u_{i}=\pi_{i}(z)&=\frac{\sum_{j=1}^{n_{\ell}}p_{i,j} ^{\ell}(y^{\ell})y_{j}^{\ell}}{q_{i}^{\ell}(z)}\Big{(}\sum_{m=1}^{n_{z}}z_{m}^{ 2}\Big{)},\,\text{for}\,i=1,\dots,n_{u},\end{split} \tag{18}\]
where \(p_{i,j}^{k}(y^{k})\), \(q_{i}^{k}(z)\) are the \(j^{\text{th}}\) polynomials that form the rational expression associated with the \(i^{\text{th}}\) node in the \((k+1)^{\text{th}}\) layer.
Each \(x_{j}^{k}\) node in the NN contains a rational activation function and each \(y_{j}^{k}\) node is equal to the \(x_{j}^{k}\) term with the addition of a bias term, which we set to unity. The \(y_{j}^{k}\) term that appears in the numerator of the rational activation function ensures that all terms in the \(x_{i}^{k+1}\) node are a function of all of the nodes in the \(k^{\text{th}}\) layer and imposes more structure on the NN. The denominator \(q_{i}^{k}(z)\) is a function of the system states, so that all nodes in the network are tied to the system states and not just the nodes in the previous layer. This ensures that the SOS program does not set the coefficients in the \(q_{i}^{k}(z)\) terms to very small values. The final layer contains a multiplier term, the sum of squares of all of the system states \(\sum_{m=1}^{n_{z}}z_{m}^{2}\), to ensure that the controller input goes to zero at the origin.
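A minimal forward-pass sketch of the refined structure (18) is given below for two states and one hidden layer with two nodes, with first-order polynomials \(p_{i,j}\) and denominators \(q_{i}(z)\) kept strictly positive. All coefficient values are placeholders for the decision variables of the SOS program.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.normal(size=(2, 2, 3))   # placeholder: p_{i,j}(y) = a0 + a1*y1 + a2*y2
Qc = np.array([1.0, 2.0])

def q(i, z):                     # q_i(z) > 0 for all z, so it is never zero
    return Qc[i] * (1.0 + z @ z)

def layer(y, z):
    # x_i^{k+1} = sum_j p_{i,j}(y) y_j / q_i(z), then bias shift y = x + 1
    x_next = np.array([
        sum((P[i, j, 0] + P[i, j, 1:] @ y) * y[j] for j in range(2)) / q(i, z)
        for i in range(2)
    ])
    return x_next + 1.0

z = np.array([0.4, -0.2])
y1 = layer(z, z)                          # y^0 = z, one hidden layer
u = (layer(y1, z) - 1.0) * np.sum(z**2)   # output layer: undo bias, times sum z_m^2
print("controller input u:", u)
```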
We can then adapt Proposition 6 for this modified rational NN structure, to obtain a controller that can be recoverable from the feasibility test.
**Proposition 7**.: _Consider the non-linear system_
\[\begin{split}\dot{z}&=f(z)+g(z)u,\\ u&=\pi(z),\end{split}\]
_where \(z\in\mathbb{R}^{n_{z}}\) are the system states, \(u\in\mathbb{R}^{n_{u}}\) is the controller input and \(f(z)\) and \(g(z)\) are polynomials. Consider the controller structure \(\pi(z)\) defined in (18) and the region given by (7). Suppose there exist polynomial functions \(V(z)\), \(p_{i,j}^{k}(x^{k})\;\forall i=1,\dots,n_{k+1},\;j=1,\dots,n_{k},\;k=0,\dots,\ell\) and \(q_{i}^{k}(z)\;\forall i=1,\dots,n_{k+1},\;k=0,\dots,\ell\) satisfying the following conditions_
\[\begin{split} V(z)-\rho(z)&\in\Sigma[z],\\ \rho(z)&>0,\\ -\frac{\partial V}{\partial z}(z)(f(z)+g(z)u)&- \sum_{k=1}^{n_{d}}s_{k}(X)d_{k}(z)\dots\\ &-\sum_{k=1}^{\ell}\sum_{i=1}^{n_{k}}\left(\Big{(}\sum_{j=1}^{n_ {k-1}}p_{i,j}^{k}(y^{k})y_{j}^{k}\Big{)}-q_{i}^{k}(z)x_{i}^{k+1}\right)\dots \\ &-\sum_{k=1}^{\ell}\sum_{i=1}^{n_{k}}t_{i,k}(X)(y_{i}^{k}-x_{i}^{k }-1)\dots\\ &-\sum_{i=1}^{n_{u}}\bigg{(}\sum_{j=1}^{n_{\ell}}\bigg{(}p_{i,j} ^{\ell}(y^{\ell})y_{j}^{\ell}\bigg{(}\sum_{m=1}^{n_{z}}z_{m}^{2}\bigg{)} \bigg{)}-q_{i}^{\ell}(z)u_{i}\bigg{)}\in\Sigma[X],\\ s_{k}(X)&\in\Sigma[X],\;\forall k=1,\dots,n_{d},\\ t_{i,k}(X)&\in\mathbb{R}[X],\;\forall k=1,\dots,\ell,\;i=1,\dots,n_{k},\\ q_{i}^{k}(z)&\neq 0,\;\forall\;i=1,\dots,n_{k+1},\;k=0, \dots,\ell,\end{split}\]
_where \(X\) is a vector of all the system and NN states, i.e. \(X=(u,x,y,z)\). Then the equilibrium of the system is stable._
The SOS program in the above proposition is convex in the rational NN parameters and can hence be solved using SOS programming. Note that saturation and any uncertainty and robustness conditions can be incorporated in the same way that is demonstrated in [25]. Proposition 7 presents a method to obtain a stabilising NN controller for a non-linear polynomial system in a convex way by solving one SOS optimisation problem. As described in Section 1, other recent approaches such as [27, 28, 29, 30, 31] rely on iterative algorithms or reinforcement learning formulations that are significantly more expensive to compute.
## 5 Numerical Examples
We demonstrate how the approach in Proposition 7 can be used to recover stabilising rational NN controllers through numerical examples. These examples were run on a four-core Intel Xeon processor @3.50GHz with 16GB of RAM. The SOS programs were implemented using MATLAB and SOSTOOLS to parse the SOS constraints into an SDP, which is solved using MOSEK [52].
### One Dimensional System
To show that this method is able to obtain a stabilising rational NN controller, we consider a very simple one dimensional linear system of the form
\[\dot{z}=z+u.\]
This system is unstable without a feedback controller. To recover a stabilising controller we set the size of the rational NN to be a small two layer network with a single node in each layer. The equations of the controller can be written as
\[x_{1} =\frac{\lambda_{1,1}z^{4}+\lambda_{1,2}z^{2}}{\gamma_{1,1}z^{2}+ \gamma_{1,2}},\] \[y_{1} =x_{1}+1,\] \[u =\frac{(\lambda_{2,1}y_{1}^{2}+\lambda_{2,2}y_{1})z}{\gamma_{2,1} z^{2}+\gamma_{2,2}},\]
where \(\gamma_{1,1}\geq 0\), \(\gamma_{1,2}>0\), \(\gamma_{2,1}\geq 0\), \(\gamma_{2,2}>0\). We also add saturation to the controller such that \(-10\leq u\leq 10\) and we enforce the local region of the state space to be \(-10\leq z\leq 10\). The SOS program is able to recover a feasible controller which can stabilise the system to the zero equilibrium. The state trajectory and controller input for this NFL over time are shown in Figures 9 and 10 respectively.
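For illustration, the sketch below simulates this closed loop with hypothetical coefficient values plugged into the recovered controller structure; the actual values returned by the SOS feasibility problem are not reported here.

```python
import numpy as np

# Hypothetical coefficients for the controller structure of Section 5.1;
# chosen only so the sketch runs and stabilises, not the SOS solution.
lam1 = np.array([0.0, -0.5])
lam2 = np.array([0.0, -3.0])
gam1 = np.array([1.0, 1.0])
gam2 = np.array([0.0, 1.0])   # gamma_{2,1} >= 0, gamma_{2,2} > 0

def controller(z):
    x1 = (lam1[0] * z**4 + lam1[1] * z**2) / (gam1[0] * z**2 + gam1[1])
    y1 = x1 + 1.0
    u = (lam2[0] * y1**2 + lam2[1] * y1) * z / (gam2[0] * z**2 + gam2[1])
    return np.clip(u, -10.0, 10.0)    # saturation -10 <= u <= 10

z, dt = 5.0, 1e-3
for _ in range(5000):                 # forward-Euler simulation of zdot = z + u
    z += dt * (z + controller(z))
print("state after 5 s:", z)          # decays towards the origin
```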
### Three Dimensional Non-linear System
We now consider a three dimensional non-linear system given by
\[\dot{z}_{1} =-z_{1}+z_{2}-z_{3},\] \[\dot{z}_{2} =-z_{1}(z_{3}+1)-z_{2},\] \[\dot{z}_{3} =-z_{1}+u,\]
Figure 9: Plot showing the trajectories of the system state in Section 5.1 over time.
and attempt to find a stabilising rational NN controller for the system. We set the NN to have two layers and three nodes in each layer, with saturation \(-10\leq u\leq 10\). The system states are defined to operate in the region
\[1^{2}-z_{1}^{2}-z_{2}^{2}-z_{3}^{2}\geq 0.\]
Each polynomial in the rational NN is assigned to contain zeroth to fourth order terms. We define a quartic Lyapunov function and second order polynomials for the \(s_{k}\) and \(t_{i,k}\) terms. The trajectories for this system are shown in Figure 11, which shows that the controller can successfully stabilise the system.
Figure 11: Plot showing the trajectories of the system states for the example in Section 5.2.
Figure 10: Plot showing the controller input of the system in Section 5.1 over time.
### Non-linear Inverted Pendulum
We now consider the inverted pendulum proposed in [29] with dynamics given by
\[\ddot{\theta}(t)=\frac{mgl\sin{(\theta(t))}-\mu\dot{\theta}(t)+\mathrm{sat}(u(t)) }{ml^{2}}.\]
As shown in [33], we can rewrite the dynamics as a four dimensional polynomial system. We let \(z_{1}=\theta\), \(z_{2}=\dot{\theta}\) and by making the substitution \(z_{3}=\sin(z_{1})\), \(z_{4}=\cos(z_{1})\) the system can be written as
\[\dot{z}_{1} =z_{2},\] \[\dot{z}_{2} =\frac{g}{l}z_{3}-\frac{\mu}{ml^{2}}z_{2}+\frac{1}{ml^{2}}u,\] \[\dot{z}_{3} =z_{2}z_{4},\] \[\dot{z}_{4} =-z_{2}z_{3},\]
where \(m=0.15\,\mathrm{kg},\,l=0.5\,\mathrm{m},\,\mu=0.5\,\mathrm{Nmsrad}^{-1},\,g=9.81 \,\mathrm{m}\mathrm{s}^{-2}\) and the controller input is saturated such that \(-1\leq u\leq 1\). The system also requires the equality constraint
\[z_{3}^{2}+z_{4}^{2}-1=0,\]
to be enforced. We do not define any region of the state space and instead consider global stability. We include the additional robustness constraints on the length of the pendulum to be \(\pm 0.1\) its original length and additive white noise \(w\) on the angular velocity such that \(||w||_{\infty}\leq 0.1\). By making the substitution \(\delta=1/l\) the full dynamical system can be written as
\[\dot{z}_{1} =z_{2},\] \[\dot{z}_{2} =g\delta z_{3}-\frac{\mu\delta^{2}}{m}z_{2}+\frac{\delta^{2}}{m}u +w,\] \[\dot{z}_{3} =z_{2}z_{4},\] \[\dot{z}_{4} =-z_{2}z_{3},\] \[0 =z_{3}^{2}+z_{4}^{2}-1,\] \[0 \leq 1^{2}-u^{2},\] \[0 \leq 0.1^{2}-w^{2},\] \[0 \leq(1/0.4-\delta)(\delta-1/0.6).\]
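As a quick consistency check of this polynomial lifting, the sketch below integrates the uncontrolled lifted system and verifies that the algebraic constraint \(z_{3}^{2}+z_{4}^{2}=1\) is preserved and that \(z_{3}\) tracks \(\sin(z_{1})\).

```python
import numpy as np

# Integrating the lifted pendulum (u = w = 0) and checking the invariant.
m, mu, g, l = 0.15, 0.5, 9.81, 0.5
delta = 1.0 / l

def rhs(z, u=0.0, w=0.0):
    z1, z2, z3, z4 = z
    return np.array([
        z2,
        g * delta * z3 - (mu * delta**2 / m) * z2 + (delta**2 / m) * u + w,
        z2 * z4,
        -z2 * z3,
    ])

z = np.array([0.5, 0.0, np.sin(0.5), np.cos(0.5)])
dt = 1e-4
for _ in range(20000):                  # 2 s of uncontrolled motion
    z = z + dt * rhs(z)                 # forward Euler, for illustration only
print("z3^2 + z4^2 - 1 =", z[2]**2 + z[3]**2 - 1)
print("z3 - sin(z1)    =", z[2] - np.sin(z[0]))
```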
The Lyapunov function must be carefully constructed due to the \(z_{4}\) state being equal to one at the origin. We therefore define the Lyapunov function to be the sum of two Lyapunov functions, the first of which is defined as
\[V_{1}(z_{3},z_{4})=a_{1}z_{3}^{2}+a_{2}z_{4}^{2}+a_{3}z_{4}+a_{4}\]
and the second is quadratic in \(z_{1}\) and \(z_{2}\). To ensure that the Lyapunov function is zero at the origin we must enforce
\[a_{2}+a_{3}+a_{4}=0.\]
To ensure that the Lyapunov function is positive definite, we define
\[\rho(z)=\epsilon_{1}z_{1}^{2}+\epsilon_{2}z_{2}^{2}+\epsilon_{3}(1-z_{4}),\]
where \(\epsilon_{1}\geq 0.1\), \(\epsilon_{2}\geq 0.1\), \(\epsilon_{3}\geq 0.1\).
The rational NN controller is a two layer network with four nodes in each layer. By setting the polynomials in the network to contain zeroth to fourth order terms, we are able to recover a controller. The trajectories for this NFL are shown in Figure 12. We can see that the controller initially drives the system states towards a manifold and then moves them towards the equilibrium. Since the control system is discontinuous at the manifold, further analysis is required to show stability of the system.
## 6 Conclusion
In this paper, we analyse the use of rational NNs in previous application areas. We present novel rational activation functions to approximate the traditional sigmoid and tanh functions and show how they can be used in robustness
problems for NFLs. We argue that rational activation functions can be replaced with a general rational NN structure where each layer is convex in the NN's parameters. We then propose a method to recover a stabilising controller from a feasibility test and extend this approach to rational NNs. This structure is refined to make it more compatible with SOS programming. Through several numerical examples we show how this approach can be used to recover stabilising rational NN controllers for NFLs with non-linear plants subject to noise and parametric uncertainty.
## Acknowledgements
This work was supported by EPSRC grants EP/L015897/1 (to M. Newton) and EP/M002454/1 (to A. Papachristodoulou) and the Tony Corner Research Fund.
|
2301.13262 | Temporal Consistency Loss for Physics-Informed Neural Networks | Physics-informed neural networks (PINNs) have been widely used to solve
partial differential equations in a forward and inverse manner using deep
neural networks. However, training these networks can be challenging for
multiscale problems. While statistical methods can be employed to scale the
regression loss on data, it is generally challenging to scale the loss terms
for equations. This paper proposes a method for scaling the mean squared loss
terms in the objective function used to train PINNs. Instead of using automatic
differentiation to calculate the temporal derivative, we use backward Euler
discretization. This provides us with a scaling term for the equations. In this
work, we consider the two and three-dimensional Navier-Stokes equations and
determine the kinematic viscosity using the spatio-temporal data on the
velocity and pressure fields. We first consider numerical datasets to test our
method. We test the sensitivity of our method to the time step size, the number
of timesteps, noise in the data, and spatial resolution. Finally, we use the
velocity field obtained using Particle Image Velocimetry (PIV) experiments to
generate a reference pressure field. We then test our framework using the
velocity and reference pressure field. | Sukirt Thakur, Maziar Raissi, Harsa Mitra, Arezoo Ardekani | 2023-01-30T20:10:19Z | http://arxiv.org/abs/2301.13262v1 | # Temporal Consistency Loss for Physics-Informed Neural Networks
###### Abstract
Physics-informed neural networks (PINNs) have been widely used to solve partial differential equations in a forward and inverse manner using deep neural networks. However, training these networks can be challenging for multiscale problems. While statistical methods can be employed to scale the regression loss on data, it is generally challenging to scale the loss terms for equations. This paper proposes a method for scaling the mean squared loss terms in the objective function used to train PINNs. Instead of using automatic differentiation to calculate the temporal derivative, we use backward Euler discretization. This provides us with a scaling term for the equations. In this work, we consider the two and three-dimensional Navier-Stokes equations and determine the kinematic viscosity using the spatio-temporal data on the velocity and pressure fields. We first consider numerical datasets to test our method. We test the sensitivity of our method to the time step size, the number of timesteps, noise in the data, and spatial resolution. Finally, we use the velocity field obtained using Particle Image Velocimetry (PIV) experiments to generate a reference pressure field. We then test our framework using the velocity and reference pressure field.
_Keywords--_ Physics-informed neural networks, Deep learning, Inverse modelling
## 1 Introduction
Physics-informed neural networks [1, 2] (PINNs) have become a popular method for solving a wide range of forward and inverse problems. While traditional deep learning methods are data intensive and do not consider the physics of the problem, PINNs leverage the prior information that we have in the form of governing partial differential equations (PDEs). Using the governing equations to regularize the optimization of parameters in PINNs allows us to train large networks with small datasets. This
proves handy for problems in biological and engineering systems, as collecting data can be tedious and expensive.
Augmentations to PINNs can be made in five dimensions: 1) More complex physics, 2) more complex geometries, 3) better loss functions, 4) better architectures, and 5) better training processes. While PINNs have been used to solve a whole range of multiphysics problems [3, 4, 5], there has been much interest in deploying PINNs to tackle problems in fluid mechanics [6, 7, 8, 9]. PINN-based frameworks have been used to model high-speed aerodynamic flows [10], porous media flows [11], and biomedical flows [12]. Recently, PINNs have been used to solve non-Newtonian and complex fluid systems [13, 14].
While vanilla feed-forward neural networks remain the most popular architecture, PINNs have been extended to use multiple feed-forward networks [15, 16], convolution neural networks [17, 18], recurrent neural networks [19, 20], and Bayesian neural networks [21]. However, there are challenges associated with training PINNs. It is not straightforward to train PINNs with "stiff" PDEs and multiscale problems. There have been numerous efforts to tackle the problem of assigning relative weights to the different objectives. Apart from assigning relative weights through trial and error, the methods include learning rate annealing [22], minmax weighting [23], using the eigenvalues of the neural tangent kernel matrix [24] and using the soft self-attention mechanism [25].
In this work, we focus on a better loss function and training process for PINNs. We leverage the scales we have in the observed data to obtain the relative scales of the loss terms. We use backward Euler discretization for time stepping instead of automatic differentiation for the temporal derivative. Other discrete schemes could be used as well; we focus on backward Euler discretization in this work without any loss of generality. This allows us to use the scale in the observed data to scale the governing equations.
In this work, we consider the two-dimensional and three-dimensional Navier-Stokes equations which govern fluid flows. We obtain the viscosity using the velocity and pressure fields as the observations. We test the sensitivity and robustness of our method to the time step size, the number of timesteps, noise in the data, and spatial resolution for a numerical dataset in section 3.1. Here, the number of timesteps refers to the number of discrete time slices we randomly sample from. As our method works robustly, we benchmark our method against the experimental dataset of Particle Image Velocimetry (PIV) observations in section 3.2. Finally, we provide some concluding remarks and discuss the future scope of our work in section 4.
## 2 Methodology
### Fluid Governing Equations
The conservation of mass for an incompressible fluid is given by
\[\nabla\cdot\mathbf{u}=0, \tag{1}\]
where \(\mathbf{u}\) is the fluid velocity vector. The conservation of momentum of an incompressible Newtonian fluid under isothermal, single-phase, transient conditions in the absence of a body force is given by
\[\left(\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}\right)=-\frac {1}{\rho}\nabla p+\nu\nabla^{2}\mathbf{u}, \tag{2}\]
where \(\rho\) is the density of the fluid, \(\mathbf{u}\) is the velocity vector, \(t\) is the time, \(p\) is the pressure, and \(\nu\) is the kinematic viscosity. Written componentwise, the momentum equation in two dimensions in the \(x\) and \(y\) directions is given by
\[\begin{split}& u_{t}+uu_{x}+vu_{y}=-\frac{1}{\rho}p_{x}+\nu(u_{xx}+u _{yy}),\\ & v_{t}+uv_{x}+vv_{y}=-\frac{1}{\rho}p_{y}+\nu(v_{xx}+v_{yy}), \end{split} \tag{3}\]
Figure 1: We employ a fully connected neural network with eight hidden layers and 128 neurons per hidden layer to learn the viscosity from velocity and pressure fields in two dimensions. The network takes \(t,x,y\) as inputs and outputs the scalar field \(\psi\) and the pressure \(p\). We employ automatic differentiation to compute the losses described in section 2. Here, \(I\) denotes the identity operator, and we compute the differential operators \(\partial x\) and \(\partial y\) using automatic differentiation.
Figure 2: We employ a fully connected neural network with ten hidden layers and 200 neurons per hidden layer to learn the viscosity from velocity and pressure fields in three dimensions. The network takes \(t,x,y,z\) as inputs and outputs the components of the vector field \(\mathbf{\psi}\) and the pressure \(p\). We employ automatic differentiation to compute the losses described in section 2. Here, \(I\) denotes the identity operator, and we compute the differential operators \(\partial x\), \(\partial y\) and \(\partial z\) using automatic differentiation.
where the subscripts denote partial derivatives. Written componentwise, the momentum equation in three dimensions in the \(x\), \(y\) and \(z\) directions is given by
\[u_{t}+uu_{x}+vu_{y}+wu_{z}=-\frac{1}{\rho}p_{x}+\nu(u_{xx}+u_{yy}+ u_{zz}),\] \[v_{t}+uv_{x}+vv_{y}+wv_{z}=-\frac{1}{\rho}p_{y}+\nu(v_{xx}+v_{yy} +v_{zz}), \tag{4}\] \[w_{t}+uw_{x}+vw_{y}+ww_{z}=-\frac{1}{\rho}p_{z}+\nu(w_{xx}+w_{yy} +w_{zz}),\]
### Physics informed neural networks
We define the spatial coordinates in two and three dimensions as \(\mathbf{x}=(x,y)\) and \(\mathbf{x}=(x,y,z).\) We define the velocity field of an incompressible isothermal Newtonian fluid as
\[\mathbf{u}(t,\mathbf{x})=(u(t,\mathbf{x}),v(t,\mathbf{x})), \tag{5}\]
in two dimensions and as
\[\mathbf{u}(t,\mathbf{x})=(u(t,\mathbf{x}),v(t,\mathbf{x}),w(t,\mathbf{x})), \tag{6}\]
in three dimensions. Our observables at the \(N\) spatio-temporal data coordinates \(\{(t_{n},\mathbf{x}_{n}),n=1,\ldots,N\}\) are the corresponding velocity and pressure fields. We define the velocity field in three dimensions as
\[\mathbf{u}=\nabla\times\mathbf{\psi}, \tag{7}\]
where \(\mathbf{\psi}\) is a vector in three dimensions. We define the vector \(\mathbf{\psi}\) with components \(\psi^{1},\psi^{2},\) and \(\psi^{3}\). We get the velocity field as
\[u =\psi^{3}_{y}-\psi^{2}_{z} \tag{8}\] \[v =\psi^{1}_{z}-\psi^{3}_{x}\] \[w =\psi^{2}_{x}-\psi^{1}_{y},\]
where \(u,v\) and \(w\) are the components of velocity in the \(x,y\) and \(z\) directions, respectively. By definition, the velocity field will then satisfy the continuity equation (1). In two dimensions, for the \(x\) and \(y\) components of velocity, we make the assumption that
\[u=\psi_{y},v=-\psi_{x}, \tag{9}\]
for some latent function \(\psi(t,\mathbf{x})\). We approximate the function \((t,x,y)\longmapsto(\psi,p)\) using a deep neural network with parameters \(\theta\) for the two-dimensional case. Here \(p\) denotes the pressure field. For the three-dimensional case, a deep neural network with parameters \(\theta\) was used to approximate the function \((t,x,y,z)\longmapsto(\psi^{1},\psi^{2},\psi^{3},p)\). The schematic for the neural network setup for the two and three-dimensional cases are shown in fig. 1 and 2, respectively. We define the mean squared loss for regression over the velocity and pressure fields in two-dimensions as
\[L_{data}(\theta)= \mathbb{E}_{(t,x,y,u)}[\frac{|u(t,x,y;\theta)-u|^{2}}{{\sigma_{u} }^{2}}]+ \tag{10}\] \[\mathbb{E}_{(t,x,y,v)}[\frac{|v(t,x,y;\theta)-v|^{2}}{{\sigma_{v} }^{2}}]+\] \[\mathbb{E}_{(t,x,y,p)}[\frac{|p(t,x,y;\theta)-p|^{2}}{{\sigma_{p} }^{2}}],\]
and in three-dimensions as
\[\begin{split} L_{data}(\theta)=&\mathbb{E}_{(t,x,y,z,u) }[\frac{|u(t,x,y,z;\theta)-u|^{2}}{{\sigma_{u}}^{2}}]+\\ &\mathbb{E}_{(t,x,y,z,v)}[\frac{|v(t,x,y,z;\theta)-v|^{2}}{{ \sigma_{v}}^{2}}]+\\ &\mathbb{E}_{(t,x,y,z,w)}[\frac{|w(t,x,y,z;\theta)-w|^{2}}{{ \sigma_{w}}^{2}}]+\\ &\mathbb{E}_{(t,x,y,z,p)}[\frac{|p(t,x,y,z;\theta)-p|^{2}}{{ \sigma_{p}}^{2}}],\end{split} \tag{11}\]
where \(\sigma_{u}\), \(\sigma_{v}\) and \(\sigma_{w}\) are the standard deviations of the \(x\), \(y\) and \(z\) components of the reference velocity field, and \(\sigma_{p}\) is the standard deviation of the reference pressure field. Here \(\mathbb{E}\) denotes the expectation approximated by the sample mean (i.e., the mean over the observations \(t_{n},x_{n},y_{n},z_{n},u_{n},v_{n},w_{n},p_{n},\;n=1,\ldots,N\)). Now, considering the momentum equation in two-dimensions (3), we define
\[\begin{split}& g(u,v,p;\nu)=uu_{x}+vu_{y}+p_{x}-\nu(u_{xx}+u_{yy}), \\ & h(u,v,p;\nu)=uv_{x}+vv_{y}+p_{y}-\nu(v_{xx}+v_{yy}),\end{split} \tag{12}\]
and for three-dimensional momentum equation (4), we have
\[\begin{split}& l(u,v,w,p;\nu)=uu_{x}+vu_{y}+wu_{z}+p_{x}-\nu(u_{xx}+u_{ yy}+u_{zz}),\\ & m(u,v,w,p;\nu)=uv_{x}+vv_{y}+wv_{z}+p_{y}-\nu(v_{xx}+v_{yy}+v_{ zz}),\\ & n(u,v,w,p;\nu)=uw_{x}+vw_{y}+ww_{z}+p_{z}-\nu(w_{xx}+w_{yy}+w_{ zz}).\end{split} \tag{13}\]
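Before moving to the discrete-time construction, a quick symbolic check confirms that the curl parameterization in (7)-(8) is divergence-free, so the continuity equation (1) holds by construction. The sketch below uses sympy for this verification.

```python
import sympy as sp

# Symbolic check: div(curl(psi)) = 0, so continuity (1) is automatic.
x, y, z = sp.symbols("x y z")
psi1, psi2, psi3 = [sp.Function(f"psi{i}")(x, y, z) for i in (1, 2, 3)]

u = sp.diff(psi3, y) - sp.diff(psi2, z)
v = sp.diff(psi1, z) - sp.diff(psi3, x)
w = sp.diff(psi2, x) - sp.diff(psi1, y)

div = sp.diff(u, x) + sp.diff(v, y) + sp.diff(w, z)
print(sp.simplify(div))   # prints 0
```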
We now create physics-informed neural networks using backward Euler discretization for the time derivative. Other discrete schemes could be used as well; we focus on backward Euler discretization in this work without any loss of generality. For the two-dimensional case, we have
\[\begin{split} u^{pi}(t,x,y;\Delta t,\theta)=u^{pu}(t+\Delta t,x,y ;\theta)-\Delta t\,g(u^{pu}(t+\Delta t,x,y;\theta)\\ \,v^{pu}(t+\Delta t,x,y;\theta)\\ \,p^{pu}(t+\Delta t,x,y;\theta);\nu),\\ \,v^{pi}(t,x,y;\Delta t,\theta)=v^{pu}(t+\Delta t,x,y;\theta)- \Delta t\,h(u^{pu}(t+\Delta t,x,y;\theta)\\ \,v^{pu}(t+\Delta t,x,y;\theta)\\ \,p^{pu}(t+\Delta t,x,y;\theta);\nu),\end{split} \tag{14}\]
and the three-dimensional physics-informed neural networks are given by
\[\begin{split} u^{pi}(t,x,y,z;\Delta t,\theta)=u^{pu}(t+\Delta t,x,y,z;\theta)-\Delta t\,l(u^{pu}(t+\Delta t,x,y,z;\theta)\\ \,v^{pu}(t+\Delta t,x,y,z;\theta)\\ \,w^{pu}(t+\Delta t,x,y,z;\theta)\\ \,p^{pu}(t+\Delta t,x,y,z;\theta);\nu),\\ v^{pi}(t,x,y,z;\Delta t,\theta)=v^{pu}(t+\Delta t,x,y,z;\theta)- \Delta t\,m(u^{pu}(t+\Delta t,x,y,z;\theta)\\ \,v^{pu}(t+\Delta t,x,y,z;\theta)\\ \,w^{pu}(t+\Delta t,x,y,z;\theta)\\ \,p^{pu}(t+\Delta t,x,y,z;\theta);\nu),\end{split} \tag{15}\]
\[w^{pi}(t,x,y,z;\Delta t,\theta)=w^{pu}(t+\Delta t,x,y,z;\theta)- \Delta t\,n(u^{pu}(t+\Delta t,x,y,z;\theta) \tag{18}\] \[v^{pu}(t+\Delta t,x,y,z;\theta)\] \[w^{pu}(t+\Delta t,x,y,z;\theta)\] \[p^{pu}(t+\Delta t,x,y,z;\theta);\nu),\]
here the superscript pi denotes a physics-informed network and pu denotes a physics-uninformed network. Since the physics-informed and uninformed networks evaluate the velocities at the same point \(t,x,y\), they need to be consistent. We enforce this using a consistency loss
\[L_{consistency}(\theta;\Delta t)= \mathbb{E}_{(t,x,y)}[\frac{|u^{pi}(t,x,y;\Delta t,\theta)-u^{pu} (t,x,y;\theta)|^{2}}{{\sigma_{u}}^{2}}]+ \tag{19}\] \[\mathbb{E}_{(t,x,y)}[\frac{|v^{pi}(t,x,y;\Delta t,\theta)-v^{pu} (t,x,y;\theta)|^{2}}{{\sigma_{v}}^{2}}],\]
in two-dimensions. The consistency loss in three-dimensions is defined as
\[L_{consistency}(\theta;\Delta t)= \mathbb{E}_{(t,x,y,z)}[\frac{|u^{pi}(t,x,y,z;\Delta t,\theta)-u ^{pu}(t,x,y,z;\theta)|^{2}}{{\sigma_{u}}^{2}}]+ \tag{20}\] \[\mathbb{E}_{(t,x,y,z)}[\frac{|v^{pi}(t,x,y,z;\Delta t,\theta)-v^{ pu}(t,x,y,z;\theta)|^{2}}{{\sigma_{v}}^{2}}]+\] \[\mathbb{E}_{(t,x,y,z)}[\frac{|w^{pi}(t,x,y,z;\Delta t,\theta)-w ^{pu}(t,x,y,z;\theta)|^{2}}{{\sigma_{w}}^{2}}].\]
The parameters \(\theta\) are then optimized to minimize the following combined loss
\[L_{MSE}(\theta)=L_{data}(\theta)+L_{consistency}(\theta). \tag{21}\]
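A minimal PyTorch sketch of the two-dimensional temporal consistency loss (equations (14) and (19)) is given below. The network width and depth here are illustrative rather than the exact architecture used in Section 3, and the regression loss \(L_{data}\) is omitted for brevity.

```python
import torch

# Sketch: the network maps (t, x, y) -> (psi, p); velocities come from the
# stream function, spatial derivatives from automatic differentiation, and
# the time derivative is replaced by a backward-Euler step of size dt.
net = torch.nn.Sequential(
    torch.nn.Linear(3, 128), torch.nn.SiLU(),
    torch.nn.Linear(128, 128), torch.nn.SiLU(),
    torch.nn.Linear(128, 2),
)

def grad(f, v):
    return torch.autograd.grad(f, v, torch.ones_like(f), create_graph=True)[0]

def fields(t, x, y):
    out = net(torch.stack([t, x, y], dim=-1))
    psi, p = out[..., 0], out[..., 1]
    u, v = grad(psi, y), -grad(psi, x)          # u = psi_y, v = -psi_x
    return u, v, p

def consistency_loss(t, x, y, dt, nu, sigma_u, sigma_v):
    u0, v0, _ = fields(t, x, y)                  # physics-uninformed at t
    u1, v1, p1 = fields(t + dt, x, y)            # evaluated at t + dt
    g = u1 * grad(u1, x) + v1 * grad(u1, y) + grad(p1, x) \
        - nu * (grad(grad(u1, x), x) + grad(grad(u1, y), y))
    h = u1 * grad(v1, x) + v1 * grad(v1, y) + grad(p1, y) \
        - nu * (grad(grad(v1, x), x) + grad(grad(v1, y), y))
    u_pi, v_pi = u1 - dt * g, v1 - dt * h        # backward-Euler prediction
    return ((u_pi - u0) ** 2).mean() / sigma_u**2 \
         + ((v_pi - v0) ** 2).mean() / sigma_v**2

t = torch.rand(1024, requires_grad=True)
x = torch.rand(1024, requires_grad=True)
y = torch.rand(1024, requires_grad=True)
nu = torch.nn.Parameter(torch.tensor(0.05))     # learnable viscosity
loss = consistency_loss(t, x, y, dt=0.02, nu=nu, sigma_u=1.0, sigma_v=1.0)
loss.backward()                                  # gradients for net and nu
```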
## 3 Results
To test our method, we consider two and three-dimensional numerical datasets (section 3.1) and an experimental dataset (section 3.2). We generated the two-dimensional dataset using the open source CFD toolbox OpenFOAM [26] for the flow past a cylinder. A snapshot of the reference velocity and pressure fields of this dataset is shown in fig. 3. For the three-dimensional case, we look at the flow inside an aneurysm [27]. The three-dimensional dataset was generated using the spectral element method, and
Figure 3: A snapshot of the reference (a) x-velocity, (b) y-velocity and (c) pressure fields of the two-dimensional dataset.
the dataset is available at [https://github.com/maziarraissi/HFM](https://github.com/maziarraissi/HFM). For the experimental dataset, we calculate the velocity field for water in a channel flow using PIVLab [28] by tracking particles. A PINN solver was then used to generate the pressure field using the known viscosity of water. We then used the velocity field from PIVlab and the pressure field from the PINN solver to test the method discussed in this paper.
### Numerical datasets
For all the two-dimensional datasets in this section, we represent the scalar field \(\psi\) and the pressure using an eight-layer deep, fully connected neural network with 128 neurons per hidden layer. For the three-dimensional case, we represent \(\psi^{1},\psi^{2},\psi^{3}\) and the pressure using a ten-layer deep neural network with 200 neurons per hidden layer. We use the swish activation function. The use of other architectures might yield better results. A cosine learning rate schedule [29] was used in all the runs reported in this work. We used a value of 2.5e-03 for \(\eta_{max}\) and 2.5e-06 for \(\eta_{min}\) to get the learning rate \(\eta\) as defined in the following equation
\[\eta=\eta_{min}+0.5(\eta_{max}-\eta_{min})\left(1+\cos\left(\frac{T_{cur}}{T_ {max}}\pi\right)\right), \tag{22}\]
where \(T_{cur}\) is the current time step and \(T_{max}\) is the total number of time steps. For the two-dimensional case, we choose a mini-batch size of 1024 for the spatio-temporal point cloud inside the domain. The Adam optimizer [30] was used to optimize the parameters of the neural network. We ran 100,000 iterations of the Adam optimizer for each two-dimensional case, and every ten iterations of the Adam optimizer took about 0.15 seconds. We used the same learning rate schedule and mini-batch size for the three-dimensional runs. For the three-dimensional cases, we optimized the parameters using 360,000 iterations of the Adam optimizer, where ten iterations took about 0.54 seconds.
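For reference, equation (22) with the reported values of \(\eta_{max}\) and \(\eta_{min}\) can be implemented directly:

```python
import math

# Cosine learning-rate schedule of equation (22).
def cosine_lr(t_cur, t_max, eta_min=2.5e-06, eta_max=2.5e-03):
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_max))

print(cosine_lr(0, 100_000))        # = eta_max at the start of training
print(cosine_lr(100_000, 100_000))  # = eta_min at the end of training
```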
We first tested the sensitivity of our method to the timestep size and the number of time steps. Here, the number of timesteps refers to the number of discrete time slices we randomly sample from. We do this to test the sensitivity of our method
Figure 4: Relative error for viscosity for different combinations of the number of timesteps and timestep size for the (a) two-dimensional and (b) three-dimensional numerical dataset.
to temporal resolution and the amount of data. We report the kinematic viscosity obtained for the two-dimensional dataset in table 1, where the reference value for the dimensionless kinematic viscosity was \(0.01\). For the three-dimensional dataset, the reference dimensionless kinematic viscosity was \(0.01018\), and we report the results in table 2. We show the plot for the relative errors for different combinations of timestep size and timesteps for the two-dimensional and three-dimensional cases in fig. 4. While the trend is not strictly monotonic, increasing the spatial resolution by decreasing the timestep size and increasing the amount of data improves the results. Our framework reports a low relative error for a wide range of combinations.
To test the sensitivity of our method to noise, we added Gaussian noise to the two-dimensional dataset. We report the values for viscosity with \(16\) timesteps with time step sizes \(0.01s,0.02s,0.05s,0.10s\), and \(0.20s\) at different noise levels in table 3. We plot the relative errors for different combinations of timestep sizes and Gaussian noise in fig. 5. We observed that the amount of Gaussian noise did not significantly affect the error, and our method worked well even when \(10\%\) Gaussian noise was added to the dataset. This result was in agreement with what was observed for PINNs earlier [14].
The low sensitivity to Gaussian noise might result from many spatial points in the dataset. We trained our model on fewer spatial points to test this. We randomly sampled \(60495\), \(16384\), \(4096\), \(1024\), and \(256\) spatial points at \(16\) time steps and added \(5\%\) Gaussian noise. We report the predicted viscosities for each case in table 4 and show the relative error in fig. 5. We obtained good results by randomly sampling \(4096\) points, or roughly \(1\) in \(16\) spatial points. Our method worked well for smaller time step sizes and eventually broke down when we randomly sampled only \(256\) spatial points or around \(1\) in \(256\) spatial points. Since our setup worked for sparse and noisy data, we next considered a real-world dataset obtained through experiments.
Figure 5: Relative error for viscosity for (a) different combinations of the number of timesteps and noise and (b) different combinations of the number of timesteps and number of spatial points for the two-dimensional numerical dataset. We noticed that the addition of Gaussian noise does not have a significant effect on results. Our framework eventually breaks down when only \(256\) points are randomly sampled.
### Experimental Dataset
Water seeded with 1 \(\mu\)m fluorescent polystyrene beads (Bangs Laboratories Inc., IN, USA) at 2% (w/w) concentration is used for the experimental validation. As shown in Fig. 6, a syringe pump drives the fluid flow within an oblique channel of 1 mm width and 0.4 mm height. We applied a water flow rate of 40 \(\mu\)l/min. A 520 nm laser was used with an inverted microscope coupled to a confocal system (Nikon, NY, USA) for imaging, together with an oil-immersion 60x (0.1083 \(\mu\)m/px) lens. We collected 3000 images at 5 ms intervals (200 fps).
| Timesteps | \(\Delta T=0.01s\) | \(\Delta T=0.02s\) | \(\Delta T=0.05s\) | \(\Delta T=0.10s\) | \(\Delta T=0.20s\) |
|---|---|---|---|---|---|
| 2 | 0.01043 | 0.01335 | 0.07658 | 0.1328 | 0.1795 |
| 4 | 0.01001 | 0.01002 | 0.01027 | 0.01147 | 0.0139 |
| 8 | 0.01 | 0.01 | 0.01011 | 0.01102 | 0.01452 |
| 16 | 0.01 | 0.01 | 0.01018 | 0.01076 | 0.01345 |
| 32 | 0.01022 | 0.01 | 0.01017 | 0.01069 | 0.01358 |
| 64 | 0.0104 | 0.01041 | 0.01029 | 0.01093 | 0.01479 |
| 128 | 0.0104 | 0.0106 | 0.01113 | 0.01222 | 0.01494 |

Table 1: Viscosity for the 2D case.
| Timesteps | \(\Delta T=0.01s\) | \(\Delta T=0.02s\) | \(\Delta T=0.05s\) | \(\Delta T=0.10s\) | \(\Delta T=0.20s\) |
|---|---|---|---|---|---|
| 32 | 0.0083 | 0.0092 | 0.01005 | 0.01177 | 0.01222 |
| 64 | 0.008 | 0.0092 | 0.01187 | 0.01176 | 0.01176 |
| 128 | 0.0097 | 0.0094 | 0.01088 | 0.01175 | 0.01204 |
| 192 | 0.0094 | 0.009746 | 0.01158 | 0.01388 | 0.01456 |

Table 2: Viscosity for the 3D case.
| # points | \(\Delta T=0.01s\) | \(\Delta T=0.02s\) | \(\Delta T=0.05s\) | \(\Delta T=0.10s\) | \(\Delta T=0.20s\) |
|---|---|---|---|---|---|
| 60945 | 0.01 | 0.01 | 0.01018 | 0.01076 | 0.01357 |
| 16384 | 0.01 | 0.01 | 0.01023 | 0.01066 | 0.01156 |
| 4096 | 0.0099 | 0.0097 | 0.01025 | 0.0101 | 0.01072 |
| 1024 | 0.011 | 0.0157 | 0.0132 | 0.0117 | 0.01107 |
| 256 | 0.05447 | 0.05492 | 0.03893 | 0.03758 | 0.04164 |

Table 4: Viscosity for the 2D case as a function of the number of spatial points, for 16 timesteps with 5% noise.
Figure 6: The experimental setup with the \(\upmu\)-Slide III 3in1 (ibidi Inc., WI, USA) is used for the PIV measurements. The flow inlet and outlet are labeled as A and B, respectively. The interrogation area (not to scale) is also represented using the orange square. During the experiment, the other two inlets were closed using ibidi luer locks.
| Noise | \(\Delta T=0.01s\) | \(\Delta T=0.02s\) | \(\Delta T=0.05s\) | \(\Delta T=0.10s\) | \(\Delta T=0.20s\) |
|---|---|---|---|---|---|
| 0% noise | 0.01 | 0.01 | 0.01018 | 0.01076 | 0.01173 |
| 1% noise | 0.01 | 0.01 | 0.01023 | 0.01064 | 0.01152 |
| 2% noise | 0.01 | 0.01 | 0.01023 | 0.01072 | 0.0116 |
| 5% noise | 0.01 | 0.01 | 0.01023 | 0.01066 | 0.01156 |
| 10% noise | 0.01 | 0.01 | 0.01017 | 0.01053 | 0.01151 |

Table 3: Viscosity for the 2D case as a function of noise level, for 16 timesteps.
For post-processing, the PIVlab MATLAB GUI is used [28]. We imported the images in the pairwise sequencing scheme. Image pre-processing using the PIVlab interface is also applied to remove the background light intensity. The 2-D velocity field is extracted in the \(x\)-\(y\) plane using the Fast Fourier Transform (FFT) window deformation algorithm with three passes, i.e., 128, 64, and 32 pixel interrogation areas. Finally, the mean \(x\)- and \(y\)-velocity components are calculated and exported separately.
We use the velocity field obtained from PIVLab to generate a reference pressure field. Our framework then uses the velocity and pressure fields to predict water viscosity at room temperature. We used an eight-layer deep, fully connected neural network with 128 neurons per hidden layer. We used the learning schedule described in section 3.1, and the parameters of the network were optimized using 800,000 iterations of the Adam optimizer. The reference value for the water viscosity at room temperature is 0.01 poise [31], and the value we get from our model is 0.00977 poise.
## 4 Conclusions and Future Work
It is generally challenging to assign relative weights to the loss terms while training physics-informed neural networks, especially with multiscale data. We propose a novel solution for this challenge. By using backward Euler discretization for temporal derivatives instead of automatic differentiation, we can use the data's statistical properties to get the loss terms' relative weights. In this work, we consider the two and three-dimensional Navier-Stokes equations and determine the kinematic viscosity using spatio-temporal data on the velocity and pressure fields.
We look at the flow past a cylinder for the two-dimensional case and the flow in an aneurysm for the three-dimensional case. We test the sensitivity and robustness of our method against the timestep size, the number of timesteps, noise in the data, and the spatial data resolution. Since our method worked well for a wide range of numerical data, we tested it using experimental data. We used the velocity field from experimental PIV measurements of a channel flow to generate a reference pressure field. We then tested our framework using this velocity field and the reference pressure field to obtain the viscosity of water at room temperature, and demonstrated that our framework works well with an experimental dataset. This work uses spatio-temporal data on the pressure and velocity fields as input. For future work, using just the velocity field as an input and solving for the pressure field can be explored. The velocity field from PIV measurements could then be used directly to learn the viscosity and the pressure field for both two and three-dimensional flows.
## 5 Acknowledgements
A.M.A. acknowledges financial support from the National Science Foundation (NSF) through Grant No. CBET-2141404.
|
2305.06398 | Towards Scalable Adaptive Learning with Graph Neural Networks and
Reinforcement Learning | Adaptive learning is an area of educational technology that consists in
delivering personalized learning experiences to address the unique needs of
each learner. An important subfield of adaptive learning is learning path
personalization: it aims at designing systems that recommend sequences of
educational activities to maximize students' learning outcomes. Many machine
learning approaches have already demonstrated significant results in a variety
of contexts related to learning path personalization. However, most of them
were designed for very specific settings and are not very reusable. This is
accentuated by the fact that they often rely on non-scalable models, which are
unable to integrate new elements after being trained on a specific set of
educational resources. In this paper, we introduce a flexible and scalable
approach towards the problem of learning path personalization, which we
formalize as a reinforcement learning problem. Our model is a sequential
recommender system based on a graph neural network, which we evaluate on a
population of simulated learners. Our results demonstrate that it can learn to
make good recommendations in the small-data regime. | Jean Vassoyan, Jill-Jênn Vie, Pirmin Lemberger | 2023-05-10T18:16:04Z | http://arxiv.org/abs/2305.06398v1 | # Towards Scalable Adaptive Learning with Graph Neural Networks and Reinforcement Learning
###### Abstract
Adaptive learning is an area of educational technology that consists in delivering personalized learning experiences to address the unique needs of each learner. An important sub-field of adaptive learning is learning path personalization: it aims at designing systems that recommend sequences of educational activities to maximize students' learning outcomes. Many machine learning approaches have already demonstrated significant results in a variety of contexts related to learning path personalization. However, most of them were designed for very specific settings and are not very reusable. This is accentuated by the fact that they often rely on non-scalable models, which are unable to integrate new elements after being trained on a specific set of educational resources. In this paper, we introduce a flexible and scalable approach towards the problem of learning path personalization, which we formalize as a reinforcement learning problem. Our model is a sequential recommender system based on a graph neural network, which we evaluate on a population of simulated learners. Our results demonstrate that it can learn to make good recommendations in the small-data regime.
adaptive learning, learning path personalization, graph neural networks, reinforcement learning, recommender system
## 1 Introduction
Adaptive learning is an area of educational technology that focuses on addressing the unique needs, abilities, and interests of each individual student. This field emerged in the 1980s with the introduction of the first _Intelligent Tutoring Systems_ (ITS) and experienced major expansion in the 1990s. As described by T. Murray in [20], an ITS usually consists of four components: a _domain model_, a _student model_, an _instructional model_ and a _user interface model_. As we address the problem from an algorithmic point of view, we only focus on the first three models. The domain model is a representation of the knowledge to be taught; it often serves as a basis for the student model. The student model provides a characterization of each learner that allows to assess their knowledge and skills and anticipate their behavior. The instructional model takes the domain and student models as input to select strategies that will help each user achieve their learning objectives. This general structure allows ITSs to achieve many purposes (recommending exercises, providing feedback, facilitating memorization, etc.) while optimizing a variety of metrics (learning gains, engagement, speed of learning, etc.).
In this paper, we address the problem of learning path personalization with optimization of learning gains. This means that we look for a sequential recommender system that can provide each student with the right content at the right time (according to their past activity), in order to maximize their overall learning gains.
Towards this goal, "standard" approaches often require significant structuring of the domain model. This step is usually assisted by experts: they may be mobilized to tag educational resources, set up prerequisite relationships, draw up skill tables, etc. One example of such structuring is the Q-matrix [2] which maps knowledge components (KC) to exercises. These expert-based approaches present some serious practical limitations. First, they make it quite cumbersome to create resource sets, since each resource has to be properly tagged (sometimes with an extensive set of metadata). They also lead to poorly reusable recommender systems, since prerequisite relationships and skills maps are usually tailored to specific resource sets. This low reusability problem is often exacerbated by the modeling of resources/skills/KC as one-hot encodings [3, 21] which tie the model to a maximum number of resources/skills/KC it can handle. As a result, these approaches produce models that are not suitable for transfer learning. Our approach, on the other hand, is based on a graph neural network, whose structure makes it possible to process data in a much more flexible way.
Our contributions in this paper are threefold. First, we introduce a new setting for learning path personalization and formalize it as a model-free reinforcement learning (RL) problem. Second, we present a novel RL policy that can leverage educational resource content and users' feedback to make recommendations that improve learning gains. The
proposed model has the advantage of being inherently scalable, reusable, and independent of any expert tagging. Third we evaluate our model on 6 semi-synthetic environments composed of real-world educational resources and simulated learners. The results demonstrate that it can learn to make good recommendations from few interactions with learners, thereby significantly outperforming the uniform random policy.
The rest of the paper is organized as follows. In Section 2, we relate our paper to prior research. In Section 3, we describe our setting, the assumptions we make and the problem we attempt to solve, which we formalize as a reinforcement learning problem. In Section 4, we present our novel RL policy. In Section 5, we describe our experimental setting and discuss our results. In Section 6 we address some limitations of our model and propose a few directions for future work. We finally conclude in Section 7.
## 2 Related Work
In recent years, several works have used reinforcement learning to address the problem of learning path personalization. Most of these RL approaches are model-based, as they rely on a predefined student model to simulate student trajectories. However, no student model is completely accurate, and the learned instructional policies may overfit to the student model. Doroudi et al. [7] have attempted to learn policies that provide a better reward no matter the student model chosen (i.e. robust policies). Azhar et al. [1] proposed a method to gradually refine the student model by adding features that maximize the reward.
Reward functions usually involve learning gains. Subramanian and Mostow [29] defined learning gains as the average difference between posterior and prior latent knowledge. Lan and Baraniuk [15] proposed to learn a policy for selecting learning actions so that the grade on the next exam is maximized. Clement et al. [5] attempted to optimize an increase in success rate over recent time steps, using an \(\varepsilon\)-greedy approach. Doroudi et al. [8] conducted a thorough review of the different reward functions used in instructional policies.
The closest to our setting is probably the approach proposed by Bassen et al. [3] which, like ours, does not rely on expert pre-labeling of educational resources. However, in the absence of compensation for this lack of information, their reinforcement learning algorithm requires a substantial number of learners to converge to an effective policy: about 1000 learners for a corpus of 12 educational resources. Moreover, in their framework, educational activities were represented as one-hot encodings and passed to the policy via a fixed-size vector. Therefore, this approach does not make it possible to work with an evolving corpus of educational resources (which is the case for most _e-learning_ platforms) nor to reuse the model on another set, unless it is completely re-trained.
In contrast, our approach leverages information from resource keywords, which makes it possible to achieve convergence in a relatively small number of episodes while maintaining a high level of flexibility. This keyword-based approach was inspired by the work of Gasparetti et al. [12, 11]. Although the authors did not directly address the problem of learning path personalization, they outlined a method of feature extraction from textual resources that proved to be very successful in predicting prerequisite relationships.
## 3 Problem Formulation
### Description of the setting
Consider an _e-learning_ platform with a collection of educational resources which have been designed to cover a specific topic, for example "an introduction to machine learning". Consider a population \(\mathscr{P}\) of learners to be trained on this topic. The goal of learning path personalization is to be able to recommend a sequence of educational resources to each learner so as to maximize their overall _learning gains_. Therefore the resulting machine learning problem can be expressed in the following terms: given a large enough sample \(U\) of users from \(\mathscr{P}\), how can we train a machine learning model to make recommendations to users from \(U\) so as to generalize to the whole population?
In this paper, we work at the scale of short learning paths (\(\sim 1\) hour), which means that each learning session only consists of a few interactions between the learner and the ITS. One advantage of this setting is that it reduces the effects of memory loss: we assume that when a learner visits a new resource, what they learned from the previous ones is still in their working memory.
We first make a few assumptions about the learning sessions:
1. Each learner follows one learning path of equal length (i.e. same number of resources). The purpose of this assumption is primarily to simplify the notations as it can be easily relaxed without making major modifications to the model.
2. There is no interaction between the learner and the external world (no communication, no access to external resources). This makes it possible to work in the closed system {learner + ITS}. While incorrect in most cases, this assumption may be more reasonable in our setting than in a multi-day learning context.
3. We assume the existence of a feedback signal that provides information about user understanding of each resource. This signal can take three values:
   * (\(f_{<}\)): the user did not understand the resource;
   * (\(f_{>}\)): the user understood, but found it too easy;
   * (\(f_{\circ}\)): the resource was at the right level.
   In practice, such feedback can be obtained from self-assessment or a more sophisticated test, and should be associated with an error margin to account for its imprecision. Nevertheless, in this study, we assume that each feedback is perfectly accurate.
A view of such a learning session is provided in Figure 1.
To further simplify the problem, we also adopt a few simplifying assumptions about the educational resources:
1. They are purely textual resources, written in natural language. We indeed consider that most educational formats can be easily transcribed into text (transcript of a video, legend of a diagram, caption of an image, etc.).
2. They are _self-contained_, which means that they can be considered independently. This implies for example that they do not explicitly refer to each other. Although quite strong, this assumption is essential to prevent mandatory dependencies and foster diversity of learning paths.
3. Each resource explains one or few concepts and has equivalent "educational value". This implies that each resource carries the same "amount" of knowledge.
Some examples of educational resources that satisfy these requirements are provided in Figure 4 of the Appendix.
Our goal with this work is to design a machine learning algorithm that can leverage learners' feedback on text-based educational resources to model their understanding of each concept, anticipate their reactions, and recommend resources that maximize their overall learning gains.
Since most _e-learning_ platforms are in constant evolution, our goal is not only to solve this problem but to do it in a flexible and scalable way. This means that the model should not require full retraining when new resources are added to (or removed from) the platform. Actually, it should be able to extrapolate to new resources what it learned from previous interactions. This suggests that the number of parameters of our model should not depend on the size of the corpus.
### Formalization
In this section, we formalize the problem described above as a reinforcement learning problem. We use the terms "user" and "learner" interchangeably to refer to any individual from the sample \(U\). Similarly, we refer to an educational resource with the terms "document" or "resource".
In the following, we denote: \(T\) the length of each learning session (identical for each user), \(\mathcal{D}\) the corpus of documents, \(d\) a document from \(\mathcal{D}\), \(u\) a user (or learner) from the sample \(U\), \(\mathbf{f}_{d}\) the feedback given by a learner on document \(d\).
The sequential recommendation problem defined above can be easily expressed as a reinforcement learning problem where: the agent is the recommender system, the environment is the population \(\mathscr{P}\) of students and each episode is a learning path. This problem can be formulated as a partially observable Markov decision process (\(\mathcal{S}\), \(\mathcal{A}\), \(\mathcal{O}\), \(\mathcal{T}\), \(\mathcal{R}\), \(\mathcal{Z}\)) where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(\mathcal{O}\) is the observation space, \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) defines the conditional transition probabilities, \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function and \(\mathcal{Z}:\mathcal{S}\times\mathcal{A}\times\mathcal{O}\rightarrow[0,1]\) is the observation function. More precisely, in our setting:
* \(\mathbf{s}_{t}\in\mathcal{S}\) is the (unknown) knowledge state of the learner at step \(t\);
* \(\mathbf{a}_{t}\in\mathcal{A}\) is the document selected by the recommender system at step \(t\); we can write \(\mathbf{a}_{t}=\mathbf{d}_{t}\);
* \(\mathbf{o}_{t}\in\mathcal{O}\) is the observation made at step \(t\), which is a tuple of the selected document and the returned feedback: \(\mathbf{o}_{t}=(\mathbf{d}_{t},\mathbf{f}_{t})\);
* \(\mathcal{T}(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})=\mathbb{P}(\mathbf{s}_{ t+1}=\mathbf{s}^{\prime}\mid\mathbf{s}_{t}=\mathbf{s},\mathbf{a}_{t}= \mathbf{a})\) is unknown, as it represents the impact of selecting document \(\mathbf{a}\) on learner's state \(\mathbf{s}_{t}\);
* \(\mathcal{Z}(\mathbf{s},\mathbf{a},\mathbf{o})=\mathbb{P}(\mathbf{o}_{t+1}= \mathbf{o}\mid\mathbf{s}_{t+1}=\mathbf{s},\mathbf{a}_{t}=\mathbf{a})\) is also unknown and represents the probability of observing \(\mathbf{o}\) in state \(\mathbf{s}\) after choosing document \(\mathbf{a}\);
* \(\mathcal{R}(\mathbf{s}_{t},\mathbf{a}_{t})\) is the learning gain of the user at step \(t\), which we define as follows: \[\mathcal{R}(\mathbf{s}_{t},\mathbf{a}_{t})=\mathbb{1}_{\{\mathbf{f}_{t}=f_{ \circ}\}}.\] (1) We indeed consider that only feedback \(f_{\circ}\) corresponds to an effective learning gain. We denote \(\mathcal{R}(\mathbf{s}_{t},\mathbf{a}_{t})=\mathbf{r}_{t}\) in the following.
To solve this problem, we need to find a policy \(\pi:\mathcal{O}\rightarrow\mathcal{A}\) that maximizes the expected return over each episode \(\eta\):
\[\pi^{*}=\operatorname*{arg\,max}_{\pi}\,\mathbb{E}_{\eta\sim\pi}\left[\,\sum _{t=1}^{T}\mathbf{r}_{t}\,\right]. \tag{2}\]
## 4 Our RL Model
A common approach to solve partially observable Markov decision processes (POMDP) is to leverage information from past observations \(\mathbf{o}_{1},\ldots,\mathbf{o}_{t}\) to build an estimation of \(\mathbf{s}_{t}\) which is then used to select the next action (illustrated in Figure 2). This boils down to encoding these observations into a latent space \(\mathcal{S}\). In our setting, this latent space contains all possible knowledge states for the learner, which is why we call it _knowledge space_ in the following.
### Knowledge space
While more compact than the observation space, the knowledge space should be informative enough to convey a relevant approximation of learner's knowledge.
We decided to structure this representation with the keywords of the corpus, denoted \((w_{1},\ldots,w_{M})\). We define a keyword as a word or group of words that refers to a technical concept closely related to the subject of the corpus. Some examples of keywords extracted from educational resources are provided in the Appendix. The keywords carry information about the concepts addressed by the documents and are therefore a good approximation of their pedagogical content.
Figure 1: A view of a learning session. In this example, the session length is \(T=4\). Actions \(\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}\) are the recommendations of the ITS. \(\mathbf{f}_{1},\mathbf{f}_{2},\mathbf{f}_{3},\mathbf{f}_{4}\) are the feedback signals returned by the user.
That is why we modeled the knowledge state of each learner as a collection of vectors \((\mathbf{w}_{1},\ldots,\mathbf{w}_{M})\) which represents their "understanding" of each keyword. We indeed consider that a keyword can be understood in multiple ways depending on the context in which it occurs, and a multidimensional vector can be a convenient way to capture this plurality. This is illustrated in Figure 2. From this perspective, the knowledge space \(\mathcal{S}\) can be defined as \(\mathcal{S}:=\underbrace{\mathbb{R}^{K}\times\cdots\times\mathbb{R}^{K}}_{M}\).
Note that we do not consider keyword extraction as a task requiring expert knowledge since it can be done by any creator of educational content and involves fewer skills than defining the knowledge components of a course. Moreover, it is mainly a pattern-matching task that can be automated through a keyword extraction algorithm [9, 22, 4].
### Policy
Following the previous considerations, the policy \(\pi_{\theta}\) should take a collection of observations \(\mathbf{o}_{1},\ldots,\mathbf{o}_{t}\) as input, encode it into the latent space \(\mathcal{S}\) and return a recommendation for the next document \(\mathbf{d}_{t}\). We emphasize that this function should also meet the aforementioned flexibility and scalability requirements.
A natural way to model the relationship between documents and keywords is to build a bipartite graph \(\mathcal{G}=(\mathcal{V}_{\mathcal{D}},\mathcal{V}_{\mathcal{W}},\mathcal{E})\), where \(\mathcal{V}_{\mathcal{D}}\) is the set of _document_ nodes, \(\mathcal{V}_{\mathcal{W}}\) is the set of _keyword_ nodes and \(\mathcal{E}\) is the set of edges, with \((v_{d},v_{w})\in\mathcal{E}\) if the document \(d\) contains the word \(w\).
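For illustration, the following sketch (our own, not code from the paper; the input format and names are assumptions) builds such a bipartite graph from a toy corpus:

```python
import networkx as nx

def build_corpus_graph(doc_keywords):
    """Build the bipartite graph G = (V_D, V_W, E), with an edge
    (v_d, v_w) whenever document d contains keyword w."""
    G = nx.Graph()
    for d, keywords in doc_keywords.items():
        G.add_node(("doc", d), bipartite=0)        # document node
        for w in keywords:
            G.add_node(("kw", w), bipartite=1)     # keyword node
            G.add_edge(("doc", d), ("kw", w))
    return G

# Toy corpus: document ids mapped to their extracted keywords
corpus = {
    "doc1": {"gradient descent", "loss function"},
    "doc2": {"loss function", "overfitting"},
    "doc3": {"overfitting", "regularization"},
}
G = build_corpus_graph(corpus)
print(G.number_of_nodes(), G.number_of_edges())    # 7 6
```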
We chose to use a graph neural network (GNN) as a policy. GNNs are quite convenient for this task as they make it possible to enrich node features with information about their extensive neighborhood, through message-passing. Therefore, documents (respectively keywords) that share a large number of keywords (respectively documents) will also have similar embeddings. This makes it possible to build keyword embeddings that contain information about feedback from neighboring documents \((\mathbf{o}_{1},\ldots,\mathbf{o}_{t}\rightarrow\hat{\mathbf{s}}_{t})\). Message-passing can also be used the other way around, from keywords to documents, to build embeddings that inform about the relevance of each document according to the estimated knowledge state \((\hat{\mathbf{s}}_{t}\rightarrow\mathbf{a}_{t})\). Another significant advantage of GNNs is that their number of parameters does not depend on the size and structure of the graph, which makes them highly flexible and scalable.
Multiple options are possible for the initial node features. For keyword nodes, pre-trained word embeddings are a natural choice. As for the document nodes, a simple null vector is sufficient. However one may choose to include extra information about the documents if it is available (type of document, format, length etc.). We denote as \((\mathbf{x}_{w})_{w\in\mathcal{V}_{\mathcal{W}}}\) and \((\mathbf{x}_{d})_{d\in\mathcal{V}_{\mathcal{D}}}\) the initial feature vectors of keyword and document nodes.
In our model, we adapted a version of GAT (graph attention networks) [32] to the heterogeneity of our bipartite graph:
\[\forall d\in\mathcal{V}_{\mathcal{D}},\;\mathbf{h}_{d}^{(\ell+1)} =\sigma\left(\sum_{w\in\mathcal{N}(d)}\alpha_{dw}^{(\ell)}W_{D}^{ (\ell)}\mathbf{h}_{w}^{(\ell)}+B_{D}^{(\ell)}\right) \tag{3}\] \[\forall w\in\mathcal{V}_{\mathcal{W}},\mathbf{h}_{w}^{(\ell+1)} =\sigma\left(\sum_{d\in\mathcal{N}(w)}\alpha_{wd}^{(\ell)}W_{W}^{ (\ell)}\mathbf{h}_{d}^{(\ell+1)}+B_{W}^{(\ell)}\right) \tag{4}\]
\(\mathbf{h}_{d}^{(\ell)}\in\mathbb{R}^{K}\) is the embedding of node \(d\) at the \(\ell\)-th layer, with \(\mathbf{h}_{d}^{(0)}=\mathbf{x}_{d}\). \(\mathcal{N}(d)\) is the set of neighbors of node \(d\) in the graph. \(\alpha_{dw}^{(\ell)}\) is a _self-attention_ coefficient, detailed in the Appendix. \(\sigma(\cdot)\) is the ReLU activation function (rectified linear unit). \(W_{W}^{(\ell)}\), \(W_{D}^{(\ell)}\), \(B_{W}^{(\ell)}\) and \(B_{D}^{(\ell)}\) are trainable parameters. This back-and-forth mechanism between documents and keywords allows distinct filters to be learned for each node type (document or keyword), effectively addressing the graph's heterogeneity. In the following, we refer to equations (3) and (4) as _bipartite GAT layers_ and denote them (\(\text{KW}\xrightarrow{(3)}\text{DOC}\)) and (\(\text{DOC}\xrightarrow{(4)}\text{KW}\)). Note that they can be chained one after the other.
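As an illustration, here is a minimal PyTorch sketch of the (\(\text{KW}\xrightarrow{(3)}\text{DOC}\)) direction, using a dense adjacency matrix and a simplified attention score in place of the exact coefficients \(\alpha\) detailed in the Appendix; the (\(\text{DOC}\xrightarrow{(4)}\text{KW}\)) direction is symmetric:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BipartiteGATLayer(nn.Module):
    """KW -> DOC message passing in the spirit of equation (3).
    `adj` is a dense (n_doc, n_kw) 0/1 adjacency matrix."""

    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)    # plays the role of W_D
        self.bias = nn.Parameter(torch.zeros(dim))  # plays the role of B_D
        self.att = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h_doc, h_kw, adj):
        msg = self.W(h_kw)                                      # (n_kw, dim)
        n_doc, n_kw = adj.shape
        pairs = torch.cat([
            h_doc.unsqueeze(1).expand(n_doc, n_kw, -1),
            msg.unsqueeze(0).expand(n_doc, n_kw, -1),
        ], dim=-1)
        logits = self.att(pairs).squeeze(-1)                    # (n_doc, n_kw)
        logits = logits.masked_fill(adj == 0, float("-inf"))    # neighbors only
        alpha = torch.softmax(logits, dim=-1).nan_to_num()      # isolated nodes
        return F.relu(alpha @ msg + self.bias)                  # (n_doc, dim)
```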
We define our first block of bipartite GAT layers as follows:
\[\text{BLOCK1}\;=\;\text{KW}\xrightarrow{(3)}\text{DOC}\xrightarrow{(4)}\text {KW}\xrightarrow{(3)}\text{DOC}. \tag{5}\]
After this block, document embeddings \((\mathbf{h}_{d}^{(2)})_{d\in\mathcal{V}_{\mathcal{D}}}\) contain information about keywords from their extended neighborhood. Using a Hadamard product, we enrich these embeddings with user feedback:
\[\mathbf{h}_{d}^{(\varphi)}=\mathbf{h}_{d}^{(2)}\odot\mathsf{MLP}_{K_{d} \to K}(\mathbf{f}_{d}) \tag{6}\]
\(\mathbf{h}_{d}^{(2)}\) and \(\mathbf{h}_{d}^{(\varphi)}\) are the embeddings of document \(d\) before and after adding the feedback. \(\mathbf{f}_{d}\) is an encoding of user's feedback on document \(d\), which is passed through a multilayer perceptron (MLP). We use a "not visited" feedback for the documents that the learner has not yet visited.
After doing this operation on each document node, we apply another block of bipartite GAT layers:
\[\text{BLOCK2}\;=\;\text{DOC}\xrightarrow{(4)}\text{KW}\xrightarrow{(3)}\text {DOC}. \tag{7}\]
Operation (4) enriches keyword embeddings with feedback from neighboring documents, which carries information about the user's understanding. We consider these embeddings as a good approximation of the learner's knowledge state, which is why we define \(\hat{\mathbf{s}}_{t}:=(\mathbf{h}_{w}^{(2)})_{w\in\mathcal{V}_{\mathcal{W}}}\). The final GAT layer (3) maps \(\hat{\mathbf{s}}_{t}\) to documents for the next recommendation.
Figure 2: Up, a view of common policy architecture to solve POMDP. Down, this architecture applied to our setting.
Before assigning probabilities to each document in the final step, we enrich document embeddings by incorporating information about the remaining time in the session, which, as we observed, slightly improved the performance of the model:
\[\mathbf{h}_{d}^{(\tau)}=\mathbf{h}_{d}^{(3)}\odot\mathsf{MLP}_{K_{\tau}\to K}( \Delta_{t}) \tag{8}\]
\(\mathbf{h}_{d}^{(3)}\) and \(\mathbf{h}_{d}^{(\tau)}\) are the embeddings of document \(d\) before and after adding the remaining time. \(\Delta_{t}=T-t\) is an encoding of the remaining time (or remaining steps) at step \(t\).
Finally, the embeddings \(\mathbf{h}_{d}^{(\tau)}\) are passed through an MLP to assign a score to each document. These scores are converted into probabilities via a softmax over all document nodes (further details in the Appendix):
\[\pi_{\theta}\left(d\mid\mathbf{o}_{1},\ldots,\mathbf{o}_{t}\right)=\underset {\mathcal{V}_{D}}{\text{softmax}}\left(\mathsf{MLP}_{K\to 1}(\mathbf{h}_{d}^{( \tau)})\right). \tag{9}\]
The full architecture of the policy is illustrated in Figure 3.
### RL Algorithm
As our policy selects the next action directly from observations, it belongs to the _policy-based_ reinforcement learning paradigm, especially the _policy gradient_ methods. The latter make it possible to maximize the expected return by directly optimizing the parameters of \(\pi_{\theta}\) through gradient ascent. We chose the REINFORCE algorithm [31] for its simplicity. At the end of each episode, \(\pi_{\theta}\) is updated as follows:
\[\forall t\in[1,T],\quad\theta\leftarrow\theta+\lambda\nabla_{\theta}\log\pi_{ \theta}\left(s_{t},a_{t}\right)v_{t} \tag{10}\]
with \(\lambda\) the learning rate and \(v_{t}=\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}r_{t^{\prime}}\) the return of the episode from step \(t\).
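For concreteness, here is a minimal sketch of this update (our illustration; `policy`, `env` and the optimizer are placeholders for the components described above):

```python
import torch

def run_episode(policy, env, optimizer, T, gamma):
    """One REINFORCE update (equation 10): collect a learning path,
    then reweight each log-probability by the return from that step."""
    observations, log_probs, rewards = [], [], []
    env.reset()
    for t in range(T):
        probs = policy(observations)                # pi_theta(d | o_1..o_t)
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()                      # recommended document
        log_probs.append(dist.log_prob(action))
        feedback, reward = env.step(action.item())  # learner feedback f_d
        observations.append((action.item(), feedback))
        rewards.append(reward)

    # v_t = sum_{t'>=t} gamma^{t'-t} r_t', computed backwards
    returns, v = [], 0.0
    for r in reversed(rewards):
        v = r + gamma * v
        returns.insert(0, v)

    loss = -torch.stack([lp * vt for lp, vt in zip(log_probs, returns)]).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return sum(rewards)  # episodic return
```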
Note that we could learn our policy using more sophisticated RL algorithms like actor-critic, which usually has lower variance. However, it is likely that the current architecture would provide a poor state value function as it only operates at the scale of node neighborhoods and does not have a "global" view of the graph. Some changes in this architecture might nevertheless be done to process information at a larger scale, as discussed in Section 6.
## 5 Experiments
Given the complexity of conducting mass experiments on real learners, we chose to evaluate our model in an environment made up of semi-synthetic data. Our implementation is written in Python and is available on GitHub1. We also provided the hyperparameters of our model in Table 4 of the Appendix.
Footnote 1: [https://github.com/jvasso/graph-rl4adaptive-learning](https://github.com/jvasso/graph-rl4adaptive-learning)
### Experimental setting
#### Linear corpus
We introduce what we call a "linear" corpus. Starting from a regular course divided into sections and subsections, we treat each subsection as one document. The corpus resulting from this decomposition is "linear", in the sense that it was designed to be followed in a single, pre-defined order, which is identical for each learner. Therefore, it leaves practically no room for personalization. Six corpora were constructed this way: three about data science (1-3) and three about programming (4-6). They were all built from courses taken from a popular _e-learning_ platform.
For the purpose of our experiments, we have chosen to tag keywords "by hand" to avoid introducing any noise in the results. Our methodology was quite simple: for each document, we collected keywords referring to technical concepts related to the topic of the course. Table 1 presents some key statistics about each corpus and their associated bipartite graphs. Note that the graph of corpus 5 is disconnected: indeed, one of its documents only contains keywords that do not appear in any other document. Despite significantly complicating the task for a diffusion model like ours, we have chosen to keep this corpus for our experiments.
#### Simulated learners
Since each corpus has been designed to be explored in a single pre-defined order, we assume that the only way to
| corpus | # doc | # kw | # edges | diameter |
| --- | --- | --- | --- | --- |
| Corpus 1 | 33 | 68 | 154 | 10 |
| Corpus 2 | 11 | 31 | 62 | 6 |
| Corpus 3 | 19 | 39 | 83 | 8 |
| Corpus 4 | 28 | 55 | 113 | 8 |
| Corpus 5 | 18 | 41 | 66 | \(\infty\) |
| Corpus 6 | 20 | 45 | 143 | 6 |

Table 1: Key statistics of each corpus
Figure 3: The architecture of our policy network on a 3-document corpus
understand it is to follow this order scrupulously. Therefore we have decided to simulate the behavior of learners in this very simple way: as long as the policy recommends documents in the right order, the learner returns the feedback (\(f_{\circ}\)). Conversely, each time the algorithm recommends a document too early or too late, the learner returns the feedback (\(f_{<}\)) or (\(f_{>}\)). A detailed example is given in the Appendix.
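A minimal sketch of such a simulated learner (our illustration; the class and method names are ours), compatible with the training loop sketched in Section 4:

```python
class LinearCorpusLearner:
    """Simulated learner for a 'linear' corpus: only the next document
    in the predefined order yields feedback f_o (and reward 1)."""
    F_TOO_HARD, F_RIGHT, F_TOO_EASY = "f<", "fo", "f>"

    def __init__(self, n_docs):
        self.n_docs = n_docs
        self.next_doc = 0   # index of the next document to be understood

    def reset(self):
        self.next_doc = 0

    def step(self, doc):
        if doc == self.next_doc:     # right document at the right time
            self.next_doc += 1
            return self.F_RIGHT, 1.0
        elif doc > self.next_doc:    # recommended too early
            return self.F_TOO_HARD, 0.0
        else:                        # recommended too late
            return self.F_TOO_EASY, 0.0
```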
Since our simulated learners have a straightforward behavior, the purpose of this experiment is not to evaluate the personalization or generalization capabilities of our model, but to assess its ability to grasp the structure of a corpus by finding its original order in a reasonable number of episodes (i.e. a few learners). While trivial at first glance, this task can be quite difficult for an RL agent in the small-data regime. Besides, each corpus contains some parts that are independent of each other, which suggests that, in practice, multiple learning trajectories might make sense to real learners. From this perspective, the "strict" feedback of our simulated learners can distort the real nature of the relationships between resources and make the task more difficult for our recommender system.
_Policy_
In our experiments, we compared 3 different policies. The first one is the uniform random policy. The second one is our policy with one-hot-encodings as keyword features. The third one is our policy with Wikipedia2Vec embeddings [34] as keyword features. Wikipedia2Vec embeddings are quite suitable for our task as they contain encyclopedic information about the relationship between words and entities. They were derived from a skip-gram model trained on a triple objective, which is detailed in the Appendix. We used null vectors as document features for each policy.
_Training_
In each experiment, the maximum achievable return is equal to the size of the corpus. We set the horizon \(T\) to the size of the corpus to make sure that only an optimal policy (i.e. one that makes no "mistake") can reach this return. In this setting, the return of the random policy follows a binomial distribution with parameters (\(T\), \(\frac{1}{T}\)). Therefore its expected return is 1 for each episode. We also set the discount factor \(\gamma=0\) during training because in this very specific setting, the best action at each step \(t\) can be learned from immediate reward. We trained our model from scratch over 50 episodes (\(\sim\) 50 students) for each corpus, with a constant learning rate.
### Results
Since the REINFORCE algorithm has quite a high variance, we averaged each episodic return over 25 random seeds. The resulting learning curves are shown in Figure 6 of the Appendix and the last episodic returns (measured at 50\({}^{\text{th}}\) episode) are reported in Table 2.
From these curves, one can notice that despite the small-data regime and the choice of a sub-optimal RL algorithm (the REINFORCE algorithm is known to be quite unstable and sample-inefficient), our agent succeeded in recovering a significant part of the original order of each corpus. Most of the time, it achieved an average return above 10, whereas the random policy was stuck at an expected return of 1.
Best performance was achieved on Corpus 2. Indeed, it is the only one for which our model managed to reach the maximum achievable return most of the time. This may be partly due to the small number of documents in this corpus. However, we stress that the number of documents alone is not a sufficient feature to account for the variability of the results. For instance, corpora 3 and 6 have a nearly similar number of documents, but our model performed very differently on these two corpora. Moreover, in the case of the Wikipedia2Vec approach, it is not guaranteed that a large corpus should be more difficult than a small one, since the episodes are shorter for small corpora and therefore the algorithm has fewer steps to grasp the geometrical structures in the distribution of Wikipedia2Vec embeddings.
The diameter of the graph may also impact the performance of the model. Indeed, Corpus 2 is again the one with the smallest diameter, which may have helped the model to determine the relationships between documents and keywords more quickly. However, this must be balanced with the results on Corpus 6, on which our model performed far worse (in terms of normalized return) despite equal diameter.
Another noticeable result is that of Corpus 5. Recall that this corpus was the only disconnected one. Actually, it was disconnected at the 11\({}^{\text{th}}\) document, which is consistent with the performance of the model: indeed, an episodic return lower than 10 indicates that it failed to make recommendations beyond the 10\({}^{\text{th}}\) document. This can be explained quite simply: since this document is disconnected from the rest of the graph, it does not benefit from message-passing and therefore receives no information about the feedback on other documents.
Finally, one cannot ignore the extremely high variance of the episodic return for almost all corpora (except for Corpus 2). This is partly due to the choice of the REINFORCE algorithm, which is known for its high instability.
_Ablation study_
We conducted an ablation study to analyse the contribution of Wikipedia2Vec embeddings compared to simple one-hot encodings. Even though the approach with embeddings performed significantly better on each corpus, the high error margins and the similarity between trends suggest that our model was not truly able to leverage high level information
| Corpus | Wikipedia2Vec | One-hot encodings |
| --- | --- | --- |
| Corpus 1 | \(\mathbf{16.48\pm 2.66}\) | \(13.36\pm 1.74\) |
| Corpus 2 | \(\mathbf{10.84\pm 0.14}\) | \(10.28\pm 0.37\) |
| Corpus 3 | \(\mathbf{14.40\pm 1.31}\) | \(11.68\pm 1.13\) |
| Corpus 4 | \(\mathbf{15.16\pm 0.90}\) | \(12.52\pm 0.98\) |
| Corpus 5 | \(\mathbf{9.80\pm 2.13}\) | \(7.56\pm 1.83\) |
| Corpus 6 | \(\mathbf{11.24\pm 1.52}\) | \(8.24\pm 0.84\) |

Table 2: Comparison between episodic returns when using Wikipedia2Vec and one-hot encodings as keyword features
about the relationships between Wikipedia entities. Instead, it is more likely that it simply "overfit" to each corpus. This lack of generalization is not a problem in the setting of our experiment but can be a serious issue in transfer learning scenarios and therefore needs to be addressed.
## 6 Limitations and Future Work
### Size and structure of the graph
All of our experiments have been conducted on small graphs (less than \(\sim\) 100 nodes). However, it is likely that our model would struggle a little more on larger graphs as the receptive field of each node accounts for a smaller fraction of the graph in such case. Besides, it is not possible to increase the depth of a GNN indefinitely because of the over-smoothing problem [14, 33, 16]. Therefore, it is likely that these embeddings alone would not be sufficiently informative to allow for long-term planning. This limitation can be addressed with down- and upsampling methods such as pooling and unpooling operations on graphs, which make it possible to process information at multiple scales [6, 28, 35]. It can also be addressed with planning techniques such as Monte Carlo Tree Search, which has demonstrated great performance in combination with deep RL techniques [23, 26, 27].
As we saw in subsection 5.2, there is also an issue with disconnected graphs since our model failed to make predictions beyond the disconnected document node in Corpus 5. One possible solution could be to slightly modify the structure of the graph, for example through link prediction based on keyword embeddings.
Finally, it is important to note that we tested our approach on corpora related to engineering topics -- machine learning and programming -- whose keyword distributions might be quite similar (cf. Figure 5 in the Appendix). Yet, corpora related to different topics may have completely different keyword distributions. Therefore, it would be worth comparing the performance of the model on a wider range of subjects in the future.
### Variance and sample efficiency
As stated in Section 5, our approach suffers from high variance, partly due to the choice of the REINFORCE algorithm. Some other on-policy methods have demonstrated great success in reducing variance [24, 25, 18]. Nevertheless, these approaches remain generally not very sample-efficient. To improve sample-efficiency, it is quite common to use off-policy algorithms as they allow to reuse past experience [19, 17, 13]. However, as stated in Section 4, the implementation of an approximate Q-value function with a GNN is not trivial as it requires to leverage information at the scale of the entire graph, which involves modifications in the model. Another alternative is to use a model-based reinforcement learning algorithm (MBRL) [30, 23]. As they allow to learn a model of the environment (i.e. a model that predicts the next observations and rewards), MBRL techniques enable to reuse past experience and learn from a richer signal than the reward signal alone. Therefore, they are usually much more sample-efficient than model-free RL techniques. These approaches might be more appropriate in our case, as a local model like a GNN may more easily predict immediate feedback than the (long-term) _value_ of a state-action pair.
### Interpretability
One of the main limitations of our approach is its lack of interpretability. Ideally, an ITS would not only provide a personalized learning experience but also inform the learner about their progress and level of understanding, in order to encourage self-awareness and self-regulation. This is usually done with an _open learner model_. However, like most deep learning approaches, our recommender system is a black-box model and does not allow for easy interpretation. Yet, we hypothesize that the estimated knowledge state \(\hat{\mathbf{s}}_{t}\) does not only contain semantic information about keywords but also about the way they were understood by the learner. Therefore, future work may consist in projecting these keyword embeddings into lower dimensional space to visualize their evolution throughout learning sessions.
### Reusability
We designed a model that is flexible enough to be _theoretically_ capable of transferring its knowledge from one corpus to another. However, this is only possible if the model has managed to capture high-level information that is common to all corpora. Unfortunately, our experiments do not make it possible to truly evaluate the transfer learning capabilities of our model. However, since it seems to overfit to the structure of each corpus, it might not have learned that much about the high-level relationships in the distribution of Wikipedia2Vec embeddings. Therefore, transfer learning might not be very effective in this case. Future directions to reduce overfitting may consist in applying regularization techniques to GNNs (such as node dropout), or using training techniques that push the model to learn higher-level knowledge, such as meta-learning for RL [10].
## 7 Conclusion
In this paper, we presented a new model for learning path personalization, designed to be reusable and independent of any expert labeling. We demonstrated its ability to learn to make recommendations in 6 semi-synthetic environments made up of real-world educational resources and simulated learners. Since this model is theoretically capable of transferring its knowledge from one corpus to another, it is a first step towards an approach that could considerably reduce the cold-start problem. Future work will investigate its performance in the context of transfer learning and with real students.
## 8 Acknowledgments
We would like to warmly thank Vincent Francois-Lavet from VU Amsterdam who gave us some great advice on the formalization of the reinforcement learning problem. We would also like to thank Nicolas Vayatis, Argyris Kalogeratos (Centre Borelli), Nathanael Beau and Antoine Sailenfest (one-point) for their insightful reviews of the paper.
|
2308.00926 | Detection and Segmentation of Cosmic Objects Based on Adaptive
Thresholding and Back Propagation Neural Network | Astronomical images provide information about the great variety of cosmic
objects in the Universe. Due to the large volumes of data, the presence of
innumerable bright point sources as well as noise within the frame and the
spatial gap between objects and satellite cameras, it is a challenging task to
classify and detect the celestial objects. We propose an Adaptive Thresholding
Method (ATM) based segmentation and Back Propagation Neural Network (BPNN)
based cosmic object detection including a well-structured series of
pre-processing steps designed to enhance segmentation and detection. | Samia Sultana, Shyla Afroge | 2023-08-02T04:02:46Z | http://arxiv.org/abs/2308.00926v1 | Detection and Segmentation of Cosmic Objects Based on Adaptive Thresholding and Back Propagation Neural Network
###### Abstract
Astronomical images provide information about the great variety of cosmic objects in the Universe. Due to the large volumes of data, the presence of innumerable bright point sources as well as noise within the frame and the spatial gap between objects and satellite cameras, it is a challenging task to classify and detect the celestial objects. We propose an Adaptive Thresholding Method (ATM) based segmentation and Back Propagation Neural Network (BPNN) based cosmic object detection including a well-structured series of pre-processing steps designed to enhance segmentation and detection.
Index Terms--Log Transformation, Erosion, Gaussian Filtering, Adaptive Segmentation, Back Propagation Neural Network (BPNN)
## I Introduction
We propose a sequential order of pre-processing steps before segmentation and cosmic object detection: segmentation is based on an adaptive thresholding method, and a back-propagation neural network is used for learning and recognizing celestial objects. Astronomical image processing systems face many obstacles during segmentation and detection. To overcome these obstacles, multiple pre-processing steps are performed on the actual image prior to segmenting the desired object(s).
## II Data
The input of this work is images of celestial objects. The standard data format used in astronomy is FITS, which stands for "Flexible Image Transport System". The Charge Coupled Device (CCD) images obtained from celestial observations are usually stored in FITS format, and so are the input images here. This format is endorsed by NASA and the International Astronomical Union (IAU) and is used for transport, analysis, and archival storage of scientific data sets.
Two datasets are used here for the experiments: one is the Eagle Nebula and the other is Comet ISON. The Eagle Nebula image was captured by the "Faulkes Telescope North"; it is a large image of 1026x1024 pixels. The Eagle Nebula is a cluster of stars discovered by Jean-Philippe in 1745-46 [1]. The second input is Comet ISON, an image of a comet of 1074x1074 pixels, i.e., 1074 pixels in width and 1074 pixels in height.
## III Methodology
The methodology contains different types of steps to segment astronomical images. The steps are as follows-
### _Image Enhancement_
Enhancing an image provides better contrast and more detail compared to the non-enhanced image. Here the image is enhanced by log transformation [3]. The log transformation can be defined by this formula:
\[s=c\,\log(r+1)\]
Where s and r are the pixel values of the output and the input image, c is a constant.
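A minimal sketch of this step in Python, using the astropy library to read FITS data (the file name is a placeholder, and the constant c is chosen here to keep the output in [0, 1]):

```python
import numpy as np
from astropy.io import fits

# Read the FITS image and apply s = c * log(1 + r)
data = fits.getdata("eagle_nebula.fits").astype(np.float64)  # placeholder file
r = data / data.max()              # normalize pixel values to [0, 1]
c = 1.0 / np.log(1.0 + r.max())    # constant chosen so that s stays in [0, 1]
s = c * np.log(1.0 + r)            # enhanced image
```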
### _Morphological Adjustments_
Astronomical images usually contain numerous bright point sources (stars, distant galaxies, etc.). These point sources must first be removed for proper segmentation in the final stage. The removal of such sudden changes in intensity
Figure 1: Eagle Nebula (**Source:** Las Cumbres Obsevatory Global Telescope Network) [2]
Figure 2: Log Transformation
prevents the evolving level-set contour from getting stuck at local regions. Local peak search using a matched filter or a cleaning process [5] is usually used to remove the unresolved point sources. But the multiple passes through the filter required by these processes would diffuse the image and may result in the break-up of the components of the extended sources [4][5]. Erosion, a morphological enhancement process [6], serves this purpose without affecting the shape and structure of the celestial objects to a large extent. We have removed the peak intensities and bright point sources using erosion.
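A sketch of this step with OpenCV (the kernel size and number of iterations are tunable choices, not values from the paper):

```python
import cv2
import numpy as np

img8 = (s * 255).astype(np.uint8)   # `s` is the log-enhanced image from above
kernel = np.ones((3, 3), np.uint8)  # structuring element
eroded = cv2.erode(img8, kernel, iterations=1)  # suppress bright point sources
```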
### _De Noising_
A **Gaussian filter** is a filter whose impulse response is a Gaussian function. Gaussian filters have the property of having no overshoot to a step function input while minimizing the rise and fall time. This behaviour is closely connected to the fact that the Gaussian filter has the minimum possible group delay. It is considered the ideal time-delay filter, just as the sinc is the ideal frequency-domain filter [7]. In one dimension, the Gaussian function is:
\[G(x)=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{x^{2}}{2\sigma^{2}}}\]
Where, \(\sigma\) is the standard deviation of the distribution.
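For illustration, the filtering step can be written with SciPy (the value of \(\sigma\) is a tunable assumption), continuing the pipeline from the erosion step:

```python
from scipy.ndimage import gaussian_filter

# Smooth residual noise with a Gaussian kernel of standard deviation sigma
denoised = gaussian_filter(eroded.astype(float), sigma=1.5)
```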
### _Segmentation_
Here we have used the local adaptive threshold method to segment the astronomical image. Local adaptive thresholding is used in uneven lighting conditions, when a lighter foreground object needs to be segmented from its background. In many lighting situations, shadows or dimming of light cause thresholding problems, as traditional thresholding considers the entire image brightness. Adaptive thresholding performs binary thresholding (i.e., it creates a black and white image) by analysing each pixel with respect to its local neighbourhood and calculating the midrange around the current pixel. This localization allows each pixel to be considered in a more adaptive environment. We focus on the binarization of image documents using a local adaptive thresholding technique because global thresholding methods, such as the one proposed by Otsu [8], try to find a single threshold value for the whole document and may lose a huge amount of data. That is why we have used the local adaptive threshold method.
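A sketch of this step with OpenCV's built-in adaptive thresholding (OpenCV provides local mean and Gaussian-weighted statistics rather than the midrange; the block size and offset below are tunable assumptions):

```python
import cv2
import numpy as np

binary = cv2.adaptiveThreshold(
    np.clip(denoised, 0, 255).astype(np.uint8), 255,
    cv2.ADAPTIVE_THRESH_MEAN_C,  # statistic of the local neighbourhood
    cv2.THRESH_BINARY,
    31,   # blockSize: side of the local window (must be odd)
    2,    # C: constant subtracted from the local statistic
)
```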
### _Object Detection by BPNN_
It is a supervised learning technique with a multi-layer perceptron, first proposed by Paul Werbos in the 1970s and rediscovered by Rumelhart and McClelland in 1986 [9]. It is a training procedure which allows multilayer feed-forward neural networks to be trained.
A layer that is neither input nor output is called a hidden layer. The objective of this neural network method is to successfully classify non-linearly separable data.
There are three layers of this network:
* **Input Layer:** introduces input values into the network; no activation function or other processing is applied.
* **Hidden Layer:** performs classification of features; it works like a black box and feeds its outputs to the output layer.
* **Output Layer:** functions just like the hidden layers; its outputs are passed on to the outside world.
Figure 4: Segmentation by adaptive Thresholding
Figure 5: De noising by Gaussian Filtering
Figure 3: Erosion
Figure 6: architecture of a back propagation neural network
To detect an object, it is necessary to match the pattern. For pattern matching, 10 hidden layers and 7 input nodes with randomly initialized weights are used. We have two sessions: a learning session and a testing session. During these two sessions, three types of samples are used, as follows (a minimal training sketch is given after the list):
1. **Training Samples:** These are presented to the network during training, and the network is adjusted according to its error.
2. **Validation Samples:** These are used to measure network generalization, and to halt training when generalization stops improving.
3. **Testing Sample:** These have no effect on training and so provide an independent measure of network performance during and after training.
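The sketch below is our simplified illustration, not the paper's code: the text specifies 10 hidden layers and 7 input nodes, but for brevity we use a single hidden layer of 10 units and assume two output classes (object vs. background); the three sample types above correspond to the usual train/validation/test split of the `(x, y)` pairs.

```python
import torch
import torch.nn as nn

# Small back-propagation network: 7 inputs -> 10 hidden units -> 2 classes
model = nn.Sequential(
    nn.Linear(7, 10), nn.Sigmoid(),  # hidden layer
    nn.Linear(10, 2),                # output layer
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def train_step(x, y):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                  # error back-propagation
    optimizer.step()
    return loss.item()
```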
## V Conclusions
It is always challenging to process astronomical images. The original images from the telescope archive are rarely available, as high confidentiality is maintained, so a lot of data is missing in our input images. Color images are available, but the images directly captured by telescopes are not. Some parts of the celestial objects are often very faint due to the presence of bright point sources. The images have low contrast due to long distances and disturbances, and the celestial objects lack clear-cut boundaries. As the experimental data is large (one image has 1026x1024 pixels and the other 1074x1074 pixels), the detection accuracy is low and the detection technique takes a long time to execute (more than 1 hour). The detection performance is therefore limited, and the accuracy varies with different data.
|
2304.12985 | Rubik's Optical Neural Networks: Multi-task Learning with Physics-aware
Rotation Architecture | Recently, there are increasing efforts on advancing optical neural networks
(ONNs), which bring significant advantages for machine learning (ML) in terms
of power efficiency, parallelism, and computational speed. With the
considerable benefits in computation speed and energy efficiency, there are
significant interests in leveraging ONNs into medical sensing, security
screening, drug detection, and autonomous driving. However, due to the
challenge of implementing reconfigurability, deploying multi-task learning
(MTL) algorithms on ONNs requires re-building and duplicating the physical
diffractive systems, which significantly degrades the energy and cost
efficiency in practical application scenarios. This work presents a novel ONNs
architecture, namely, \textit{RubikONNs}, which utilizes the physical
properties of optical systems to encode multiple feed-forward functions by
physically rotating the hardware similarly to rotating a \textit{Rubik's Cube}.
To optimize MTL performance on RubikONNs, two domain-specific physics-aware
training algorithms \textit{RotAgg} and \textit{RotSeq} are proposed. Our
experimental results demonstrate more than 4$\times$ improvements in energy and
cost efficiency with marginal accuracy degradation compared to the
state-of-the-art approaches. | Yingjie Li, Weilu Gao, Cunxi Yu | 2023-04-25T16:51:12Z | http://arxiv.org/abs/2304.12985v2 | # Rubik's Optical Neural Networks: Multi-task Learning with Physics-aware Rotation Architecture
###### Abstract
Recently, there are increasing efforts on advancing optical neural networks (ONNs), which bring significant advantages for machine learning (ML) in terms of power efficiency, parallelism, and computational speed. With the considerable benefits in computation speed and energy efficiency, there are significant interests in leveraging ONNs into medical sensing, security screening, drug detection, and autonomous driving. However, due to the challenge of implementing reconfigurability, deploying multi-task learning (MTL) algorithms on ONNs requires re-building and duplicating the physical diffractive systems, which significantly degrades the energy and cost efficiency in practical application scenarios. This work presents a novel ONNs architecture, namely, _RubikONNs_, which utilizes the physical properties of optical systems to encode multiple feed-forward functions by physically rotating the hardware similarly to rotating a _Rubik's Cube_. To optimize MTL performance on RubikONNs, two domain-specific physics-aware training algorithms _RotAgg_ and _RotSeq_ are proposed. Our experimental results demonstrate more than 4\(\times\) improvements in energy and cost efficiency with marginal accuracy degradation compared to the state-of-the-art approaches.
## 1 Introduction
Recently, the use of Deep Neural Networks (DNNs) has shown significant advantages in many applications, including large-scale computer vision, natural language processing, and data mining tasks. However, DNNs have substantial computational and memory requirements, which greatly limit their training and deployment in resource-constrained (e.g., computation, I/O, and memory bounded) environments [18, 19, 20, 21, 22]. More importantly, it has been identified that training large DNN models produces significant amounts of carbon dioxide; e.g., recent studies estimated that 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars, were produced in training a Transformer network [23]. As models grow bigger, their demand for computing increases, as does the carbon footprint produced by those computations. To address these challenges and make the computation more eco-friendly, there has been a significant trend in building novel high-performance DNN platforms, especially the increasing efforts on implementing novel DNNs in the optical domain, i.e., optical neural networks (ONNs) that mimic conventional feed-forward neural network functions using light propagation [15, 24, 25, 26, 27, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]. Unlike directly accelerating conventional DNNs, algorithms for training and deploying ONNs need to be customized in order to precisely represent the full physics of light propagation. Specifically, the equivalent numerical representations of inputs, intermediate results, and propagation functions in the optical domain are complex values and complex-valued functions. Additionally, due to limitations of the underlying physics, implementing reconfigurability and deploying multi-task learning (MTL) algorithms on many ONN systems requires re-building and duplicating the physical hardware systems, which significantly degrades the energy and cost efficiency in practical application scenarios.
This work proposes a novel architecture **RubikONNs**, which utilizes the physical properties of optical systems to encode multiple feed-forward functions by physically rotating the systems similarly as rotating a _Rubik's Cube_. With the realization of MTL in optical systems, the computational carbon footprint can be significantly reduced while maintaining the system performance. The paper is organized as follows: in Section 2, we introduce Diffractive Deep Neural Networks (D\({}^{2}\)NN) and its physical implementations; in Section 3, we first formulate the forward functions in D\({}^{2}\)NN systems for MTL. Furthermore, to optimize the MTL performance of RubikONNs, we propose two novel domain-specific physics-aware training algorithms, **RotAgg** and **RotSeq**; in Section 4, we demonstrate four-task MTL on RubikONNs with implementation cost and energy efficiencies improved more than **4\(\times\)**. Finally, a comprehensive RubikONNs design space exploration analysis and explainability are provided to offer concrete design methodologies for practical uses.
## 2 Background
**Diffractive Deep Neural Networks (D\({}^{2}\)NN)** - Recently, there are increasing efforts on optical neural networks and optical-computing-based DNN hardware, which bring significant advantages for machine learning systems in terms of their power efficiency, parallelism, and computational speed, demonstrated at various optical computing systems by [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. Among them are free-space _diffractive deep neural networks_ (D\({}^{2}\)NNs), which are based on light diffraction and the phase modulation of the light signal provided by diffractive layers (L1-L5 in Figure 1), featuring millions of neurons in each layer interconnected with neurons in neighboring layers. This ultrahigh density and parallelism give the system fast, high-throughput computing capability. Additionally, the D\({}^{2}\)NN system is implemented with passive optical devices, which function without requiring additional power to maintain, thus significantly reducing the power consumed for solving deep learning tasks, with orders-of-magnitude energy efficiency advantages over low-power digital devices ([12, 13, 14, 15, 16, 17, 18, 19, 20]). More importantly, [12, 13, 14, 15, 16, 17, 18, 19] demonstrated that diffractive propagation controlled by phase modulation is differentiable, which means that such parameters can be optimized with conventional backpropagation algorithms using a conventional automatic differentiation (autograd) engine implemented in modern frameworks such as PyTorch and TensorFlow.
In conventional DNNs, forward propagation is computed by generating the feature representation with floating-point weights associated with each neural layer. While in D\({}^{2}\)NNs, such floating-point weights are encoded in the phase modulation of each neuron in diffractive phase masks, which is acquired by and multiplied onto the light wavefunction as it propagates through the neuron. Similar to conventional DNNs, the final output class is predicted based on generating labels according to a given one-hot representation, e.g., the max energy reading over the output signals of the last layer observed by detectors. Specific examples of the system at training and inference can be found in the next section.
Once the training of a D\({}^{2}\)NN system is completed on the digital computation platform, the trained D\({}^{2}\)NN is deployed on the optical platform with non-configurable fabricated phase masks such as 3D printed phase masks, as diffractive layers for all-optical inference. Thus, D\({}^{2}\)NNs lack reconfigurability for the weight parameters, which will bring significant energy and system cost overhead in practical application scenarios, especially for MTL.
## 3 Approach
To overcome the aforementioned limitations of existing D\({}^{2}\)NN systems, we propose a novel neural architecture, namely **RubikONNs**, that utilizes the physical rotation properties of existing D\({}^{2}\)NN systems to realize MTL with few overheads, in which case, a single-task D\({}^{2}\)NN system can be used to encode multiple feed-forward functions by rotating its underlying structure just like rotating a _Rubik's Cube_.
### Forward function for a single-task D\({}^{2}\)NN
D\({}^{2}\)NN system is designed with three major components (Figure 1): (1) laser source encoding the input images, (2) diffractive layers encoding trainable phase modulation, and (3) detectors capturing the output of the forward propagation. Specifically, the input image is first encoded with the laser
Figure 2: Example of a four-task RubikONNs rotation architecture.
source. The information-encoded light signal is diffracted in the free space between diffractive layers and modulated via phase modulation at each layer. Finally, the diffraction pattern after light propagation w.r.t light intensity distribution will be captured at the detector plane for predictions.
From the beginning of the system, the input information (e.g., an image) is encoded on the coherent light signal from the laser source; its wavefunction can be expressed as \(f^{0}(x_{0},y_{0})\). The wavefunction after light diffraction from the input plane to the first diffractive layer over diffraction distance \(z\) can be seen as the summation of the outputs at the input plane, i.e.,
\[f^{1}(x,y)=\iint f^{0}(x_{0},y_{0})h(x-x_{0},y-y_{0},z)dx_{0}dy_{0} \tag{1}\]
where \((x,y)\) is the coordinate on the receiver plane, i.e., the first diffractive layer, \(h\) is the impulse response function of free space. Here we use Fresnel approximation, thus the impulse response function \(h\) is
\[h(x,y,z)=\frac{\exp(ikz)}{i\lambda z}\exp\{\frac{ik}{2z}(x^{2}+y^{2})\} \tag{2}\]
where \(i=\sqrt{-1}\), \(\lambda\) is the wavelength of the laser source, \(k=2\pi/\lambda\) is free-space wavenumber.
Equation 1 can be calculated with a spectral algorithm, where we employ the Fast Fourier Transform (FFT) for fast and differentiable computation, i.e.,
\[U^{1}(\alpha,\beta)=U^{0}(\alpha,\beta)H(\alpha,\beta,z) \tag{3}\]
where \(U\) and \(H\) are the Fourier transformation of \(f\) and \(h\) respectively.
After light diffraction, the wavefunction \(U^{1}(\alpha,\beta)\) resulting from Equation 3 is first transformed to the time domain with the inverse FFT (iFFT). Then the phase modulation \(W(x,y)\) provided by the diffractive layer is applied to the light wavefunction in the time domain by matrix multiplication, i.e.,
\[f^{2}(x,y)=\text{iFFT}(U^{1}(\alpha,\beta))\times W_{1}(x,y) \tag{4}\]
where \(W_{1}(x,y)\) is the phase modulation in the first diffractive layer, \(f^{2}(x,y)\) is then the input light wavefunction for the light diffraction between the first diffractive layer and the second diffractive layer.
We enclose one computation round of light diffraction and phase modulation at one diffractive layer as a computation module named **DiffMod**, i.e.,
\[\text{DiffMod}(f(x,y),W)=L(f(x,y),z)\times W(x,y) \tag{5}\]
where \(f(x,y)\) is the input wavefunction, \(W(x,y)\) is the phase modulation, \(L(f(x,y),z)\) is the wavefunction after light diffraction over a constant distance \(z\) in time domain, i.e., \(\text{iFFT}(U(\alpha,\beta))\) in Equation 4.
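A minimal differentiable sketch of DiffMod with PyTorch's FFT routines (our illustration, not the paper's code; `H` is the precomputed transfer function of Equation 2 on the sampling grid, and padding/sampling details are omitted):

```python
import torch

def diff_mod(f, W, H):
    """One DiffMod step (Equation 5): free-space diffraction computed in
    the Fourier domain, followed by phase modulation.

    f: complex input wavefunction (n x n); W: complex phase-modulation
    matrix of the layer; H: precomputed Fresnel transfer function
    H(alpha, beta, z) on the same grid.
    """
    U = torch.fft.fft2(f)                 # to the spectral domain (Equation 3)
    diffracted = torch.fft.ifft2(U * H)   # L(f, z) back in the time domain
    return diffracted * W                 # apply the layer's phase modulation
```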
As a result, in a multiple diffractive layer constructed D\({}^{2}\)NN system, the forward function can be computed iteratively for the stacked diffractive layers. For example, for the 5-layer system shown in Figure 1, the forward function can be expressed as,
\[\begin{split} I(f^{0}(x,y),W)=\text{DiffMod}(\text{DiffMod}( \text{DiffMod}\\ (\text{DiffMod}(f^{0}(x,y),W_{1}(x,y)),W_{2}(x,y)),\\ W_{3}(x,y)),W_{4}(x,y)),W_{5}(x,y))\end{split} \tag{6}\]
where \(f^{0}(x,y)\) is the input wavefunction to the system and \(W_{1-5}\) is phase modulation provided at each diffractive layer.
The final diffraction pattern w.r.t the light intensity \(I\) in Equation 6 is projected to the detector plane. We can design arbitrary detector patterns for the classes in different tasks by setting, according to the user's definition, the coordinates of the detector region on the full detector plane for each class. For example, for MNIST datasets, the output plane is divided into **ten** detector regions to mimic the output of conventional neural networks for predicting **ten** classes. The final class is produced by the argmax function with the ten intensity sums of the ten detector regions as input. For example, in Figure 1, based on the label indices of the ten detector regions for image "2", we can see that the 3\({}^{rd}\) region on the first row has the highest energy. Then, the predicted class is class "2". Similarly, the predicted classes "1", "8", and "9" of the other three datasets can be generated by applying argmax on the detector. With the one-hot represented ground truth class \(t\), the loss function \(L\) can be acquired with **MSELoss** as,
\[L=\parallel\text{Softmax}(I)-t\parallel_{2} \tag{7}\]
Thus, the whole system is designed to be differentiable and compatible with conventional automatic differential engines.
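A sketch of the detector readout and loss (the region layout is user-defined and the names are ours; applying the softmax over the region sums is one reading of Equation 7):

```python
import torch
import torch.nn.functional as F

def detector_readout(intensity, regions, target=None):
    """Sum the output intensity over each detector region (one region
    per class) and predict via argmax. `regions` is a user-defined list
    of (row_slice, col_slice) pairs on the detector plane."""
    sums = torch.stack([intensity[rs, cs].sum() for rs, cs in regions])
    pred = torch.argmax(sums)
    loss = None
    if target is not None:  # one-hot ground truth, as in Equation (7)
        loss = F.mse_loss(torch.softmax(sums, dim=0), target)
    return pred, loss
```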
### RubikONNs Architecture for MTL
To deal with multiple tasks with minimum system overhead, an ideal system should be designed to encode different forward functions without changing the single-task system. Note that the diffractive layers are mostly designed with 3D printed materials, such that the phase parameters (weights) carried by these layers are non-reconfigurable after 3D printing. However, as demonstrated by [18, 19], the layers are portable in D\({}^{2}\)NNs and they are in square shapes. This means that we are able to rotate each layer by clockwise \(90^{\circ}\), \(180^{\circ}\), or \(270^{\circ}\), and place the layer back in the system without any other changes. While each layer carries specific trained phase parameters, by rotating one or multiple layers, the forward function will be different since the weights of the model are changed. In the optical domain, this means that the modulation of the light changes accordingly w.r.t specific rotation patterns. This offers the main motivation of designing RubikONNs that aims to enable MTL in existing single-task D\({}^{2}\)NN systems. As a result, RubikONNs enables MTL by simply **(1)** pulling out the layer, **(2)** rotating it to the specific rotation pattern as designed, and **(3)** plugging the layer back to the original location, without changing the rest of the system.
To illustrate the rotation architecture RubikONNs, an example of encoding four tasks with the last two layers (L4, L5) as _rotation layers_ is shown in Figure 2. The designed rotation type applied to these layers is the _rotation angle_ (clockwise \(90^{\circ}\) in this example). To summarize the forward function of the RubikONNs architecture, we introduce two Boolean variables to indicate the rotation patterns of the L4 and L5 layers. When \(s_{0}=1\), L4 is rotated clockwise \(90^{\circ}\); otherwise, L4 remains unchanged; similarly, \(s_{1}\) indicates the rotation pattern of L5. Thus, for the first task, \(s_{0}s_{1}=00\) and both layers are unchanged; for the second task, \(s_{0}s_{1}=01\) and L5 is rotated clockwise \(90^{\circ}\); for the third task, \(s_{0}s_{1}=11\) and both layers are rotated clockwise \(90^{\circ}\); for the fourth task, \(s_{0}s_{1}=10\) and L4 is rotated clockwise \(90^{\circ}\). The forward function is expressed as follows:
\[I=\text{DiffMod}\Big(\text{DiffMod}\Big(\mathop{\text{DiffMod}}_{1\leq i\leq 3}(f^{0},W_{i}),\,W_{4}^{\prime}\Big),\,W_{5}^{\prime}\Big),\quad(W_{4}^{\prime},W_{5}^{\prime})=\begin{cases}(W_{4},\,W_{5}),&s_{0}s_{1}=00\\ (W_{4},\,\text{Rot}(W_{5})),&s_{0}s_{1}=01\\ (\text{Rot}(W_{4}),\,\text{Rot}(W_{5})),&s_{0}s_{1}=11\\ (\text{Rot}(W_{4}),\,W_{5}),&s_{0}s_{1}=10\end{cases} \tag{8}\]
where \(\mathop{\text{DiffMod}}_{1\leq i\leq 3}(f^{0},W_{i})\) denotes the iterated application of DiffMod through the first three (shared) layers.
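As a sketch of how Equation 8 selects among the four forward functions, the snippet below conditions the shared DiffMod pipeline on the two rotation flags; `diffmod` refers to the sketch above, and realizing Rot(·) with `torch.rot90` is our assumption.

```python
import torch

def rubik_forward(field, phases, s0: int, s1: int):
    """phases: list of five phase masks W1..W5; s0/s1 flag 90-degree clockwise
    rotations of the 4th/5th layers, as in Equation 8."""
    for w in phases[:3]:                              # shared layers L1-L3, never rotated
        field = diffmod(field, w)
    # torch.rot90 with k=3 gives three counter-clockwise quarter-turns = 90 deg clockwise
    w4 = torch.rot90(phases[3], 3, dims=(-2, -1)) if s0 else phases[3]
    w5 = torch.rot90(phases[4], 3, dims=(-2, -1)) if s1 else phases[4]
    return diffmod(diffmod(field, w4), w5)
```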
We take \(s_{0}s_{1}=01\) as an example for the quantitative analysis of the rotation in the system, where only the fifth layer is rotated by 90\({}^{\circ}\). Thus, the forward function is
\[I_{01}=\text{DiffMod}\Big(\mathop{\text{DiffMod}}_{1\leq i\leq 4}(f^{0},W_{i}),\,\text{Rot}(W_{5})\Big) \tag{9}\]
where \(\text{DiffMod}(f(x,y),W)=L(f(x,y),z)\times W(x,y)\) with \(x\in[1,n]\), \(y\in[1,n]\), as shown in Equation 5, assuming \(W_{5}\) is trained as \(W_{n,n}\) and the diffraction result is \(L_{n,n}\), i.e.,
\[W_{n,n}=\begin{pmatrix}w_{1}^{1}&w_{2}^{1}&\cdots&w_{n}^{1}\\ w_{1}^{2}&w_{2}^{2}&\cdots&w_{n}^{2}\\ \vdots&\vdots&\ddots&\vdots\\ w_{1}^{n}&w_{2}^{n}&\cdots&w_{n}^{n}\end{pmatrix}\quad L_{n,n}=\begin{pmatrix}l_{1}^{1}&l_{2}^{1}&\cdots&l_{n}^{1}\\ l_{1}^{2}&l_{2}^{2}&\cdots&l_{n}^{2}\\ \vdots&\vdots&\ddots&\vdots\\ l_{1}^{n}&l_{2}^{n}&\cdots&l_{n}^{n}\end{pmatrix}\]
Thus, the wavefunction after \(\text{DiffMod}(f,W)\) **without rotation**, i.e., \(I_{00}\), is calculated as
\[I_{00}=\begin{pmatrix}l_{1}^{1}*w_{1}^{1}&l_{2}^{1}*w_{2}^{1}&\cdots&l_{n}^{1}*w_{n}^{1}\\ l_{1}^{2}*w_{1}^{2}&l_{2}^{2}*w_{2}^{2}&\cdots&l_{n}^{2}*w_{n}^{2}\\ \vdots&\vdots&\ddots&\vdots\\ l_{1}^{n}*w_{1}^{n}&l_{2}^{n}*w_{2}^{n}&\cdots&l_{n}^{n}*w_{n}^{n}\end{pmatrix}\]
When the rotation pattern applied to \(W_{5}\) is \(90^{\circ}\), the corresponding \(\text{Rot}(W_{5})\) is
\[\text{Rot}(W_{5})=\begin{pmatrix}w_{1}^{n}&\cdots&w_{1}^{2}&w_{1}^{1}\\ w_{2}^{n}&\cdots&w_{2}^{2}&w_{2}^{1}\\ \vdots&\ddots&\vdots&\vdots\\ w_{n}^{n}&\cdots&w_{n}^{2}&w_{n}^{1}\end{pmatrix}\]
While \(L(f,z)\) remains the same, the corresponding rotated \(I_{01}\) is thus altered to
\[I_{01}=\begin{pmatrix}l_{1}^{1}*w_{1}^{n}&\cdots&l_{n-1}^{1}*w_{1}^{2}&l_{n}^{1}*w_{1}^{1}\\ l_{1}^{2}*w_{2}^{n}&\cdots&l_{n-1}^{2}*w_{2}^{2}&l_{n}^{2}*w_{2}^{1}\\ \vdots&\ddots&\vdots&\vdots\\ l_{1}^{n}*w_{n}^{n}&\cdots&l_{n-1}^{n}*w_{n}^{2}&l_{n}^{n}*w_{n}^{1}\end{pmatrix}\]
As a result, by rotating the weight matrix \(W\) (phase modulation) to different angles, the information-carrying light signal is modulated with different applied phase modulations w.r.t different datasets. The full propagation figures in Figure 4 show the light patterns when different rotation patterns are applied to the same input light signal.
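A tiny NumPy check of the element-wise effect derived above: rotating the mask 90\({}^{\circ}\) clockwise re-pairs each diffracted sample \(l_{j}^{i}\) with \(w_{i}^{n+1-j}\), exactly as in the \(I_{01}\) matrix (the integer stand-in values are ours).

```python
import numpy as np

n = 4
L = np.arange(1, n * n + 1).reshape(n, n)        # stand-in for the diffracted field L_{n,n}
W = 10 * np.arange(1, n * n + 1).reshape(n, n)   # stand-in for the phase mask W_{n,n}
I00 = L * W                                      # un-rotated element-wise modulation, I_00
I01 = L * np.rot90(W, k=-1)                      # k=-1: one clockwise quarter-turn, I_01
assert I01[0, -1] == L[0, -1] * W[0, 0]          # top-right sample now pairs with w_1^1
```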
Note that the proposed architecture can go beyond four tasks by adding more rotation patterns. For example, each layer has four rotation states, \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\), or \(270^{\circ}\), such that the maximum number of different forward functions is \(1+3\times 4=13\) with two rotation layers; when another rotation layer is added, i.e., three rotation layers with four rotation states each, the system can handle up to 29 tasks. Discussions about the choices of rotation layers and different rotation angles are included in Section 4.
While the RubikONNs architecture enables zero-overhead MTL on D\({}^{2}\)NN systems, training algorithms that are aware of the physical rotations and light diffraction do not yet exist. Specifically, such training algorithms should be able to learn structural weight parameters w.r.t specific rotation patterns and given datasets. Thus, we introduce two novel MTL algorithms for training RubikONNs, i.e., the _rotated aggregation algorithm (RotAgg)_ and the _rotated sequence algorithm (RotSeq)_.
```
Result: W = {W_C^{1,2,3}, W_R^4, W_R^5} for the rotation model
1   Initialization: weights W_0 = {W_C0^{1,2,3}, W_R0^4, W_R0^5} for the model
2   while i <= training iterations do
3       W_1^i, W_2^i, W_3^i, W_4^i <- W_0^i                                 // four virtual models
4       W_1^{i+1} <- W_1^i - eta * grad_D1(W_1^i)                           // task 1 (D1) w/o rotation
5       rotate W_R^5 in W_2^i;  W_2^{i+1} <- W_2^i - eta * grad_D2(W_2^i)   // task 2 (D2), 5th layer rotated 90 deg
6       rotate W_R^4, W_R^5 in W_3^i;  W_3^{i+1} <- W_3^i - eta * grad_D3(W_3^i)   // task 3 (D3), both layers rotated
7       rotate W_R^4 in W_4^i;  W_4^{i+1} <- W_4^i - eta * grad_D4(W_4^i)   // task 4 (D4), 4th layer rotated 90 deg
8       rotate-back the rotation layers of W_2^{i+1}, W_3^{i+1}, W_4^{i+1}  // reversely rotating virtual models
9       W_0^{i+1} <- (W_1^{i+1} + W_2^{i+1} + W_3^{i+1} + W_4^{i+1}) / 4    // aggregation
10      i <- i + 1
11  end while
```
**Algorithm 1** Rotated Aggregation Algorithm (RotAgg) for training RubikONNs.
### Algorithm 1: Rotated Aggregation Training
The Rotated Aggregation Training (**RotAgg**) algorithm shown in Alg. 1 updates the parameters of RubikONNs by aggregating the average of the gradients generated from all tasks, while the gradient of each task is computed with the corresponding rotations included in every training iteration. Therefore, the training iterations are fully aware of the physical rotations of the RubikONNs architecture. We illustrate RotAgg using the same rotation designs shown in Figure 2, where the first three layers are shared layers, denoted as \(W_{C}^{1,2,3}\), that are not rotated during training and inference, and the rotation layers are denoted as \(W_{R}^{4}\) and \(W_{R}^{5}\). First, the RotAgg algorithm initializes one model for aggregation and four virtual models to store temporary updates w.r.t specific rotation patterns and datasets (line 1). In every training iteration, RotAgg first updates the parameters of the four virtual models, \(W_{1}^{i}\) to \(W_{4}^{i}\), one per task (lines 3-7). For example, the first update is performed for task 1 w.r.t dataset D1 (line 4), where the model is initialized from \(W_{1}^{i}\) with the rotation pattern \([0^{\circ},0^{\circ}]\) shown in Figure 2. The second update is then performed w.r.t task 2 and dataset D2, where the virtual model \(W_{2}^{i}\) is initialized by rotating the parameters of the rotation layers (L4 and L5) with the rotation pattern \([0^{\circ},90^{\circ}]\) (line 5). Similarly, the virtual-model updates for task 3 (D3) and task 4 (D4) are performed. Before the final weight aggregation, the four virtual models are reverse-rotated back to the initial position (line 8). Finally, RotAgg aggregates the average of the weights from all four virtual models (line 9) and returns \(W_{0}^{i+1}\) for the next iteration or as the final model.
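Below is a minimal PyTorch-style sketch of one RotAgg iteration under the rotation patterns of Figure 2, reusing the `rubik_forward` sketch above. Rather than rotating stored weights and rotating them back (lines 5-8 of Alg. 1), this sketch applies the rotations functionally inside the forward pass, so every virtual model stays in the un-rotated reference frame and the final averaging (line 9) is direct; the loader handling and plain gradient step are simplifying assumptions.

```python
import torch

ROTATIONS = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # (s0, s1) per task, per Figure 2

def rotagg_step(model_phases, loaders, loss_fn, lr=0.01):
    """One RotAgg iteration: independent virtual updates for D1-D4, then averaging."""
    virtual = []
    for task, loader in enumerate(loaders):
        phases = [p.clone().detach().requires_grad_(True) for p in model_phases]
        s0, s1 = ROTATIONS[task]
        x, target = next(iter(loader))
        loss_fn(rubik_forward(x, phases, s0, s1), target).backward()
        with torch.no_grad():
            virtual.append([p - lr * p.grad for p in phases])   # virtual model for this task
    # aggregate the four virtual models (line 9 of Alg. 1)
    return [sum(v[i] for v in virtual) / len(virtual) for i in range(len(model_phases))]
```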
### Algorithm 2: Sequential Rotation Training
The second training algorithm, Sequential Rotation Training (**RotSeq**), shown in Alg. 2, updates the parameters of RubikONNs by sequentially updating the model w.r.t a given sequence of task orders, in order to incorporate the physical rotations in the training process. Here, we illustrate Alg. 2 using a specific order of updates, i.e., D1\(\rightarrow\)D2\(\rightarrow\)D3\(\rightarrow\)D4. In this example, for the first task, the model is updated w.r.t dataset D1 without rotating the rotation layers (line 4). Unlike the RotAgg algorithm, the model is directly updated to \(W^{i}\) after the training of the first task. Next, the weights are rotated with the rotation pattern \([0^{\circ},90^{\circ}]\), i.e., rotating \(W_{R}^{5}\) clockwise \(90^{\circ}\), before the gradient update for task 2 (line 5). Note that the model rotated before training for task 2 has already been updated w.r.t D1. Similarly, the model is trained in the same sequential updating fashion according to the rotation patterns designed for task 3 (line 7) and task 4 (line 9). Note that the inner-loop update order can be fixed for all iterations or dynamically changed throughout the training process. Therefore, in addition to other training parameters, RotSeq can also be impacted by the inner-loop update order. In Section 4, a comprehensive analysis of the update orders is provided.
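A matching sketch of one RotSeq iteration with the fixed order D1\(\rightarrow\)D2\(\rightarrow\)D3\(\rightarrow\)D4, sharing names with the RotAgg sketch above; here each task's gradient step is applied to the running model in sequence instead of being averaged.

```python
import torch

def rotseq_step(model_phases, loaders, loss_fn, lr=0.01, order=(0, 1, 2, 3)):
    """One RotSeq iteration: sequential per-task updates of a single model."""
    for task in order:
        phases = [p.clone().detach().requires_grad_(True) for p in model_phases]
        s0, s1 = ROTATIONS[task]
        x, target = next(iter(loaders[task]))
        loss_fn(rubik_forward(x, phases, s0, s1), target).backward()
        with torch.no_grad():
            model_phases = [p - lr * p.grad for p in phases]   # model carried to next task
    return model_phases
```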
## 4 Results
**System Setup** - The model explored in our experiments is designed with five diffractive layers, as shown in Figure 1. The system size is set to \(200\times 200\), i.e., both the diffractive layers and the total detector plane are \(200\times 200\). The input image, originally \(28\times 28\), is enlarged to \(200\times 200\) and encoded on the laser light signal with _wavelength_ = 532 \(nm\). The physical distances between layers, from the first layer to the source, and from the final layer to the detector are all set to \(30\)\(cm\). The architecture is designed with the rotation patterns shown in Figure 2. The detector collects the light intensity of ten pre-defined regions for the ten classes, each of size \(20\times 20\) (Figure 1), where the intensity sums of these ten regions form a 1\(\times\)10 vector in float32 type. The final prediction is then generated using argmax.
**Training Parameters and Datasets** - To evaluate the proposed RubikONNs architecture and the RotAgg and RotSeq training algorithms, we select four public image classification datasets: 1) _MNIST-10_ (MNIST) [14], 2) _Fashion-MNIST_ (FMNIST) [13], 3) _Kuzushiji-MNIST_ (KMNIST) [12], and 4) _Extension-MNIST-Letters_ (EMNIST) [12], an extension of MNIST to handwritten letters. Specifically, for EMNIST, we customize the dataset to contain the first ten classes (i.e., A-J) to match the D\({}^{2}\)NN physical system, with 48,000 training examples and 8,000 testing examples. In the rest of this section, the total number of training iterations of all experiments is set to 150. Within each training iteration, each dataset is trained for 1 epoch. The learning rate is set to \(0.01\) for all experiments across all four datasets using Adam. The implementations are constructed using PyTorch v1.5.1, and all experiments are conducted on an Nvidia 2080 Ti GPU.
### Evaluations of RubikONNs with RotAgg and RotSeq Algorithms
**RotSeq and training permutations, and RotAgg** - We first evaluate the RotSeq algorithm (Alg. 2) on MTL using the four selected datasets. As discussed in Section 3, the performance of RotSeq can vary with different gradient update orders (i.e., lines 4-10 in Alg. 2). Therefore, we evaluate RotSeq with four different permutations of the gradient update sequence, shown in the second column of Table 1. With such recurring permutations of the training order, each task is trained at each position in the order. First, for each dataset, we can see that RotSeq offers a small accuracy boost for the task/dataset used as the last gradient update in a RotSeq training iteration. This is because RotSeq (Alg. 2) updates the parameters in a given sequence over all training tasks, and the testing accuracy is obtained right after the gradient updates of the last task. Second, when a dataset is trained at the beginning of each training iteration (the first task in the training sequence), the prediction performance on this task might slightly degrade. For example, the MNIST accuracy of the model trained with the D1 \(\rightarrow\) D2 \(\rightarrow\) D3 \(\rightarrow\) D4 sequence is 0.0006/0.0006/0.0025 lower than with the other three permutations.
As for the comparison between the RotAgg and RotSeq rotation training algorithms, the training loss curves for the two algorithms are shown in Figure 3. Both algorithms converge efficiently and produce similarly decent accuracy, while the training loss for Algorithm 2, which trains the rotation MTL system with sequential datasets, shows more fluctuations. As a result, the model trained with the RotAgg algorithm shows better overall performance and robustness, since its training is not impacted by the order of gradient updates. Instead, RotAgg averages the gradients obtained independently for all tasks. The advantages of RotAgg can be summarized in two points: **(1)** its training hyperparameter space is much more limited than RotSeq's, since its performance is not influenced by the gradient update order; **(2)** the algorithm is expected to be more robust than RotSeq, as RotSeq has a slight training bias w.r.t the gradient update order. While RotSeq performs similarly to RotAgg on these four datasets, the biased training characteristic of RotSeq can potentially require more exploration of the training setup. Thus, in the rest of the results section, we use RotAgg as the default algorithm for the RubikONNs architecture exploration analysis.
**Prediction performance comparisons** - To fully demonstrate the effectiveness of the proposed approaches, we first compare the prediction performance with two existing approaches. First, a straightforward method to enable MTL on a fixed single-task D\({}^{2}\)NNs architecture is to simply train a D\({}^{2}\)NN while merging the four datasets into one. Thus, we implement a baseline algorithm, namely _BaselineMTL_, by extending the approach proposed by [11], where the training dataset consists of fully shuffled training samples from all four datasets; its depth is set to five and its system size to \(200\times 200\), the same setup as our rotation system. The evaluation result of this baseline algorithm is shown in the last row of Table 1. Next, we compare our approaches to a dedicated MTL D\({}^{2}\)NNs architecture. Specifically, [10] proposes a novel D\({}^{2}\)NN architecture that utilizes transfer learning concepts from conventional neural networks, which includes shared diffractive layers (shared weights) and independent diffractive layers at the output stage for each task. To make a fair comparison, we extend that architecture to deal with four tasks, setting three shared diffractive layers and two independent diffractive layers in each channel for the four tasks, with the same system size (\(200\times 200\)) as our system.
As shown in Table 1, by utilizing the physical rotation properties with the proposed training algorithms, RubikONNs offers better prediction accuracy for all datasets. With RotAgg and RotSeq, RubikONNs performs significantly better than both baseline approaches. For example, with the RotAgg algorithm, our approach offers about 2.5% accuracy increases for MNIST and FMNIST, 6.5% for EMNIST, and 8.4% for KMNIST, compared to _BaselineMTL_ [11]; compared to [10], RotAgg offers 3.5% accuracy increases on EMNIST and performs similarly on the other three tasks. This demonstrates that by incorporating physical rotations into the D\({}^{2}\)NN architecture, RubikONNs offers clear prediction improvements over other approaches, while system cost, energy consumption, and complexity are not yet included in these comparisons.
| Algorithm | Permutation | MNIST (D1) | FMNIST (D2) | KMNIST (D3) | EMNIST (D4) |
|---|---|---|---|---|---|
| RotSeq (Alg. 2) | D1 \(\rightarrow\) D2 \(\rightarrow\) D3 \(\rightarrow\) **D4** | 0.9533 | 0.8384 | 0.8275 | **0.8885** |
| RotSeq (Alg. 2) | D4 \(\rightarrow\) D1 \(\rightarrow\) D2 \(\rightarrow\) **D3** | 0.9539 | 0.8394 | **0.8353** | 0.8811 |
| RotSeq (Alg. 2) | D3 \(\rightarrow\) D4 \(\rightarrow\) D1 \(\rightarrow\) **D2** | 0.9539 | **0.8430** | 0.8270 | 0.8824 |
| RotSeq (Alg. 2) | D2 \(\rightarrow\) D3 \(\rightarrow\) D4 \(\rightarrow\) **D1** | **0.9558** | 0.8386 | 0.8272 | 0.8834 |
| RotAgg (Alg. 1) | n/a | 0.9524 | 0.8412 | 0.8272 | 0.8819 |
| [10] | n/a | 0.9532 | 0.8370 | 0.8277 | 0.8464 |
| BaselineMTL | n/a | 0.9262 | 0.8176 | 0.7430 | 0.8174 |

Table 1: Evaluations of prediction performance on four-task multi-task learning using the MNIST (D1), FMNIST (D2), KMNIST (D3), and EMNIST (D4) datasets, optimized with the proposed RotSeq and RotAgg algorithms.
Figure 3: Training loss curves with 150 epochs for two algorithms.
**Accuracy-Efficiency Comparison** - To fully evaluate the efficiency of the models regarding system cost, complexity, and energy efficiency, we introduce an accuracy-cost evaluation metric, where the hardware cost is the sum of the diffractive layer cost and the detector cost\({}^{1}\). In Table 2, the single-task cost metric is set as the baseline (unit 1), and the improvement of the architectures is calculated using Equation 10. Note that in Table 2, the baseline results are collected using a single-task implementation with five layers and a \(200\times 200\) system size, and our results are generated using the RotAgg algorithm. We can see that our approach offers **more than 4.0\(\times\) and 2.0\(\times\) hardware cost efficiency improvements** compared to [11] and [12], respectively. Regarding energy efficiency, we evaluate the power consumption per task. Our approach demonstrates **2.7\(\times\) and 5.3\(\times\) energy efficiency improvements** compared to [11] and [12], respectively. Note that the power consumption of DONNs is orders of magnitude lower than that of conventional digital platforms; thus, we compare only to DONN baselines in this work, since the advantages of DONNs over conventional DNN hardware have already been demonstrated.
Footnote 1: The layer fabrication cost is \(\sim\)$100 and detector cost is $1,500 - $10,000. We formulate the cost of $100 as unit 1, thus, the layer cost for a 5-layer ONN is 5 and one detector cost is 10 for the cost efficiency estimation.
\[\text{Acc-Efficiency Metric}=\frac{Acc_{\text{MTL}}}{Acc_{\text{baseline}}}\cdot\frac{Cost_{\text{baseline}}}{Cost_{\text{MTL}}};\quad Cost=\#.\text{ Detectors or }\mu W\text{/fps/task} \tag{10}\]
### Design Space of RubikONNs Architecture
With the proposed architecture and training algorithms, the rotation architecture can be designed in many different variants. Specifically, the rotation angles of the rotation layers and the indices of the layers to be rotated can be chosen independently of all other system and algorithm specifications. For example, instead of rotating the layers clockwise \(90^{\circ}\), the layers can also be rotated \(180^{\circ}\) or \(270^{\circ}\) (\(-90^{\circ}\)). Similarly, the architecture can also be designed by selecting layers other than the 4\({}^{th}\) and 5\({}^{th}\) layers to be rotated. Thus, in this section, we provide an experimental analysis of other variants of the proposed architecture by evaluating different rotation angles and various rotation layer selections. To limit the exploration space on the algorithm side, we explore the rotation angles and rotation layers using only the **RotAgg** algorithm.
**Analysis of different rotation angles** - Since each diffractive layer can rotate clockwise \(90^{\circ}\), \(180^{\circ}\), or \(270^{\circ}\) (\(-90^{\circ}\)), the rotation angle can be chosen independently from layer to layer, e.g., rotating the 4\({}^{th}\) layer \(90^{\circ}\) and the 5\({}^{th}\) layer \(180^{\circ}\). To evaluate the impact of rotation angles, the experiments shown in Table 3 are conducted with a fixed selection of rotation layers, i.e., the 4\({}^{th}\) and 5\({}^{th}\) layers. Table 3 shows the accuracy on the four datasets for models trained with RotAgg when different rotation angles are applied to the last two layers. Specifically, we evaluate two settings: (1) the same rotation angle for both layers; or (2) different rotation angles for the two layers. For example, (\(90^{\circ}\), \(180^{\circ}\)) means that if the 4\({}^{th}\) layer is designed to be rotated for a given task, it rotates \(90^{\circ}\), and the 5\({}^{th}\) layer is rotated \(180^{\circ}\) if needed. In general, RubikONNs shows only small accuracy fluctuations across rotation angles.
**Analysis of rotated layer selections** - With the number of tasks fixed at 4 and each rotation layer rotating only clockwise \(90^{\circ}\), the total number of layer selections is \(\mathbf{C}_{5}^{2}=10\). According to studies of conventional neural networks, the layers close to the inputs are usually most important for feature extraction, while the layers close to the outputs are crucial for generating the final prediction class. Thus, we evaluate three combinations: (1) the last two layers, (2) the first two layers, and (3) the first and last layers. The results are shown in Table 4. We can see that (1) the models trained with the RotAgg algorithm perform almost the same regardless of which layers are selected as rotation layers, and (2) including the last (5\({}^{th}\)) layer among the rotation layers performs slightly better on average.
In summary, the results in Table 3 and Table 4 suggest the following: **(1)** the rotation layers are preferably selected close to the output; **(2)** the prediction performance is not restricted to specific rotation angles, which offers the possibility of encoding more forward functions and is the key to enabling a larger number of tasks for MTL.
### RubikONNs MTL Explainability
To understand the impact of rotations on MTL, we measure the internal propagation of RubikONNs between the source, the layers, and the detectors. Specifically, we measure the intensity of the light in RubikONNs at the inference phase, as shown in Figure 4. The visualizations of the forward propagation in Figure 4 are organized by applying the same image from one dataset under all rotation patterns, following the designed rotation patterns shown in Figure 2. It is known that in DNNs, layers close to the input focus on extracting features, while layers close to the output focus on finalizing the predictions using the extracted features. The intuition behind the RubikONNs architecture is similar, and this has been confirmed by the propagation measurements. For example, in Figure 4, the input image is from the MNIST dataset, and four complete propagation measurements are included w.r.t the rotation patterns for the MNIST, FMNIST, KMNIST, and EMNIST tasks, respectively. We can see that the outputs of the first three layers are identical in all four cases, since the first three layers are not rotation layers. Differences in the forward propagation appear starting from the 4\({}^{th}\) layer, which is rotated clockwise \(90^{\circ}\) for the MNIST and KMNIST tasks and remains un-rotated for the FMNIST and EMNIST tasks. Similarly, since the 5\({}^{th}\) layer is also designed to be rotated, the outputs collected by the detectors clearly show four different intensity distributions. Additional outputs of the 4\({}^{th}\) and 5\({}^{th}\) layers w.r.t the other three datasets are shown in Figure 5, which further confirms that RubikONNs successfully encodes four different forward functions, properly optimized for the four tasks using the proposed training algorithms.
## 5 Conclusions
This work proposes a novel optical neural architecture, **RubikONNs**, which utilizes the physical properties of optical computing systems to encode multiple feed-forward functions by rotating the non-reconfigurable hardware system. To optimize the MTL performance of RubikONNs, two novel domain-specific physics-aware training algorithms, RotAgg and RotSeq, are proposed, such that RubikONNs offers **4\(\times\)** implementation cost and energy efficiency improvements with marginal accuracy degradation. Finally, a comprehensive RubikONNs design space exploration and explainability analysis is provided to offer concrete design methodologies for practical use.
ONNs have the potential to handle more complex tasks, including image classification and graph tasks [22], with an outstanding energy efficiency advantage over conventional NNs (e.g., CNNs), which can already be observed on simple datasets like MNIST (\(\sim\)3 orders of magnitude compared to CPU/GPU in our setups). More significant improvements are expected when dealing with more complex tasks requiring more computational resources, as there is no extra energy cost for computations performed by modulating the light signal in ONNs.
Future work will focus on dealing with more complex datasets. For example, MTL with RubikONNs can be extended to RGB image classification by adding optical channels for the "R", "G", and "B" components, with the rotation scheme applied in each channel.
**Acknowledgement** This work is funded by National Science Foundation (NSF) under NSF-2008144, NSF-2019306, NSF-2019336, and NSF-2047176.
| Rotated Layers | MNIST | FMNIST | KMNIST | EMNIST |
|---|---|---|---|---|
| \(4^{th}\), \(5^{th}\) | 0.9524 | 0.8412 | 0.8272 | 0.8819 |
| \(1^{st}\), \(2^{nd}\) | 0.9536 | 0.8490 | 0.8199 | 0.8813 |
| \(1^{st}\), \(5^{th}\) | 0.9510 | 0.8473 | 0.8210 | 0.8865 |

Table 4: Design space explorations with different selections of rotation layers (rotation angle \(90^{\circ}\)), trained with **RotAgg** (Alg. 1).
Figure 4: Visualization of light propagation patterns at inference with MNIST image “1” as input, using all four rotations of RubikONNs.
| | [Lin _et al._, 2018] | [Li _et al._, 2021] | **This work** |
|---|---|---|---|
| Layer Cost | 5 per task | 11 | 5 |
| Detector Cost\({}^{*}\) | 10 per task | 20 | 10 |
| Acc. (%) on D1/D2/D3/D4 | 96.4 / 86.5 / 86.1 / 90.9 | 95.6 / 83.1 / 81.4 / 84.3 | 95.2 / 84.1 / 82.7 / 88.2 |
| Cost Efficiency | 1 | 2.0\(\times\) / 2.1\(\times\) / 1.9\(\times\) | **4.0\(\times\) / 4.1\(\times\) / 4.1\(\times\)** |
| Power (\(\mu W\)/fps/task) | 4.67\(\times 10^{-7}\) | 8.86\(\times 10^{-7}\) | **1.7\(\times 10^{-7}\)** |

Table 2: Evaluations of hardware efficiency on multi-task learning compared with [Lin _et al._, 2018] and [Li _et al._, 2021], using the MNIST (D1), FMNIST (D2), KMNIST (D3), and EMNIST (D4) datasets.
Figure 5: Visualization of light propagation measurements (input, and outputs of 4\({}^{th}\) and 5\({}^{th}\) layers) at inference phase with FMNIST, KMNIST, and EMNIST images, using all four rotations of RubikONNs.
| Rotation Angle | MNIST | FMNIST | KMNIST | EMNIST |
|---|---|---|---|---|
| \(90^{\circ}\), \(90^{\circ}\) | 0.9524 | 0.8412 | 0.8272 | 0.8819 |
| \(180^{\circ}\), \(180^{\circ}\) | 0.9532 | 0.8514 | 0.8313 | 0.8809 |
| \(270^{\circ}\), \(270^{\circ}\) | 0.9527 | 0.8353 | 0.8240 | 0.8845 |
| \(90^{\circ}\), \(-90^{\circ}\) | 0.9518 | 0.8427 | 0.8272 | 0.8879 |
| \(90^{\circ}\), \(180^{\circ}\) | 0.9514 | 0.8413 | 0.8227 | 0.8851 |

Table 3: Explorations with various rotation angles (clockwise) with the \(4^{th}\) and \(5^{th}\) layers as rotation layers, trained with **RotAgg** (Alg. 1).
2304.07378 | A Reconfigurable Linear RF Analog Processor for Realizing Microwave
Artificial Neural Network | Owing to the data explosion and rapid development of artificial intelligence
(AI), particularly deep neural networks (DNNs), the ever-increasing demand for
large-scale matrix-vector multiplication has become one of the major issues in
machine learning (ML). Training and evaluating such neural networks rely on
heavy computational resources, resulting in significant system latency and
power consumption. To overcome these issues, analog computing using optical
interferometric-based linear processors have recently appeared as promising
candidates in accelerating matrix-vector multiplication and lowering power
consumption. On the other hand, radio frequency (RF) electromagnetic waves can
also exhibit similar advantages as the optical counterpart by performing analog
computation at light speed with lower power. Furthermore, RF devices have extra
benefits such as lower cost, mature fabrication, and analog-digital mixed
design simplicity, which has great potential in realizing affordable, scalable,
low latency, low power, near-sensor radio frequency neural network (RFNN) that
may greatly enrich RF signal processing capability. In this work, we propose a
2X2 reconfigurable linear RF analog processor in theory and experiment, which
can be applied as a matrix multiplier in an artificial neural network (ANN).
The proposed device can be utilized to realize a 2X2 simple RFNN for data
classification. An 8X8 linear analog processor formed by 28 RFNN devices are
also applied in a 4-layer ANN for Modified National Institute of Standards and
Technology (MNIST) dataset classification. | Minning Zhu, Tzu-Wei Kuo, Chung-Tse Michael Wu | 2023-04-14T20:21:17Z | http://arxiv.org/abs/2304.07378v2 | # A Reconfigurable Linear RF Analog Processor for Realizing Microwave Artificial Neural Network
###### Abstract
Owing to the data explosion and rapid development of artificial intelligence (AI), particularly deep neural networks (DNNs), the ever-increasing demand for large-scale matrix-vector multiplication has become one of the major issues in machine learning (ML). Training and evaluating such neural networks rely on heavy computational resources, resulting in significant system latency and power consumption. To overcome these issues, analog computing using optical interferometric-based linear processors have recently appeared as promising candidates in accelerating matrix-vector multiplication and lowering power consumption. On the other hand, radio frequency (RF) electromagnetic waves can also exhibit similar advantages as the optical counterpart by performing analog computation at light speed with lower power. Furthermore, RF devices have extra benefits such as lower cost, mature fabrication, and analog-digital mixed design simplicity, which has great potential in realizing affordable, scalable, low latency, low power, near-sensor radio frequency neural network (RFNN) that may greatly enrich RF signal processing capability. In this work, we propose a 2\(\times\)2 reconfigurable linear RF analog processor in theory and experiment, which can be applied as a matrix multiplier in an artificial neural network (ANN). The proposed device can be utilized to realize a 2\(\times\)2 simple RFNN for data classification. An 8\(\times\)8 linear analog processor formed by 28 RFNN devices are also applied in a 4-layer ANN for Modified National Institute of Standards and Technology (MNIST) dataset classification.
Artificial intelligence (AI), machine learning (ML), analog signal processing, RF analog processor, RF neural network (RFNN)
## I Introduction
Deep neural networks (DNNs), as a thriving method in artificial intelligence (AI) and machine learning (ML), have impressively accelerated the development of many applications [1]-[2], including image classification, speech recognition, recommendation systems, etc. One of the fundamental building blocks in neural networks is the fully connected neural network, where each neuron [3] is connected to all neurons of the adjacent layer by weighted synapses. A simple multilayer example is shown in Fig. 1, which is inspired by the feedforward architecture of the mammalian visual system [4] as well as other architectures [5]. The information processing in such layered neural structures can be conveniently treated with matrix operations. For the hidden Layer-[\(n\)], the neurons of each layer can be expressed as follows:
\[Z_{q_{n}\times 1}^{[n]}=W_{q_{n}\times q_{n-1}}^{[n]}A_{q_{n-1}\times 1}^{[n-1]}+B_{q_{n}\times 1}^{[n]} \tag{1}\] \[A_{q_{n}\times 1}^{[n]}=Activation\Big\{Z_{q_{n}\times 1}^{[n]}\Big\} \tag{2}\]
where \(A_{q_{n}\times 1}^{[n]}\) in (1) represents the vector of Layer-[\(n\)], which contains \(q_{n}\) neurons, and \(B_{q_{n}\times 1}^{[n]}\) is the threshold vector. \(W_{q_{n}\times q_{n-1}}^{[n]}\) represents the weight matrix between Layer-[\(n\)] and its previous layer, Layer-[\(n\)-1]. Nonlinear activation functions in (2), such as the hyperbolic tangent (_Tanh_), rectified linear unit (_ReLU_), and _Sigmoid_ [6], are commonly applied to each hidden layer. As a result, matrix multiplication is widely used in both forward (inference) and backward (learning) propagation [1, 7]. Nevertheless, large ML models demand a proliferation of variables. For instance, a powerful neural network that aims to partially function as the human brain can easily scale up to billions of parameters to execute a single task, not to mention the tremendous structural complexity and connection plasticity of the human brain [8]. Recent analyses show that the demand for computing power will double every
Fig. 1: Example of a fully connected neural network.
3.4 months, much faster than the rate of Moore's law [9]. Although, in recent years, graphics processing units (GPUs) and tensor processing units (TPUs) have been used to alleviate the computational speed and energy efficiency bottleneck [10], a faster and more efficient realization of matrix multiplication is needed more than ever. On the other hand, conventional von Neumann computing architectures have dominated modern computer structure design with separated processing and memory units. Nevertheless, they suffer when performing large matrix multiplications, as the amount of data shuttling between the processing and memory units increases geometrically, contributing to at least 50% of the energy dissipation [11].
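As a concrete reference for Eqs. (1)-(2), the following minimal NumPy sketch performs the layered forward propagation; the layer sizes and the ReLU choice are illustrative assumptions.

```python
import numpy as np

def forward(a0, weights, biases):
    """a0: input column vector; weights[n], biases[n] realize Z = W A + B per layer."""
    a = a0
    for W, B in zip(weights, biases):
        z = W @ a + B                 # Eq. (1): weighted sum plus threshold vector
        a = np.maximum(z, 0.0)        # Eq. (2): nonlinear activation (ReLU here)
    return a

rng = np.random.default_rng(0)
sizes = [4, 8, 3]                     # q_0, q_1, q_2 neurons per layer
Ws = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
Bs = [np.zeros((m, 1)) for m in sizes[1:]]
print(forward(rng.normal(size=(4, 1)), Ws, Bs).shape)   # -> (3, 1)
```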
The spiking neural network (SNN), as a neuromorphic network structure, is one of the non-von Neumann computing structures that combine the processing and memory units to perform calculations locally. As such, there is no need to conduct matrix multiplications, since all neurons are event-driven and can operate locally and asynchronously. Such a structure can harness the two major local-learning features commonly seen in the human brain: spike-timing-dependent plasticity (STDP) and spike-rate-dependent plasticity (SRDP) [12]. SNNs can be realized by both analog and digital approaches. For instance, memristors, discovered by Chua [13] in 1971, can be leveraged to realize analog SNNs. By changing the internal ion distribution, memristors can store analog information in the form of resistance [14], which has the potential to realize neuromorphic computing. On the digital side, CMOS-based technology has been used to design neuromorphic chips with digital blocks that form an SNN [15]-[17] and emulate the behavior of each neuron in an asynchronous fashion. Due to the complexity of the emulating circuitry, the scalability and power efficiency are limited [18]. Although the SNN model is a progressive approach that aims to mimic the neuromorphic structure of an intelligent being, more fundamental collaborations between ANN designers and biological studies are encouraged [19].
While in-memory computing architectures can accelerate analog matrix multiplication through parallel computing, they need a large number of nonvolatile memory units that can store information precisely for later calculation, such as memristors or flash memory. It is worth mentioning that memristors can also be used to build high-density crossbar arrays (CBAs), where multiple perpendicular bias lines form a 2D array whose intersections each contain a memristor as a memory unit [20]-[22]. In this case, each element of the matrix can be saved and read out through the corresponding two bias lines. Although such a structure is very suitable for matrix multiplication, biasing precision [23] remains one of the major challenges when scaling. On the other hand, while the nonvolatile flash memory used in DNNs and SNNs has advantages in mature manufacturing technology, low power, and scalability, it suffers from reliability issues [24].
Recently, the diffraction principle of electromagnetic waves has been leveraged to create connections between layers of memorable units in a 3D scenario, which has been applied to analog signal processing to form neural networks [25][26]. In particular, this has been achieved by utilizing metasurface arrays, where each layer provides the corresponding spatial modulation [27], inspired by the optical 4f system [28][29]. Although such designs offer fast computational speed and power efficiency, they are inevitably bulky due to the multiple layers of 2D structures.
In addition, integrated optical designs have recently been utilized to perform analog complex matrix multiplication with optical networks based on Mach-Zehnder interferometers (MZIs) [30]-[33]. While such a chip-based method is promising in achieving good power efficiency with a small chip size, the fabrication cost and complexity of such an integrated optical device are relatively high. On the other hand, RF/microwave circuits have widely been utilized in analog signal processing [34]. In addition to low power consumption and fast processing speed, RF analog signal processing has many advantages such as low cost, mature fabrication, and ease of analog-digital mixed design. Compared with the optical approach, it requires no additional conversion between electrical and optical signals. While the concept of AI/ML has already been incorporated into RF component design optimization [35] and signal fingerprint enhancement [36], a reconfigurable RF analog matrix multiplier has not yet been demonstrated for creating a microwave artificial neural network, or RF neural network (RFNN), which is a promising step toward emerging near-sensor and in-sensor computing [37].
To this end, in this work we propose a reconfigurable quadrature-hybrid-based linear RF analog processor that can perform a set of analog matrix-vector multiplications. The proposed 2\(\times\)2 reconfigurable linear RF analog processor can be used as part of a multi-layer artificial neural network, with additional post data processing, for tasks such as data classification and handwriting recognition. The paper is organized as follows: Section II gives a theoretical analysis of the proposed linear RF analog processor. In Section III, a proof-of-concept prototype device is provided, where the theoretical analysis and measurement results are introduced. To illustrate how the proposed linear RF analog processor can be utilized in an RFNN, Section IV introduces the classification capability of a simple 2\(\times\)2 RFNN with theory, simulation, and experimental results. Moreover, the measured S-parameters of the unit cell are utilized to synthesize an 8\(\times\)8 linear RF analog processor, which is then used in a neural network for handwriting recognition with the Modified National Institute of Standards and Technology (MNIST) dataset. Comparisons with other approaches, as well as possible future research directions for such RF-based neural networks, are discussed in Section V.
Figure 2: Proposed reconfigurable linear RF analog processor.
## II Method and Analysis
Analogous to the optical approach based on MZIs, which consists of two beam splitters and two phase shifters [30], the proposed linear RF analog processor consists of two quadrature (\(90^{\circ}\)) hybrids and two phase shifters, as is illustrated in Fig. 2. The quadrature hybrid is a 3-dB (50:50) directional coupler with \(90^{\circ}\) phase difference between the two output ports. By tuning the first phase shifter, one can split the power from each input port and combine them at the two output ports, correspondingly. Then, the second phase shifter can provide extra phase difference between them. Therefore, the magnitude ratio and phase difference at the output ports can be tuned individually.
The S-parameters of the quadrature hybrid at the design frequency \(f_{0}\) can be expressed as follows [38]:
\[S_{qh}=\frac{-1}{\sqrt{2}}\begin{bmatrix}0&j&1&0\\ j&0&0&1\\ 1&0&0&j\\ 0&1&j&0\end{bmatrix} \tag{3}\]
Considering only the voltage components propagating in the forward direction, from ports P1 and P4 to ports P2 and P3, we can express the voltage transformation matrix as:
\[\begin{bmatrix}V_{2}^{-}\\ V_{3}^{-}\end{bmatrix}=\frac{-1}{\sqrt{2}}\begin{bmatrix}j&1\\ 1&j\end{bmatrix}\begin{bmatrix}V_{1}^{+}\\ V_{4}^{+}\end{bmatrix} \tag{4}\]
Therefore, the total voltage transformation matrix of the device can be obtained:
\[\begin{split}\begin{bmatrix}V_{2}^{-}\\ V_{3}^{-}\end{bmatrix}&=\begin{bmatrix}e^{-j\phi}&0\\ 0&1\end{bmatrix}\frac{-1}{\sqrt{2}}\begin{bmatrix}j&1\\ 1&j\end{bmatrix}\begin{bmatrix}e^{-j\theta}&0\\ 0&1\end{bmatrix}\frac{-1}{\sqrt{2}}\begin{bmatrix}j&1\\ 1&j\end{bmatrix}\begin{bmatrix}V_{1}^{+}\\ V_{4}^{+}\end{bmatrix}\\ &=je^{-j\theta/2}\begin{bmatrix}e^{-j\phi}\sin\left(\frac{\theta}{2}\right)&e^{-j\phi}\cos\left(\frac{\theta}{2}\right)\\ \cos\left(\frac{\theta}{2}\right)&-\sin\left(\frac{\theta}{2}\right)\end{bmatrix}\begin{bmatrix}V_{1}^{+}\\ V_{4}^{+}\end{bmatrix}\end{split} \tag{5}\]
One may notice the difference with the optical version of the equation in [30], which is because the phase delay here is defined as a negative value at the output port. The four corresponding S-parameters of the device then can be expressed as:
\[S_{21}=C\,e^{-j\phi}\sin\left(\frac{\theta}{2}\right) \tag{6}\]
\[S_{31}=C\cos\left(\frac{\theta}{2}\right) \tag{7}\]
\[S_{24}=C\,e^{-j\phi}\cos\left(\frac{\theta}{2}\right) \tag{8}\]
\[S_{34}=-C\sin\left(\frac{\theta}{2}\right) \tag{9}\]
where \(C=je^{-j\theta/2}\). When \(\theta\) increases from 0 to \(\pi\), the device state switches from the cross state (CS) to the bar state (BS), as shown in Fig. 3(a)-(b). For example, assuming \(P_{1}=0.5\) mW, \(P_{4}=1.5\) mW, and both inputs in phase, the voltage
magnitude at port P2 and P3 from port P1 and P4 varies with \(\theta\), as shown in Fig. 3(c), which can be expressed as:
\[V_{21}=\sqrt{2Z_{0}P_{1}}S_{21} \tag{10}\]
\[V_{31}=\sqrt{2Z_{0}P_{1}}S_{31} \tag{11}\]
\[V_{24}=\sqrt{2Z_{0}P_{4}}S_{24} \tag{12}\]
\[V_{34}=\sqrt{2Z_{0}P_{4}}S_{34} \tag{13}\]
where \(Z_{0}\) is the characteristic impedance of the transmission line and \(V_{nm}\) denotes the voltage magnitude at port \(n\) when excited from port \(m\). In addition, the power received at ports P2 and P3 can be calculated as follows:
\[P_{2}=\frac{|V_{21}+V_{24}|^{2}}{2Z_{0}} \tag{14}\]
\[P_{3}=\frac{|V_{31}+V_{34}|^{2}}{2Z_{0}} \tag{15}\]
Substituting (10)-(13), we have
\[P_{2}=(P_{1}+P_{4})sin^{2}\left(\frac{\theta}{2}+\Delta\right) \tag{16}\]
\[P_{3}=(P_{1}+P_{4})cos^{2}\left(\frac{\theta}{2}+\Delta\right) \tag{17}\]
where \(\Delta=\text{acos}\left(\sqrt{P_{1}}/\sqrt{P_{1}+P_{4}}\right)\). As plotted in Fig. 3(d), the output power, \(P_{2}\) and \(P_{3}\), will also vary with phase \(\theta\). When port P2 reaches its maximum power output, the power at port P3 goes to its minimum, and vice versa. Also, from (6) and (8), tuning phase \(\phi\) can add extra phase to port P2. Therefore, changing the magnitude ratio and phase difference between port
Figure 3: Schematic of the two states, (a) cross state, (b) bar state, and the corresponding transfer relations of (c) voltage and (d) output power between the four ports, when \(\theta\) varies from \(0\) to \(2\pi\).
P2 and P3 can be realized by tuning both phase shifters.
Now, if we consider the linear RF analog processor as a 2\(\times\)2 transformation matrix \(t(\theta,\phi)\), it can be written as:
\[t(\theta,\phi)=\begin{bmatrix}S_{21}&S_{24}\\ S_{31}&S_{34}\end{bmatrix}\text{and}\;\;tt^{H}=I\;, \tag{18}\]
which belongs to the unitary group of degree two, i.e., \(U(2)\) [30]. Since the elements of \(t(\theta,\phi)\) are not independent of each other, a single device cannot represent an arbitrary matrix. Nevertheless, it can be utilized as the complex unitary matrix in singular value decomposition (SVD) to help synthesize an arbitrary matrix in a cascaded fashion, whereas an \(N\times N\) matrix can also be formed with multiple such unitary matrices.
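A minimal NumPy sketch that assembles \(t(\theta,\phi)\) from Eqs. (6)-(9) with \(C=je^{-j\theta/2}\), and numerically confirms the unitarity in (18) and the power split of Eqs. (16)-(17) for the example inputs \(P_{1}=0.5\) mW and \(P_{4}=1.5\) mW; the helper name and test values are our assumptions.

```python
import numpy as np

def t_matrix(theta, phi):
    """Device matrix t(theta, phi) assembled from Eqs. (6)-(9), C = j*exp(-j*theta/2)."""
    C = 1j * np.exp(-1j * theta / 2)
    s, c = np.sin(theta / 2), np.cos(theta / 2)
    return C * np.array([[np.exp(-1j * phi) * s, np.exp(-1j * phi) * c],
                         [c, -s]])

theta, phi = 0.7, 1.2
t = t_matrix(theta, phi)
assert np.allclose(t @ t.conj().T, np.eye(2))              # Eq. (18): t is in U(2)

Z0, P1, P4 = 50.0, 0.5e-3, 1.5e-3                          # in-phase input powers
v_out = t @ np.array([np.sqrt(2 * Z0 * P1), np.sqrt(2 * Z0 * P4)])
P2, P3 = np.abs(v_out) ** 2 / (2 * Z0)                     # Eqs. (14)-(15)
delta = np.arccos(np.sqrt(P1 / (P1 + P4)))
assert np.isclose(P2, (P1 + P4) * np.sin(theta / 2 + delta) ** 2)   # Eq. (16)
assert np.isclose(P3, (P1 + P4) * np.cos(theta / 2 + delta) ** 2)   # Eq. (17)
```

Note that \(\phi\) drops out of the two power checks, consistent with the observation that only \(\theta\) sets the magnitude ratio.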
## III Experimental Verification
A prototype of the proposed linear RF analog processor is shown in Fig. 4. The device is fabricated on a Rogers RO4360G2 PCB with a dielectric constant of 6.15 and consists of two quadrature hybrids with a center frequency of \(f_{0}=2\) GHz and two identical discrete phase shifters. Each discrete phase shifter utilizes two SP6T RF switches (Mini-Circuits JSW6-33DR+) that can switch RF signals among 6 different paths with various lengths, labeled as \(L_{\#}\) in Fig. 4. The SP6T RF switch contains one input and six outputs, which can be digitally switched using a 3-bit high (\(\sim\)3V) and low (\(\sim\)1.8V) bias voltage signal. Two switches work together to connect one of the six transmission lines to the rest of the circuit. As such, the device has a total of 36 states, since there are 2 phase shifters. Each state can be labeled as \(L_{n}L_{m}\) and results in a phase difference combination of \(\theta_{n}\) and \(\phi_{m}\), where \(\theta_{n}=\phi_{n}=\beta L_{\#}\) and \(\beta\) is the propagation constant of the transmission line. For the prototype, the discrete phase differences associated with each path are listed in Table I. The switch bias lines are routed and connected through the backside of the board. Fig. 5(a)-(b) show the measured return loss of the device when both phase shifters are at state \(L_{1}L_{1}\) or \(L_{6}L_{6}\).
Fig. 5(c)-(f) indicate how the insertion loss varies with the phase shifter state \(L_{n}L_{1}\). From \(n=1\) to \(6\), \(S_{21}\) and \(S_{34}\) increase, while \(S_{24}\) and \(S_{31}\) decrease. Fig. 6 shows how the magnitudes of the insertion loss at 2 GHz vary when tuning the \(\theta\) phase shifter. By comparing the values from theory (dashed), simulation (solid), and measurement (plus marks), we can see that the signals input from ports P1 and P4 are redistributed to ports P2 and P3, and their magnitudes change with the \(\theta\) phase shifter. The simulated and measured data of the prototype show a similar magnitude-shifting tendency. It can be observed that the maximum magnitudes from the simulation and measurement results are lower than the theoretical values, which may be due to the loss and phase deviation coming from imperfect PCB fabrication.
| Path | \(\beta L_{1}\) | \(\beta L_{2}\) | \(\beta L_{3}\) | \(\beta L_{4}\) | \(\beta L_{5}\) | \(\beta L_{6}\) |
|---|---|---|---|---|---|---|
| Phase Difference | 29\({}^{\circ}\) | 53\({}^{\circ}\) | 75\({}^{\circ}\) | 104\({}^{\circ}\) | 135\({}^{\circ}\) | 154\({}^{\circ}\) |

Table I: Discrete phase differences at \(f_{0}=2\) GHz.
Figure 4: Prototype of the proposed linear RF analog processor (DC biasing lines are routed through vias to the back side of the PCB board).
Figure 5: Measured frequency response of the proposed device: return loss of all four ports when the device at (a), state \(L_{1}L_{1}\), (b), state \(L_{6}L_{6}\), and insertion loss (c) \(S_{21}\), (d) \(S_{31}\), (e) \(S_{24}\), (f) \(S_{34}\) when the device at state \(L_{n}L_{1}\), \(n=1,2,...,6\).
## IV Applications
### 2\(\times\)2 Artificial Neural Network
A 2\(\times\)2 linear RF analog processor can exhibit good versatility in simple applications. Here, we adopt its transformation matrix as part of a simple 3-layer 2\(\times\)2 neural network, as illustrated in Fig. 7.
\[\begin{bmatrix}S_{21}&S_{24}\\ S_{31}&S_{34}\end{bmatrix}\times\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}=\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix} \tag{19}\]
\[[w_{1}\quad w_{2}]\times abs\begin{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}\end{bmatrix}+b=z_{out} \tag{20}\]
\[Sigmoid(z_{out})=\hat{\gamma} \tag{21}\]
where, \(x_{1}\) and \(x_{2}\) are inputs, \(S_{21},S_{24},S_{31},\) and \(S_{34}\) are weights between the input and the hidden layer, which can be a positive or negative value. \(w_{1}\) and \(w_{2}\) are weights between the hidden and the output layer, and \(b\) is the bias value of the output neuron. Since we measure the magnitude of the device, the absolute function is naturally applied as the nonlinear activation function to the hidden layer as its activation function. As such, Equations (19)-(21) list the entire forward propagation calculation in the neural network illustrated in Fig. 7. The weights between the first and hidden layers are determined by S-parameters of the linear processor, which is reconfigurable with \(\theta\) and \(\phi\) according to (6)-(9). Once the phase shifter values are determined, one can perform matrix-vector multiplication operation to the weights and the input values, and its result with absolute activation function applied can be measured. It is noted that the operations for the bias in (20), matrix multiplication in (20), and the activation function in (21) for the output layer are conducted in the post data processing. The activation function \(Sigmoid(z)=1/(1+e^{-z})\) applied to the output layer is commonly used for binary classification [39], such that the final output value \(\hat{\gamma}\) ranges between 0 and 1.
For binary classification, we set the condition as follows: when \(\hat{\gamma}\geq 0.5\), the input falls into category '1'; otherwise, it falls into category '0'. To derive the dividing line, which occurs at the critical condition \(\hat{\gamma}=0.5\), or \(z_{out}=0\), we have:
\[\begin{split} z_{out}=|V_{21}+V_{24}|w_{1}\\ +|V_{31}+V_{34}|w_{2}+b=0\end{split} \tag{22}\]
since,
\[|V_{21}+V_{24}|=V_{1}^{+}\sin\left(\frac{\theta}{2}\right)+V_{4}^{+}\cos \left(\frac{\theta}{2}\right) \tag{23}\]
\[|V_{31}+V_{34}|=\left|V_{1}^{+}\cos\left(\frac{\theta}{2}\right)-V_{4}^{+}\sin\left(\frac{\theta}{2}\right)\right| \tag{24}\]
Substituting (23) and (24) into (22) yields the dividing lines in the \((V_{4}^{+},V_{1}^{+})\) input plane, which bound a wedge-shaped '1' region.
This well explains the roles of \(\theta\) and \(\psi\) in determining the shape of the classification region. In particular, \(\theta\) determines the orientation of the wedge-shaped '1' region, while \(\psi\) determines the angle of the wedge. Therefore, given a dataset for binary classification, one can choose an approximate phase value \(\theta\) and half-wedge angle \(\psi\), and solve for \(w_{1}\), \(w_{2}\), and \(b\). This provides a better initial value for further optimization, i.e., the supervised learning process, during which the transition from the predicted '0' to '1' region becomes sharper as the training error decreases.
Since the \(\phi\) phase does not affect the magnitude at port P2, this leaves six states related to \(\theta\). After the learning process, the neural network can operate as six different binary classifiers, in which \(\theta\) determines the orientation of the wedge. Fig. 9 shows \(\hat{\gamma}\) over the entire input space, where the \(x\)-axis and \(y\)-axis are the input magnitudes \(V_{4}^{+}\) and \(V_{1}^{+}\), respectively. The transformation matrix required in (18) is based on the measured S-parameters of the prototype at 2 GHz, where the associated states are \(L_{n}L_{6}\), \(n=1,2,...,6\). For each state, the '0' region (blue area) and '1' region (yellow area) are clearly separated by the 2\(\times\)2 neural network, i.e., the linear RF analog processor along with post data processing. The contour lines at values 0.1, 0.5, and 0.9 illustrate a sharp transition from \(\hat{\gamma}=0\) to \(\hat{\gamma}=1\). Once trained, the RFNN becomes a reconfigurable neural network that can realize six different binary classifiers by tuning the \(\theta\) phase shifter.
While Fig. 9 is calculated based on the measured S-parameters of the device, to verify its classification capability we directly feed power into ports P1 and P4 and measure the output power at ports P2 and P3, to see whether the neural network can classify the data once trained. For the input space, the \(x\)-axis and \(y\)-axis are \(V_{4}^{+}\) and \(V_{1}^{+}\), respectively. To cover the entire input space of a specific range, we mesh it into an 11-by-11 grid and measure the output power at P2 and P3 for all input power combinations and all six device states, with the post-processing conducted on a computer. Once the optimized parameters are obtained, the evaluated \(\hat{\gamma}\) distributions of the six different binary classifiers can be plotted as shown in Fig. 10. The patterns at the different \(\theta\) phase-shifter states are similar to those in Fig. 9, thereby verifying the classification capability of the proposed RFNN.
In addition to the wedge-shaped datasets, our simple neural network can also be trained to classify other datasets. Due to the limited number of neurons, a dataset can only be classified with two cuts. The forward/backward propagation during the supervised training process is shown in Fig. 11(a). If necessary, the training dataset (\(D_{x}\), \(D_{y}\)) can be pre-processed by the computer, including shifting and then scaling it down by a factor of \(\gamma\) to fit the working input space. Then, analog RF power with voltage magnitudes equal to (\(\gamma D_{x}\), \(\gamma D_{y}\)) forms the input signals at P1 and P4. The output power measured at ports P2 and P3 is converted to voltage magnitudes and scaled back as (\(\sqrt{2Z_{0}P_{2}}/\gamma,\sqrt{2Z_{0}P_{3}}/\gamma\)) for further post-processing. The predicted value is compared with the ground truth to calculate the error. After one batch of data is processed, the accumulated error is used to update the parameters of the post-processing part and the physical parameters of the device, such as the biasing voltages and digital biasing codes. Such a learning process that accounts for the real device can achieve better performance for a given system [42]. The inference (prediction) process is shown in Fig. 11(b). The trained parameters are configured on the physical device and the post-processing unit. The testing data from the computer are transformed into the input signal
Figure 8: (a) Classification distribution of the neural network and (b) its relation to \(\theta,\psi\).
Figure 9: Classification results of the neural network based on the measured S-parameters when the path of the \(\theta\) phase shifter switches from \(L_{1}\) to \(L_{6}\).
and the outcomes are measured and post-processed. The prediction of the entire neural network falls into the category '0' or '1', and the accuracy can be obtained by comparing the predicted category with the ground truth of the testing dataset.
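The training loop of Fig. 11(a) can be summarized in pseudocode form. In the sketch below, `measure` is a hypothetical stand-in for the physical drive-and-detect step, and the sigmoid read-out over the two recovered voltages is an assumption about the post-processing rather than the authors' exact procedure.

```python
import numpy as np

Z0, g = 50.0, 1/100                          # reference impedance, scaling factor

def train_step(batch, w, measure, lr=0.01):
    """One hedged training step. measure(v1, v4) drives ports P1/P4 and
    returns the detected powers (P2, P3); w = [w1, w2, b] are the
    post-processing weights, updated here by gradient descent."""
    grad, err = np.zeros(3), 0.0
    for (dx, dy), label in batch:
        p2, p3 = measure(g * dx, g * dy)     # pre-scaled analog drive
        v2 = np.sqrt(2 * Z0 * p2) / g        # recovered, re-scaled voltages
        v3 = np.sqrt(2 * Z0 * p3) / g
        pred = 1.0 / (1.0 + np.exp(-(w[0]*v2 + w[1]*v3 + w[2])))
        e = pred - label
        err += e ** 2
        grad += 2 * e * pred * (1 - pred) * np.array([v2, v3, 1.0])
    # the accumulated error would likewise drive updates of the device-side
    # settings (bias voltages, discrete phase codes), e.g., via a local search
    return w - lr * grad, err
```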
In addition to the classification results shown in Fig. 10, four more classifiers are trained as illustrative examples, with their classification performance shown in Fig. 12. The train/test data range from 0 to 30 and are multiplied by the scaling factor \(\gamma=1/100\) during pre-processing and scaled back in the post-processing step. Note that data points with label '1' are shown with blue '+' markers, and those with label '0' with white '*' markers. Fig. 12(a) shows the first case, where the data points labeled '1' are distributed at the upper right corner and those labeled '0' are evenly distributed over the rest of the space. In this scenario, the neural network picks the state \(L_{2}L_{6}\) during the training process (the \(\phi\) phase shifter is fixed in all four cases), resulting in an accuracy of around 94% on the test data. In the second and third cases, shown in Fig. 12(b) and (c), respectively, the data points with different labels are distributed diagonally with a slight overlap, one class extending toward the upper right corner and the other toward the lower right corner. In both cases, the classifier provides good accuracies of 98% and 96%, respectively. For the case in Fig. 12(c), the \(\theta\) phase shifter is switched to state \(L_{4}\). In the fourth case, shown in Fig. 12(d), the data with label '1' are surrounded by those with label '0', and it is therefore difficult for a 2\(\times\)2 neural network to handle the classification. As a result, the accuracy for this case decreases to around 74%. Another factor that affects the accuracy is that the phase shifter of the prototype can only provide six fixed discrete phase differences, thereby limiting the orientation freedom of the wedge-shaped classification region. The reconfigurability could therefore be further enhanced with continuous low-loss phase shifters instead of discrete ones. On the other hand, the precision may also be improved by incorporating a larger neural network even with coarse phase resolution, e.g., a binary neural network [43].
Figure 11: Schematics of the training process and inference (prediction) process of the proposed RF neural network.
Figure 12: Four examples of the classification results using the proposed RF neural network.
Figure 10: Classification results of the neural network based on the measured output power from ports P2 and P3 when the path of the \(\theta\) phase shifter switches from \(L_{1}\) to \(L_{6}\).
### _MNIST Dataset Handwriting Recognition_
The \(2\times 2\) linear RF analog processor can be utilized to synthesize an arbitrary \(N\times N\) unitary matrix \(U^{\prime}(N)\), which can be decomposed as:
\[U^{\prime}(N)=\Sigma(N)\times T(N) \tag{27}\]
where \(\Sigma(N)\) is an \(N\)-dimensional diagonal matrix whose diagonal elements all have unit modulus, and \(T(N)\) is an \(N\times N\) unitary matrix that can be further decomposed into an identity matrix by a series of \(N\)-dimensional rotational matrices \(R_{N\times N}^{(i)\ -1}\). Each rotational matrix keeps (\(N\)-2) dimensions unchanged, so multiplying by it rotates a vector in the \(N\)-dimensional Hilbert space within two dimensions at a time [44][45]:
\[T(N)\cdot R_{N\times N}^{(1)\ -1}\cdot R_{N\times N}^{(2)\ -1}\cdot...\ \cdot R_{N\times N}^{(S)\ -1}=I(N) \tag{28}\]
where \(S=N(N-1)/2\). By multiplying \(S\) such rotational matrices, we can decompose the entire matrix \(T(N)\). Since each rotational matrix is also unitary, one can easily find its inverse matrix \(R_{N\times N}^{(i)}=Hermitian(R_{N\times N}^{(i)\ -1})\) and compose the matrix \(T(N)\) back. If each inverse matrix can be realized by an analog hardware design, then one can synthesize an arbitrary unitary matrix \(U^{\prime}(N)\), in which \(\Sigma(N)\) can be implemented with any device capable of tuning phase.
The proposed \(2\times 2\) linear RF analog processor forms a unitary matrix whose magnitude and phase at port P2 can be tuned separately, which can be used to synthesize \(T(N)\) and \(\Sigma(N)\). Fig. 13 shows an example of how to synthesize a \(U^{\prime}(4)\) with the proposed analog processor. The left portion synthesizes the matrix \(T(4)\) and the right portion realizes the diagonal matrix \(\Sigma(N)\). Processors are labeled from left to right and top to bottom until all possible columns of processors are filled. In the example of Fig. 13, there are five columns of processors in the left portion, labeled with indices \(i\) as (1, 2, (3&4), 5, 6). Each processor crosses two adjacent channels, leading to the corresponding inputs (orange color), and has two tunable phases \(\theta_{i}\) and \(\phi_{i}\) that can be calculated from the rotational matrix \(R_{N\times N}^{(i)\ -1}\) required to decompose the matrix \(T(N)\). Each \(N\)-dimensional rotational matrix \(R_{N\times N}^{(i)\ -1}\) is formed from an identity matrix by replacing four adjacent elements along the diagonal with a 2\(\times\)2 unitary matrix, which can be expressed as:
\[R_{N\times N}^{(i)\ -1}=\begin{bmatrix}1&\cdots&0&0&\cdots&0\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ 0&\cdots&r_{pp}&r_{pq}&\cdots&0\\ 0&\cdots&r_{qp}&r_{qq}&\cdots&0\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ 0&\cdots&0&0&\cdots&1\end{bmatrix} \tag{29}\]
\[\begin{bmatrix}r_{pp}&r_{pq}\\ r_{qp}&r_{qq}\end{bmatrix}=t^{H} \tag{30}\]
where \(p\) and \(q\) are the channel numbers of the \(i\)th processor; for example, processor 3 crosses channels 1 and 2, so \(p=1\) and \(q=2\), while processor 5 crosses channels 2 and 3, so \(p=2\) and \(q=3\). In practice, the phase values of each processor can be calculated using stochastic optimization methods to avoid imaginary phase conditions [30]. Thus, one can easily scale up the design to synthesize an arbitrary \(N\times N\) unitary matrix.
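The construction of (28)-(30) is easy to sketch numerically. In the code below, the 2\(\times\)2 unitary `givens` and the channel-pair ordering are illustrative stand-ins for the prototype's actual transfer matrix \(t\) and the Fig. 13 layout.

```python
import numpy as np

def embed_rotation(N, p, q, t):
    """Embed a 2x2 unitary t into an N-dimensional identity at channels
    p, q (0-indexed), giving one rotational factor of the form in (29)."""
    R = np.eye(N, dtype=complex)
    R[np.ix_([p, q], [p, q])] = t
    return R

def givens(theta, phi):
    # an illustrative 2x2 unitary; the device's actual t depends on its
    # theta/phi phase-shifter states
    return np.array([[np.exp(1j*phi)*np.cos(theta), -np.sin(theta)],
                     [np.exp(1j*phi)*np.sin(theta),  np.cos(theta)]])

# compose a T(4) from S = N(N-1)/2 = 6 factors; the pair order mimics Fig. 13
pairs = [(0, 1), (2, 3), (1, 2), (0, 1), (2, 3), (1, 2)]
T = np.eye(4, dtype=complex)
for p, q in pairs:
    T = T @ embed_rotation(4, p, q, givens(0.3, 0.7))
assert np.allclose(T @ T.conj().T, np.eye(4))    # T(4) is unitary by construction
```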
Furthermore, since all real matrices can be decomposed using SVD:
\[M=UDV^{H} \tag{31}\]
where \(U,\,V\) are unitary matrices and \(D\) is a diagonal matrix, one can synthesize an arbitrary matrix with two unitary matrices as shown in Fig. 13. It is noted that the diagonal matrix can be absorbed into one of the unitary matrices [46].
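As a quick numerical check of (31), any real matrix factors into two unitary meshes and a diagonal, which, per [46], can be absorbed into one of the unitary factors:

```python
import numpy as np

M = np.random.randn(4, 4)              # an arbitrary real matrix
U, d, Vh = np.linalg.svd(M)            # M = U D V^H as in (31)
assert np.allclose(U @ np.diag(d) @ Vh, M)
```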
While an arbitrary matrix can be synthesized in this way, a single unitary matrix, e.g., \(T(N)\) in Fig. 13, already provides sufficient parameters for specific applications. Here, we design an \(8\times 8\) matrix \(T(8)\) that is employed in a multilayer fully connected neural network performing classification of handwritten digits from the MNIST dataset, as shown in Fig. 14. The weights between hidden Layer-1 and hidden Layer-2 are realized by the \(8\times 8\) linear RF analog processor, which is constructed with 28 \(2\times 2\) RF linear analog processor devices as building blocks, shown in Fig. 4. Each \(2\times 2\) processor device has 36 different states, since it contains two phase shifters, each with six discrete phase values.
The entire RFNN has four layers: one input layer, two hidden layers, and one output layer. Each 2D image of \(28\times 28\) pixels is reshaped into a \(784\times 1\) vector and then compressed into eight features when reaching the first hidden layer. The weights between the fully connected hidden Layer-1 and Layer-2 are synthesized by the \(8\times 8\) linear RF analog processor. The output layer maps the eight outputs of the analog processor into 10 classes. From the MNIST dataset, we utilize \(50000\) images to train the RFNN and the remaining \(10000\) images to test the trained network. The activation functions for hidden Layer-1 and the output layer are _leaky-ReLU_ and _Softmax_, respectively, while the magnitude detection (absolute function) naturally serves as the activation function for hidden Layer-2. In addition, hidden Layer-2 has no bias parameters. The entire model is built in MATLAB and trained with the mini-batch (batch size 10) stochastic gradient descent method [47]. The training runs for 100 iterations with a learning rate of 0.005, and all training instances are randomly shuffled in each iteration. Although the entire training is done on a computer, the \(8\times 8\) linear RF analog processor is simulated based on the measurement data of the unit cell \(2\times 2\) linear RF analog
Fig. 13: Schematic of a \(4\times 4\) linear RF analog processor.
processor, and is constructed using the same method as the 4\(\times\)4 case shown in Fig. 13.
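The resulting forward pass can be condensed into a few lines. In the sketch below, `W1`, `T8`, and `W3` are placeholder weight matrices (with `T8` the 8\(\times\)8 unitary realized by the analog processor); the stated absence of bias parameters in hidden Layer-2 is respected, and the other biases are omitted for brevity.

```python
import numpy as np

def rfnn_forward(x, W1, T8, W3):
    """Hedged sketch of the 4-layer RFNN: x is a 784-vector (flattened image),
    W1 is 8x784 (digital), T8 is the 8x8 analog unitary, W3 is 10x8 (digital)."""
    a1 = W1 @ x
    h1 = np.maximum(0.01 * a1, a1)       # leaky-ReLU, 784 -> 8 features
    h2 = np.abs(T8 @ h1)                 # analog matrix + magnitude detection
    z = W3 @ h2                          # 8 -> 10 classes
    return np.exp(z) / np.exp(z).sum()   # softmax
```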
The training and testing results are shown in Fig. 15, where we compare the RFNN (analog) with a conventional artificial neural network (digital) of the same dimension in terms of training accuracy and classification error on the MNIST training dataset. It can be observed that the training accuracy of the RFNN is around 91.7% and the testing accuracy is 91.6%, which is very close to the traditional digital version of the same neural network, with a training accuracy of 94.1% and a testing accuracy of 93.1%. The performance degradation of the analog version results from the limited tunability of the discrete phase shifters.
The confusion matrix of the testing dataset for the trained RFNN is shown in Fig. 16. Most predictions match the true labels and dominate the diagonal elements of the confusion matrix. The major misclassifications are between the labels '7' and '9' and between '9' and '4', which is typical for fully connected multilayer neural networks. Approaches that are specifically designed to handle spatial information and local features, such as convolutional neural networks (CNNs), are more suitable for such tasks.
## V Discussion
Since the EM wavelength in vacuum in the RF region is about \(10^{5}\) times larger than that of infrared light, the physical length of a unit cell linear RF analog processor (Fig. 4) is much larger than that of a unit cell based on MZIs. However, due to the size of the two thermo-optical phase modulators and the biasing requirements, each optical unit cell is 100-um long, which is about 64 times its wavelength (1.545um) [32][48]. On the other hand, the proposed RF unit cell is based on sub-wavelength quadrature hybrids, and the length of the unit cell device is roughly one wavelength. It is also worth mentioning that the total size of the RF unit cell can keep decreasing in designs with a smaller center wavelength. Due to the shape of the quadrature hybrids and the routing space requirement, one needs low-loss substrates that can realize a large transmission line wavelength-to-width ratio \(\chi\), given a fixed 50-Ohm characteristic impedance. Based on microstrip transmission line theory, such a requirement is easier to achieve on a PCB with a thin substrate but a high dielectric constant. If we fix \(\chi=100\), with a dielectric constant of 10 and a thickness of 0.125mm, we can easily increase the center frequency to \(f_{0}=10GHz\). The wavelength in this material is about 12 mm. The typical microstrip transmission line insertion loss on such a PCB is around 0.25-dB per wavelength, which corresponds to a 5-dB loss across 20 passive 2\(\times\)2 processor devices in series. A 20\(\times\)20 passive linear processor array could be very useful for ultrafast signal processing at low cost, since the application of an 8\(\times\)8 analog matrix formed by 28 unit cells in the handwriting recognition RFNN of the previous section has already demonstrated this capability. To realize an even larger RFNN, one can utilize tensor-train (TT)-decomposed synaptic interconnections to greatly reduce the number of processor devices [49][50] with little precision degradation.
Much lower latency is another benefit of such an analog signal processing system, unlike traditional digital processors, which are restricted to gigahertz (GHz) clock rates. In addition, the computational complexity of an \(N\times N\) matrix-vector multiplication scales as \(N^{2}/p\), which is on the order of \(O(N^{2})\), where the constant \(p\) is the (typically very limited) degree of parallelism. For analog computing, however, \(p=N\) can easily be realized,
Fig. 16: The confusion matrix for the test dataset, showing a testing accuracy of 91.6%.
Fig. 14: Schematic of the handwriting recognition RFNN.
Fig. 15: The training accuracy (blue) and error (orange) of the analog (solid line) and digital (dashed line) artificial neural network.
and therefore the computational complexity reduces to the order of \(O(N)\). The delay can be estimated to be proportional to the length of the processor, since the signal can roughly be assumed to propagate through the optical/PCB platform at the speed of light.
When calculating the power consumption of the reconfigurable designs, active components such as phase shifters or RF switches must be taken into account. The RF switch that we use has a power consumption of 0.12-mW, which is much smaller than that of the thermo-optic phase modulator (about 10-mW) [32]. The total power consumption for synthesizing an arbitrary \(N\times N\) unitary matrix, as shown in Fig. 13, could be as low as \(0.12\times N(N+1)\)mW. For specific applications, one may replace the phase shifters with fixed-length passive transmission lines once trained and make the entire processor passive. If so, the power consumption of the whole device depends on the sensitivity of the power detector at the output and the power loss of the transmission lines. Since the calculation on such analog devices is much faster than on a digital computer, one may compare the power per FLOP (floating-point operation) of the RF processor, which is much lower than that of a digital computer. If we choose the RF power detection rate at the output to be \(f_{d}\approx 10MHz\), this corresponds to \(10^{7}\) \(N\)-dimensional analog matrix-vector multiplications per second, which would require a conventional computer to perform \(2N^{2}\times 10^{7}\) FLOPs per second. Since a typical sensitivity of an RF power detector can be -60dBm, the output power necessary during forward propagation is estimated to be around \(10^{-5}N\)mW (considering a 10-dB insertion loss). Therefore, for a passive design, the minimum energy consumption per FLOP of the RF processor scales as \(1/(2N)\) fJ per FLOP, which is much smaller than that of conventional GPUs (NVIDIA V100) and FPGAs (Arria 10), whose power consumption is 31-pJ per FLOP and 62-pJ per FLOP, respectively.
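To make the \(1/(2N)\) figure explicit, combining the detector-limited output power (\(10^{-5}N\)mW \(=10^{-8}N\)W) with the equivalent FLOP rate gives

\[E_{\text{FLOP}}\approx\frac{P_{\text{out}}}{2N^{2}f_{d}}=\frac{10^{-8}N\ \text{W}}{2N^{2}\times 10^{7}\ \text{s}^{-1}}=\frac{5\times 10^{-16}}{N}\ \text{J}=\frac{1}{2N}\ \text{fJ}.\]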
Although the handwriting recognition neural network utilizes four layers of neurons, all activation functions except the absolute function in the second hidden layer are computed in post data processing. In the future, by utilizing nonlinear RF devices as activation functions and power compensation between two linear layers, one may realize multilayer neural networks entirely with such RF processors, which will further enhance their applications. In the optical platform, an optical-to-electrical solution is utilized to realize the activation function [52][53]. In the RF circuit platform, one can directly utilize an electrical-to-electrical processing solution [54][55]. For example, power detectors and transistors can be used to design nonlinear activation functions, and an additional static voltage may serve as the bias for each neuron. Such activations can also benefit from separating the power supplies of each layer of neurons. Therefore, it is possible to apply multiple layers of neurons in the neural network. Table II lists a merit comparison among the RFNN, ONN, FPGA, and traditional digital platforms at fixed \(N=20\), with the frequency of the RFNN assumed to be 10 GHz.
## VI Conclusion
This work proposes a linear RF analog processor, in which a 2\(\times\)2 prototype structure is demonstrated as a proof of concept for analog matrix multiplication. The processor has been applied to the hidden-layer matrix multiplication of a simple 2\(\times\)2 neural network with the necessary post data processing. Theoretical analysis, simulation, and measurement results show that the artificial neural network can be successfully trained as a reconfigurable binary classifier with a clear separation of the two classes. By tuning the discrete \(\theta\) phase shifter, one can realize classifiers oriented in six different directions. By varying parameters in post-processing, more classifiers can be trained. Moreover, the \(\phi\) phase shifter only affects the phase of one output port, which can be used to adjust the phase between the two output ports. The proposed RF analog processor has great potential to be scaled up to realize a low-cost, fast, and power-efficient solution for analog matrix multiplication and low-latency near-sensor applications. A 4-layer neural network that performs handwriting recognition on the MNIST dataset is also demonstrated. The network utilizes an \(8\times 8\) linear RF analog processor simulated based on the measured S-parameters of the prototype unit cell. Training results show that it can reach a test accuracy of 91.6%. Such neural networks working in the RF region may also benefit from mature wireless communication techniques in terms of writing and reading information from each neuron and synapse. Since the 2\(\times\)2 unit cell device is of sub-wavelength size, one can further miniaturize and integrate the design at higher frequencies. Moreover, machine learning (ML) techniques such as transfer learning and adversarial reprogramming may also be incorporated to enhance its functionality [57][58].
|
2305.11141 | Clifford Group Equivariant Neural Networks | We introduce Clifford Group Equivariant Neural Networks: a novel approach for
constructing $\mathrm{O}(n)$- and $\mathrm{E}(n)$-equivariant models. We
identify and study the $\textit{Clifford group}$, a subgroup inside the
Clifford algebra tailored to achieve several favorable properties. Primarily,
the group's action forms an orthogonal automorphism that extends beyond the
typical vector space to the entire Clifford algebra while respecting the
multivector grading. This leads to several non-equivalent subrepresentations
corresponding to the multivector decomposition. Furthermore, we prove that the
action respects not just the vector space structure of the Clifford algebra but
also its multiplicative structure, i.e., the geometric product. These findings
imply that every polynomial in multivectors, including their grade projections,
constitutes an equivariant map with respect to the Clifford group, allowing us
to parameterize equivariant neural network layers. An advantage worth mentioning is
that we obtain expressive layers that can elegantly generalize to inner-product
spaces of any dimension. We demonstrate, notably from a single core
implementation, state-of-the-art performance on several distinct tasks,
including a three-dimensional $n$-body experiment, a four-dimensional
Lorentz-equivariant high-energy physics experiment, and a five-dimensional
convex hull experiment. | David Ruhe, Johannes Brandstetter, Patrick Forré | 2023-05-18T17:35:35Z | http://arxiv.org/abs/2305.11141v5 | # Clifford Group Equivariant Neural Networks
###### Abstract
We introduce Clifford Group Equivariant Neural Networks: a novel approach for constructing \(\mathrm{E}(n)\)-equivariant networks. We identify and study the _Clifford group_, a subgroup inside the Clifford algebra, whose definition we slightly adjust to achieve several favorable properties. Primarily, the group's action forms an orthogonal automorphism that extends beyond the typical vector space to the entire Clifford algebra while respecting the multivector grading. This leads to several non-equivalent subrepresentations corresponding to the multivector decomposition. Furthermore, we prove that the action respects not just the vector space structure of the Clifford algebra but also its multiplicative structure, i.e., the geometric product. These findings imply that every polynomial in multivectors, including their grade projections, constitutes an equivariant map with respect to the Clifford group, allowing us to parameterize equivariant neural network layers. Notable advantages are that these layers operate directly on a vector basis and elegantly generalize to any dimension. We demonstrate, notably from a single core implementation, state-of-the-art performance on several distinct tasks, including a three-dimensional \(n\)-body experiment, a four-dimensional Lorentz-equivariant high-energy physics experiment, and a five-dimensional convex hull experiment.
## 1 Introduction
Incorporating _group equivariance_ to ensure symmetry constraints in neural networks has been a highly fruitful line of research [19; 89; 20; 59; 86; 88; 30; 14; 87; 16]. Besides translation and permutation equivariance [92; 72], rotation equivariance proved to be vitally important for many graph-structured problems as encountered in, e.g., the natural sciences. Representative works include Tensor Field Networks [79], LieConv [31], and EGNNs [71]. Applications of such methods include modeling the dynamics of complex physical systems or motion trajectories [53; 12]; studying or generating molecules, proteins, and crystals [68; 39; 18; 73; 94; 80; 4]; and point cloud analysis [90; 81]. Note that many of these focus on three-dimensional problems involving rotation, reflection, or translation equivariance by considering, e.g., the groups \(\mathrm{O}(3)\), \(\mathrm{SO}(3)\), or \(\mathrm{E}(3)\).
Such equivariant neural networks can be broadly divided into three categories: approaches that scalarize geometric quantities, methods employing regular group representations, and those utilizing irreducible representations, often of \(\mathrm{O}(3)\)[42]. Scalarization methods operate exclusively on scalar features or manipulate higher-order geometric quantities such as vectors via scalar multiplication [73; 23; 54; 57; 71; 50; 35; 74; 26; 47; 78]. Such methods can be limited by the fact that they do not extract all directional information. Regular representation methods construct equivariant
transformations through an integral over the respective group [19; 59]. For continuous Lie groups, however, this integral is intractable and requires coarse approximation [31; 7]. Methods of the third category employ the irreducible representations of \(\mathrm{O}(3)\) (the Wigner-D matrices) and operate in a _steerable_ spherical harmonics basis [79; 2; 34; 12; 5]. This basis allows a decomposition into type-\(l\) vector subspaces that transform under \(D^{l}\): the type-\(l\) matrix representation of \(\mathrm{O}(3)\)[60; 33]. Using tensor products decomposed using Clebsch-Gordan coefficients (Clebsch-Gordan tensor product), vectors (of different types) interact equivariantly. These tensor products can be parameterized using learnable weights. Key limitations of steerable methods include the necessity for an alternative basis, along with acquiring the Clebsch-Gordan coefficients, which, although they are known for unitary groups of any dimension [51], are not trivial to obtain [1].
We propose _Clifford Group Equivariant Neural Networks_ (CGENNs): an equivariant parameterization of neural networks based on _Clifford algebras_. Inside the algebra, we identify the _Clifford group_ and its action, termed the _adjusted twisted conjugation_, which has several advantageous properties. Unlike classical approaches that represent these groups on their corresponding vector spaces, we carefully extend the action to the entire Clifford algebra. There, it automatically acts as an _orthogonal automorphism_ that respects the multivector grading [11; 70], enabling nontrivial subrepresentations that operate on the algebra subspaces. Furthermore, the adjusted twisted conjugation respects the Clifford algebra's multiplicative structure, specifically the _geometric product_, allowing us to bypass the need for explicit tensor product representations. As a result, we obtain two remarkable properties. First, all polynomials in multivectors generate Clifford group equivariant maps from the Clifford algebra to itself. Additionally, _grade projection_ also is an equivariant operation, allowing for a denser parameterization of such polynomials. We then demonstrate how one can construct parameterizable neural network layers using these properties.
Our method comes with several advantages. First, instead of operating on alternative basis representations such as the spherical harmonics, CGENNs (similarly to scalarization methods) directly transform data in a vector basis. Second, we can extract and transform higher-order features that carry vector-valued information, resulting in a more accurate and nuanced interpretation of the underlying structures compared to scalarization methods. Finally, our method readily generalizes to orthogonal groups regardless of the dimension or metric signature of the space. These advantages are demonstrated on equivariance benchmarks of different dimensionality. Note that specialized tools were developed for many of these tasks, while CGENNs can be applied more generally.
## 2 The Clifford Algebra
Clifford algebras (also known as _geometric algebras_) are powerful mathematical objects with applications in various areas of science and engineering. For a complete formal construction, we
Figure 1: CGENNs (represented with \(\phi\)) are able to operate on multivectors (elements of the Clifford algebra) in an \(\mathrm{E}(n)\)-equivariant way. Specifically, when an action \(\rho(w)\) of the Clifford group, representing an orthogonal transformation such as a rotation, is applied to the data, the model’s representations _corotate_. Multivectors can be decomposed into scalar, vector, bivector, trivector, and even higher-order components. These elements can represent geometric quantities such as areas or volumes. The action \(\rho(w)\) is designed to respect these structures when acting on them.
refer the reader to Appendix C. Let \(V\) be a finite-dimensional vector space over a field \(\mathbb{F}\) equipped with a _quadratic form_\(\mathfrak{q}:V\to\mathbb{F}\). The _Clifford algebra_\(\operatorname{Cl}(V,\mathfrak{q})\) is the unitary, associative, non-commutative algebra generated by \(V\) such that for every \(v\in V\) the relation \(v^{2}=\mathfrak{q}(v)\) holds, i.e., _vectors square to scalars_. This simple property solely generates a unique mathematical theory that underpins many applications. Note that every element \(x\) of the Clifford algebra \(\operatorname{Cl}(V,\mathfrak{q})\) is a linear combination of (formal, non-commutative) products of vectors modulo the condition that every appearing square \(v^{2}\) gets identified with the scalar square \(\mathfrak{q}(v)\): \(x=\sum_{i\in I}c_{i}\cdot v_{i,1}\cdots v_{i,k_{i}}\)1. Here, the index set \(I\) is finite, \(c_{i}\in\mathbb{F}\), \(k\in\mathbb{N}_{0}\), \(v_{i,j}\in V\). The Clifford algebra's associated _bi-linear form_\(\mathfrak{b}(v_{1},v_{2}):=\frac{1}{2}\left(\mathfrak{q}(v_{1}+v_{2})- \mathfrak{q}(v_{1})-\mathfrak{q}(v_{2})\right)\) yields the _fundamental Clifford identity_: \(v_{1}v_{2}+v_{2}v_{1}=2\mathfrak{b}(v_{1},v_{2})\) for \(v_{1},v_{2}\in V\) (Lemma C.3). In this context, the quantity \(v_{1}v_{2}\) represents the _geometric product_, which is aptly named for its ability to compute geometric properties and facilitate various transformations. Note that when \(v_{1},v_{2}\) are orthogonal (e.g., for orthogonal basis vectors), \(\mathfrak{b}(v_{1},v_{2})=0\), in which case \(v_{1}v_{2}=-v_{2}v_{1}\). The dimensionality of the algebra is \(2^{n}\), where \(n:=\dim V\) (Theorem C.15). Let \(e_{1},\ldots,e_{n}\) be an orthogonal basis of \(V\). The tuple \((e_{A})_{A\subseteq[n]},[n]:=\{1,\ldots,n\}\), is an orthogonal basis for \(\operatorname{Cl}(V,\mathfrak{q})\), where for all such \(A\) the product \(e_{A}:=\prod_{i\in A}e_{i}\) is taken in increasing order (Theorem C.26). We see below that we can decompose the algebra into vector subspaces \(\operatorname{Cl}^{(m)}(V,\mathfrak{q}),m=0,\ldots,n\), called _grades_, where \(\dim\operatorname{Cl}^{(m)}(V,\mathfrak{q})=\binom{n}{m}\). Elements of grade \(m=0\) and \(m=1\) are scalars (\(\operatorname{Cl}^{(0)}(V,\mathfrak{q})=\mathbb{F}\)) and vectors (\(\operatorname{Cl}^{(1)}(V,\mathfrak{q})=V\)), respectively, while elements of grade \(m=2\) and \(m=3\) are referred to as _bivectors_ and _trivectors_. Similar terms are used for elements of even higher grade. These higher-order grades can represent (oriented) points, areas, and volumes, as depicted in Figure 1.
Footnote 1: If \(k_{i}=0\) the product of vectors \(v_{i,1}\cdots v_{i,k_{i}}\) is empty, and, in this case, we mean the unit \(1\) in \(\operatorname{Cl}(V,\mathfrak{q})\).
Clifford algebras provide tools that allow for meaningful algebraic representation and manipulation of geometric quantities, including areas and volumes [69; 29; 28]. In addition, they offer generalizations such as extensions of the exterior and Grassmannian algebras, along with the natural inclusion of complex numbers and Hamilton's quaternions [40; 41]. Applications of Clifford algebras can be found in robotics [6; 43], computer graphics [84; 13], signal processing [44; 8], and animation [45; 17]. In the context of machine learning, Clifford algebras have been employed to improve the performance of algorithms in various tasks. For example, [63] learn an equivariant embedding using _geometric neurons_ used for classification tasks. Further, [76] introduce geometric algebra attention networks for point cloud problems in physics, chemistry, and biology. More recently, [11] introduce Clifford neural layers and Clifford Fourier transforms for accelerating solving partial differential equations. [70] continue this direction, strengthening the geometric inductive bias by the introduction of geometric templates. Further, [61; 64; 93] introduce complex-valued and quaternion-valued networks. Finally, [56] define normalizing flows on the group of unit quaternions for sampling molecular crystal structures.
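The defining relations above can be made concrete with a minimal, self-contained sketch of the geometric product on basis blades; encoding a blade as a sorted tuple of basis indices and counting anticommutation swaps is purely illustrative and not any particular library's implementation.

```python
def blade_mul(A, B, sig):
    """Geometric product of basis blades A, B (sorted tuples of indices) over
    an orthogonal basis with e_i^2 = sig[i]; returns (coefficient, blade)."""
    coeff, out = 1, list(A)
    for b in B:
        k = len(out)
        while k > 0 and out[k - 1] > b:  # move b left, one sign flip per swap
            k -= 1
        coeff *= (-1) ** (len(out) - k)
        if k > 0 and out[k - 1] == b:    # a repeated vector squares to sig[b]
            coeff *= sig[b]
            out.pop(k - 1)
        else:
            out.insert(k, b)
    return coeff, tuple(out)

sig = {1: 1, 2: 1, 3: 1}                 # Euclidean Cl(3, 0): e_i^2 = 1
print(blade_mul((1,), (2,), sig))        # (1, (1, 2)):   e1 e2 =  e12
print(blade_mul((2,), (1,), sig))        # (-1, (1, 2)):  e2 e1 = -e12
print(blade_mul((1,), (1,), sig))        # (1, ()):       e1 e1 = q(e1) = 1
```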
## 3 Theoretical Results
In order to construct equivariant multivector-valued neural networks, we outline our theoretical results. We first introduce the following theorem regarding the multivector grading of the Clifford algebra, which is well-known in case the algebra's metric is non-degenerate. Although the general case, including a potentially degenerate metric, appears to be well-known, we were unable to find a self-contained proof during our studies. Hence, we include it here for completeness.
**Theorem 3.1** (The multivector grading of the Clifford algebra).: _Let \(e_{1},\ldots,e_{n}\) and \(b_{1},\ldots,b_{n}\) be two orthogonal bases of \((V,\mathfrak{q})\). Then the following sub-vector spaces \(\operatorname{Cl}^{(m)}(V,\mathfrak{q})\) of \(\operatorname{Cl}(V,\mathfrak{q})\), \(m=0,\ldots,n\), are independent of the choice of the orthogonal basis, i.e.,_
\[\operatorname{Cl}^{(m)}(V,\mathfrak{q}):=\operatorname{span}\left\{e_{A}\,|\,A \subseteq[n],|A|=m\right\}\stackrel{{!}}{{=}}\operatorname{ span}\left\{b_{A}\,|\,A\subseteq[n],|A|=m\right\}. \tag{1}\]
The proof can be found in Theorem C.27. We now claim that the Clifford algebra \(\operatorname{Cl}(V,\mathfrak{q})\) decomposes into an orthogonal direct sum of the vector subspaces \(\operatorname{Cl}^{(m)}(V,\mathfrak{q})\). To this end, we need to extend the bilinear form \(\mathfrak{b}\) from \(V\) to \(\operatorname{Cl}(V,\mathfrak{q})\). For elements \(x_{1},x_{2},x\in\operatorname{Cl}(V,\mathfrak{q})\), the _extended bilinear form_\(\bar{\mathfrak{b}}\) and the _extended quadratic form_\(\bar{\mathfrak{q}}\) are given via the projection onto the zero-component, where
\(\beta:\operatorname{Cl}(V,\mathfrak{q})\to\operatorname{Cl}(V,\mathfrak{q})\) denotes the _main anti-involution_ of \(\operatorname{Cl}(V,\mathfrak{q})\)2 :
Footnote 2: Recall that any \(x\in\operatorname{Cl}(V,\mathfrak{q})\) can be written as a linear combination of vector products. \(\beta\) is the map that reverses the order: \(\beta\left(\sum_{i\in I}c_{i}\cdot v_{i,1}\cdots v_{i,k_{i}}\right):=\sum_{i\in I }c_{i}\cdot v_{i,k_{i}}\cdots v_{i,1}\). For details, see Definition C.18.
\[\bar{\mathfrak{b}}:\operatorname{Cl}(V,\mathfrak{q})\times\operatorname{Cl}(V,\mathfrak{q})\to\mathbb{F},\hskip 28.452756pt\bar{\mathfrak{b}}(x_{1},x_{2}):=( \beta(x_{1})x_{2})^{(0)},\hskip 28.452756pt\bar{\mathfrak{q}}(x):=\bar{ \mathfrak{b}}(x,x). \tag{2}\]
Note that by construction, both \(\bar{\mathfrak{b}}\) and \(\bar{\mathfrak{q}}\) reduce to their original versions when restricted to \(V\). Using the extended quadratic form, the tuple \((\operatorname{Cl}(V,\mathfrak{q}),\bar{\mathfrak{q}})\) turns into a quadratic vector space in itself. As a corollary (see Corollary C.29), the Clifford algebra has an orthogonal-sum decomposition w.r.t. the extended bilinear form \(\bar{\mathfrak{b}}\):
\[\operatorname{Cl}(V,\mathfrak{q})=\bigoplus_{m=0}^{n}\operatorname{Cl}^{(m)}( V,\mathfrak{q}),\hskip 56.905512pt\dim\operatorname{Cl}^{(m)}(V, \mathfrak{q})=\binom{n}{m}. \tag{3}\]
This result implies that we can always write \(x\in\operatorname{Cl}(V,\mathfrak{q})\) as \(x=x^{(0)}+x^{(1)}+\cdots+x^{(n)}\), where \(x^{(m)}\in\operatorname{Cl}^{(m)}(V,\mathfrak{q})\) denotes the grade-\(m\) part of \(x\). Selecting a grade defines an orthogonal projection:
\[(\_)^{(m)}:\operatorname{Cl}(V,\mathfrak{q})\to\operatorname{Cl}^{(m)}(V, \mathfrak{q}),\hskip 36.135ptx\mapsto x^{(m)},\hskip 36.135ptm=0,\ldots,n. \tag{4}\]
Let us further introduce the notation \(\operatorname{Cl}^{[0]}(V,\mathfrak{q}):=\bigoplus_{m\text{ even}}^{n} \operatorname{Cl}^{(m)}(V,\mathfrak{q})\), whose elements are of _even parity_, and \(\operatorname{Cl}^{[1]}(V,\mathfrak{q}):=\bigoplus_{m\text{ odd}}^{n} \operatorname{Cl}^{(m)}(V,\mathfrak{q})\) for those of _odd parity_. We use \(x=x^{[0]}+x^{[1]}\) to denote the parity decomposition of a multivector.
### The Clifford Group and its Clifford Algebra Representations
Let \(\operatorname{Cl}^{\times}(V,\mathfrak{q})\) denote the group of invertible elements of the Clifford algebra, i.e., the set of those elements \(w\in\operatorname{Cl}(V,\mathfrak{q})\) that have an inverse \(w^{-1}\in\operatorname{Cl}(V,\mathfrak{q})\): \(w^{-1}w=ww^{-1}=1\). For \(w\in\operatorname{Cl}^{\times}(V,\mathfrak{q})\), we then define the _adjusted twisted conjugation_ as follows:
\[\rho(w):\operatorname{Cl}(V,\mathfrak{q})\to\operatorname{Cl}(V,\mathfrak{q}),\hskip 36.135pt\rho(w)(x):=wx^{[0]}w^{-1}+\alpha(w)x^{[1]}w^{-1}, \tag{5}\]
where \(\alpha\) is the _main involution_ of \(\operatorname{Cl}(V,\mathfrak{q})\), which is given by \(\alpha(w):=w^{[0]}-w^{[1]}\). This map \(\rho(w):\operatorname{Cl}(V,\mathfrak{q})\to\operatorname{Cl}(V,\mathfrak{q})\), notably not just from \(V\to V\), will be essential for constructing equivariant neural networks operating on the Clifford algebra. In general, \(\rho\) and similar maps defined in the literature do not always posses the required properties (see Motivation D.1). However, when our \(\rho\) is restricted to a carefully chosen subgroup of \(\operatorname{Cl}^{\times}(V,\mathfrak{q})\), many desirable characteristics emerge. This subgroup will be called the _Clifford group3_ of \(\operatorname{Cl}(V,\mathfrak{q})\) and we define it as:
Footnote 3: We elaborate shortly on the term _Clifford group_ in contrast to similar definitions in Remark D.14.
\[\Gamma^{[\times]}(V,\mathfrak{q}):=\left\{w\in\operatorname{Cl}^{\times}(V, \mathfrak{q})\cap\left(\operatorname{Cl}^{[0]}(V,\mathfrak{q})\cup\operatorname {Cl}^{[1]}(V,\mathfrak{q})\right)\Big{|}\,\forall v\in V.\,\rho(w)(v)\in V \right\}. \tag{6}\]
In words, \(\Gamma^{[\times]}(V,\mathfrak{q})\) contains all invertible (parity) homogeneous elements that preserve vectors (\(m=1\) elements) via \(\rho\). The _special Clifford group_ is defined as \(\Gamma^{[0]}(V,\mathfrak{q}):=\Gamma^{[\times]}(V,\mathfrak{q})\cap \operatorname{Cl}^{[0]}(V,\mathfrak{q})\).
Regarding the adjusted twisted conjugation, \(\rho(w)\) was ensured to reduce to a _reflection_ when restricted to \(V\), a property that we will conveniently use in the upcoming section. Specifically, when \(w,x\in\operatorname{Cl}^{(1)}(V,\mathfrak{q})=V\), \(w\in\operatorname{Cl}^{\times}(V,\mathfrak{q})\), then \(\rho(w)\) reflects \(x\) in the hyperplane that is normal to \(w\):
\[\rho(w)(x)=-wxw^{-1}\stackrel{{!}}{{=}}x-2\frac{\mathfrak{b}(w,x)}{ \mathfrak{b}(w,w)}w. \tag{7}\]
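In the Euclidean case, Equation (7) is the familiar Householder reflection, which can be sanity-checked numerically:

```python
import numpy as np

w = np.random.randn(3)
R = np.eye(3) - 2 * np.outer(w, w) / (w @ w)   # matrix form of Eq. (7) on V
assert np.allclose(R @ R.T, np.eye(3))         # orthogonal
assert np.isclose(np.linalg.det(R), -1.0)      # a reflection, not a rotation
assert np.allclose(R @ w, -w)                  # w itself is flipped
```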
Next, we collect several advantageous identities of \(\rho\) in the following theorem, which we elaborate on afterwards. For proofs, consider Lemma D.8, Theorem D.10, and Theorem D.28.
**Theorem 3.2**.: _Let \(w_{1},w_{2},w\in\Gamma^{[\times]}(V,\mathfrak{q})\), \(x_{1},x_{2},x\in\operatorname{Cl}(V,\mathfrak{q})\), \(m\in\{0,\ldots,n\}\). \(\rho\) then satisfies:_
1. _Additivity:_ \(\rho(w)(x_{1}+x_{2})=\rho(w)(x_{1})+\rho(w)(x_{2})\)_,_
2. _Multiplicativity:_ \(\rho(w)(x_{1}x_{2})=\rho(w)(x_{1})\rho(w)(x_{2})\)_,_
3. _Invertibility:_ \(\rho(w^{-1})(x)=\rho(w)^{-1}(x)\)_,_
4. _Composition:_ \(\rho(w_{2})\left(\rho(w_{1})(x)\right)=\rho(w_{2}w_{1})(x)\)_,_
5. _Orthogonality:_ \(\bar{\mathfrak{b}}\left(\rho(w)(x_{1}),\rho(w)(x_{2})\right)=\bar{\mathfrak{b}} \left(x_{1},x_{2}\right)\)_._
The first two properties state that \(\rho(w)\) is not only _additive_, but even _multiplicative_ regarding the geometric product. The third states that \(\rho(w)\) is invertible, making it an _algebra automorphism_ of \(\operatorname{Cl}(V,\mathfrak{q})\).
The fourth property then states that \(\rho:\,\Gamma^{[\times]}(V,\mathfrak{q})\to\operatorname{Aut}_{\mathbf{Alg}} (\operatorname{Cl}(V,\mathfrak{q}))\) is also a _group homomorphism_ to the group of all algebra automorphisms. In other words, \(\operatorname{Cl}(V,\mathfrak{q})\) is a linear representation of \(\Gamma^{[\times]}(V,\mathfrak{q})\) and, moreover, it is also an algebra representation of \(\Gamma^{[\times]}(V,\mathfrak{q})\). Finally, the last point shows that each \(\rho(w)\) generates an orthogonal map with respect to the extended bilinear form \(\bar{\mathfrak{b}}\). These properties yield the following results (see also Theorem D.16 and Corollary D.18).
**Corollary 3.3** (All grade projections are Clifford group equivariant).: _For \(w\in\Gamma^{[\times]}(V,\mathfrak{q})\), \(x\in\operatorname{Cl}(V,\mathfrak{q})\) and \(m=0,\dots,n\) we have the following equivariance property:_
\[\rho(w)(x^{(m)})=\left(\rho(w)(x)\right)^{(m)}. \tag{8}\]
_In particular, for \(x\in\operatorname{Cl}^{(m)}(V,\mathfrak{q})\) we also have \(\rho(w)(x)\in\operatorname{Cl}^{(m)}(V,\mathfrak{q})\)._
This implies that the grade projections: \(\operatorname{Cl}(V,\mathfrak{q})\to\operatorname{Cl}^{(m)}(V,\mathfrak{q})\) are \(\Gamma^{[\times]}(V,\mathfrak{q})\)-equivariant maps, and, that each \(\operatorname{Cl}^{(m)}(V,\mathfrak{q})\) constitutes an orthogonal representation of \(\Gamma^{[\times]}(V,\mathfrak{q})\). The latter means that we have group homomorphisms from the Clifford group to the group of all orthogonal invertible linear transformations of \(\operatorname{Cl}^{(m)}(V,\mathfrak{q})\): \(\rho:\,\Gamma^{[\times]}(V,\mathfrak{q})\to\operatorname{O}\left(\operatorname {Cl}^{(m)}(V,\mathfrak{q}),\bar{\mathfrak{q}}\right)\).
**Corollary 3.4** (All polynomials are Clifford group equivariant).: _Let \(F\in\mathbb{F}[T_{1},\dots,T_{\ell}]\) be a polynomial in \(\ell\) variables with coefficients in \(\mathbb{F}\), \(w\in\Gamma^{[\times]}(V,\mathfrak{q})\). Further, consider \(\ell\) elements \(x_{1},\dots,x_{\ell}\in\operatorname{Cl}(V,\mathfrak{q})\). Then we have the following equivariance property:_
\[\rho(w)\left(F(x_{1},\dots,x_{\ell})\right)=F(\rho(w)(x_{1}),\dots,\rho(w)(x_{ \ell})). \tag{9}\]
To prove that \(\rho(w)\) distributes over any polynomial, we use both its additivity and multiplicativity regarding the geometric product.
By learning the coefficients of such polynomials, we have achieved flexible parameterizations of Clifford group equivariant layers for Clifford algebras over quadratic vector spaces of any dimension or metric. We further involve grade projections to achieve denser parameterizations (see Figure 2). More details regarding neural network constructions follow in Section 4.
### Orthogonal Representations
To relate the theory above to the classical orthogonal group, we will restrict \(\rho(w)\) in this section to \(V\) for \(w\in\Gamma^{[\times]}(V,\mathfrak{q})\). Further, we restrict \((V,\mathfrak{q})\) to be _non-degenerate_ as we never consider degenerate quadratic forms in our experiments. In the supplementary material, however, we are more general.
**Theorem 3.5** (The adjusted twisted conjugation acting on \(V\)).: _The range of the Clifford group \(\Gamma^{[\times]}(V,\mathfrak{q})\) under the adjusted twisted conjugation \(\rho\), when restricted to act on \(V\), coincides with the group of all orthogonal (invertible) linear maps \(\operatorname{O}(V,\mathfrak{q})\) of \((V,\mathfrak{q})\). Its kernel is given by \(\ker\rho=\mathbb{F}^{\times}\)._
\[\operatorname{ran}\left(\rho:\,\Gamma^{[\times]}(V,\mathfrak{q})\to \operatorname{GL}(V)\right)=\operatorname{O}(V,\mathfrak{q}). \tag{10}\]
Figure 2: Commutative diagrams expressing Clifford group equivariance with respect to the main operations: polynomials \(F\) (left) and grade projections \((\_)^{(m)}\) (right).
This theorem allows one to provide explicit descriptions of the elements of the Clifford group, see Corollary D.26. To show Theorem 3.5, we invoke _Cartan-Dieudonne_ (Theorem B.13), stating that every orthogonal transformation decomposes into compositions of reflections, and recall that \(\rho(w)\) reduces to a reflection when restricted to \(V\), see Theorem D.24. Consequently, **equivariance w.r.t. the Clifford group**\(\Gamma^{[\times]}(V,\mathfrak{q})\), acting on \(\mathrm{Cl}(V,\mathfrak{q})\) via \(\rho\) according to the rules of Theorem 3.2, **implies equivariance w.r.t. the orthogonal group**\(\mathrm{O}(V,\mathfrak{q})\), acting on \(\mathrm{Cl}(V,\mathfrak{q})\) via \(\rho\) (Remark D.30).
Finally, it is worth noting that our method is also \(\mathrm{Pin}\) and \(\mathrm{Spin}\) group-equivariant4. These groups, sharing properties with the Clifford group, are also often studied in relation to the orthogonal group.
Footnote 4: We discuss in Appendix D.6 a general definition of these groups.
## 4 Methodology
We restrict our layers to use \(\mathbb{F}:=\mathbb{R}\). Our method is most similar to steerable methods such as [12]. However, unlike these works, we do not require an alternative basis representation based on spherical harmonics, nor do we need to worry about Clebsch-Gordan coefficients. Instead, we consider simply a steerable vector basis for \(V\), which then automatically induces a _steerable multivector basis_ for \(\mathrm{Cl}(V,\mathfrak{q})\). By steerability, we mean that this basis can be transformed in a predictable way under an action from the Clifford group, which acts orthogonally on both \(V\) and \(\mathrm{Cl}(V,\mathfrak{q})\) (see Figure 1).
We present layers yielding Clifford group-equivariant optimizable transformations. All the main ideas are based on Corollary 3.3 and Corollary 3.4. It is worth mentioning that the methods presented here form a first exploration of applying our theoretical results, making future optimizations rather likely.
**Linear Layers** Let \(x_{1},\ldots,x_{\ell}\in\mathrm{Cl}(V,\mathfrak{q})\) be a tuple of multivectors expressed in a steerable basis, where \(\ell\) represents the number of input channels. Using the fact that a polynomial restricted to the first order constitutes a linear map, we can construct a linear layer by setting
\[y^{(k)}_{c_{\text{out}}}:=T^{\text{lin}}_{\phi_{c_{\text{out}}}}(x_{1},\ldots, x_{\ell})^{(k)}:=\sum_{c_{\text{in}}=1}^{\ell}\phi_{c_{\text{out}}c_{\text{in}}k}\,x^{(k)}_{c_{\text{in}}}, \tag{11}\]
where \(\phi_{c_{\text{out}}c_{\text{in}}k}\in\mathbb{R}\) are optimizable coefficients and \(c_{\text{in}}\), \(c_{\text{out}}\) denote the input and output channel, respectively. As such, \(T_{\phi}:\mathrm{Cl}(V,\mathfrak{q})^{\ell}\rightarrow\mathrm{Cl}(V,\mathfrak{q})\) is a linear transformation in each algebra subspace \(k\). Recall that this is possible due to the result that \(\rho(w)\) respects the multivector subspaces. This computes a transformation for the output channel \(c_{\text{out}}\); the map can be repeated (using different sets of parameters) for other output channels, similar to classical neural network linear layers. For \(y^{(0)}_{c_{\text{out}}}\in\mathbb{R}\) (the scalar part of the Clifford algebra), we can additionally learn an invariant bias parameter.
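A hedged sketch of Eq. (11): assuming multivectors are stored in a blade basis ordered by subset index (so the grade of coordinate \(a\) is its popcount), each grade gets one scalar weight per input-output channel pair.

```python
import torch

def clifford_linear(x, phi):
    """x: (c_in, 2**n) multivector coefficients; phi: (c_out, c_in, n+1)
    weights, one scalar per grade, broadcast to all blades of that grade."""
    n = phi.shape[-1] - 1
    grades = torch.tensor([bin(a).count("1") for a in range(2 ** n)])
    W = phi[:, :, grades]                       # (c_out, c_in, 2**n)
    return torch.einsum("oib,ib->ob", W, x)     # per-grade linear map, Eq. (11)
```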
**Geometric Product Layers** A core strength of our method comes from the fact that we can also parameterize interaction terms. In this work, we only consider layers up to second order. Higher-order interactions are indirectly modeled via multiple successive layers. As an example, we take the pair \(x_{1},x_{2}\). Their interaction terms take the form \(\left(x_{1}^{(i)}x_{2}^{(j)}\right)^{(k)},i,j,k=0,\ldots,n\); where we again make use of the fact that \(\rho(w)\) respects grade projections. As such, all the grade-\(k\) terms resulting from the interaction of \(x_{1}\) and \(x_{2}\) are parameterized with
\[P_{\phi}(x_{1},x_{2})^{(k)}:=\sum_{i=0}^{n}\sum_{j=0}^{n}\phi_{ijk}\,\left(x_{ 1}^{(i)}x_{2}^{(j)}\right)^{(k)}, \tag{12}\]
where \(P_{\phi}:\mathrm{Cl}(V,\mathfrak{q})\times\mathrm{Cl}(V,\mathfrak{q}) \rightarrow\mathrm{Cl}(V,\mathfrak{q})\). This means that we get \((n+1)^{3}\) parameters for every geometric product between a pair of multivectors5. Parameterizing and computing all second-order terms amounts to \(\ell^{2}\) such operations, which can be computationally expensive given a reasonable number of channels \(\ell\). Instead, we first apply a linear map to obtain \(y_{1},\ldots,y_{\ell}\in\mathrm{Cl}(V,\mathfrak{q})\). Through this map, the mixing (i.e., the terms that will get multiplied) is learned. That is, we only get \(\ell\) pairs \((x_{1},y_{1}),\ldots,(x_{\ell},y_{\ell})\) from which we then compute \(z^{(k)}_{c_{\text{out}}}:=P_{\phi_{c_{\text{out}}}}(x_{c_{\text{in}}},y_{c_{ \text{in}}})^{(k)}\). Note that here we have \(c_{\text{in}}=c_{\text{out}}\), i.e., the number of channels does not change. Hence, we refer to this layer as
the _element-wise_ geometric product layer. We can obtain a more expressive (yet more expensive) parameterization by linearly combining such products by computing
\[z_{c_{\text{out}}}^{(k)}:=T_{\phi_{c_{\text{out}}}}^{\text{prod}}(x_{1},\ldots,x_ {\ell},y_{1},\ldots,y_{\ell})^{(k)}:=\sum_{c_{\text{in}}=1}^{\ell}P_{\phi_{c_{ \text{out}}}c_{\text{in}}}(x_{c_{\text{in}}},y_{c_{\text{in}}})^{(k)}, \tag{13}\]
which we call the _fully-connected_ geometric product layer. Computational feasibility and experimental verification should determine which parameterization is preferred.
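The element-wise variant of Eq. (12) can be sketched as follows; here `gp` (the geometric product on coefficient vectors) and `grade(z, k)` (the grade projection) are assumed helpers, e.g., built from a precomputed blade multiplication table.

```python
import torch

def geometric_product_layer(x, y, phi, gp, grade):
    """x, y: (c, 2**n) multivectors per channel; phi: (c, n+1, n+1, n+1)
    weights over (input grade i, input grade j, output grade k), Eq. (12)."""
    n = phi.shape[-1] - 1
    out = torch.zeros_like(x)
    for i in range(n + 1):
        for j in range(n + 1):
            z = gp(grade(x, i), grade(y, j))          # x^(i) y^(j)
            for k in range(n + 1):
                out = out + phi[:, i, j, k, None] * grade(z, k)
    return out
```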
**Normalization and Nonlinearities** Since our layers involve quadratic and potentially higher-order interaction terms, we need to ensure numerical stability. In order to do so, we use a normalization operating on each multivector subspace before computing geometric products by putting
\[x^{(m)}\mapsto\frac{x^{(m)}}{\sigma(a_{m})\,\left(\bar{\mathfrak{q}}(x^{(m)})- 1\right)+1}, \tag{14}\]
where \(x^{(m)}\in\mathrm{Cl}^{(m)}(V,\mathfrak{q})\). Here, \(\sigma\) denotes the logistic sigmoid function, and \(a_{m}\in\mathbb{R}\) is a learned scalar. The denominator interpolates between \(1\) and the quadratic form \(\bar{\mathfrak{q}}\left(x^{(m)}\right)\), normalizing the magnitude of \(x^{(m)}\). This ensures that the geometric products do not cause numerical instabilities without losing information about the magnitude of \(x^{(m)}\), where a learned scalar interpolates between both regimes. Note that by Theorem 3.2, \(\bar{\mathfrak{q}}(x^{(m)})\) is invariant under the action of the Clifford group, rendering Equation (14) an equivariant map.
Next, we use the layer-wide normalization scheme proposed by [70], which, since it is also based on the extended quadratic form, is also equivariant with respect to the adjusted twisted conjugation.
Regarding nonlinearities, first note that the geometric product layers are already nonlinear. However, we still require activation functions after linear layers. To this end, we use a slightly adjusted version of the units proposed by [70]. Since the scalar subspace \(\mathrm{Cl}^{(0)}(V,\mathfrak{q})\) is always invariant with respect to the adjusted twisted conjugation, we can apply \(x^{(m)}\mapsto\mathrm{ReLU}\left(x^{(m)}\right)\) when \(m=0\) and \(x^{(m)}\mapsto\sigma_{\phi}\left(\bar{\mathfrak{q}}\left(x^{(m)}\right)\right) x^{(m)}\) otherwise. We can replace ReLU with any common scalar activation function. Here, \(\sigma_{\phi}\) represents a potentially parameterized nonlinear function. Usually, however, we restrict it to be the sigmoid function. Since we modify \(x^{(m)}\) with an invariant scalar quantity, we retain equivariance. Such gating activations are commonly used in the equivariance literature [88; 36].
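Both equivariant operations admit short sketches; `qbar` below is an assumed helper computing the (invariant) extended quadratic form of Eq. (2) for a grade component, broadcast over channels.

```python
import torch

def grade_normalize(xm, a_m, qbar):
    """Eq. (14): interpolated normalization of a grade-m component x^(m);
    a_m is a learned scalar, sigma the logistic sigmoid."""
    return xm / (torch.sigmoid(a_m) * (qbar(xm) - 1) + 1)

def gated_activation(xm, m, qbar):
    """Gating nonlinearity: plain ReLU on the invariant scalar grade,
    sigmoid(qbar)-scaling on higher grades; both are equivariant."""
    return torch.relu(xm) if m == 0 else torch.sigmoid(qbar(xm)) * xm
```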
## 5 Experiments
Here, we show that CGENNs excel across challenging tasks, attaining top performance in several unique contexts that typically require specialized tools.
### Estimating Volumetric Quantities
\(\mathrm{O}(3)\) **Experiment: Signed Volumes** This task highlights the fact that equivariant architectures based on scalarization are not able to extract some essential geometric properties from input data. In a synthetic setting, we simulate a dataset consisting of random three-dimensional tetrahedra. A main advantage of our method is that it can extract covariant quantities including (among others) _signed volumes_, which we demonstrate in this task. Signed volumes are geometrically significant because they capture the orientation of geometric objects in multidimensional spaces. For instance, in computer graphics, they can determine whether a 3D object is facing towards or away from the camera, enabling proper rendering and improving the overall visual experience. The loss function in this experiment is the mean-squared error between the signed volume and its true value. Note that we are predicting a _covariant_ quantity under \(\mathrm{O}(3)\) transformations. As such, scalarization methods (which rely on invariant quantities) cannot extract this feature from the data. This is illustrated in the left part of Figure 3. We compare against a standard multilayer perceptron (MLP), an MLP version of the \(\mathrm{E}(n)\)-GNN [71] which uses neural networks to update positions with scalar multiplication, _Vector Neurons_ (VN) [26], and _Geometric Vector Perceptrons_ (GVP) [50]. We see that the scalarization methods fail to access covariant features, which are necessary for this task, as evidenced by their test loss not improving even with more available data. The multilayer perceptron, although a universal
approximator, lacks the correct inductive biases. Our model, however, has the correct inductive biases (e.g., the equivariance property) and can also access the signed volume.
\(\mathrm{O}(5)\) **Experiment: Convex Hulls** We go a step further and consider a _five-dimensional_ space, showcasing our models' ability to generalize to high dimensions. We also make the task more challenging by including more points and estimating the volume of the convex hull generated by these points - a task that requires sophisticated algorithms in the classical case. Note that some points may live inside the hull and do not contribute to the volume. We use the same network architectures as before (but now embedded in a five-dimensional space) and present the results in Figure 4. We report the error bars for CGENNs, representing three times the standard deviation of the results of eight runs with varying seeds. Volume (unsigned) is an invariant quantity, enabling the baseline methods to approximate its value. However, we still see that CGENNs outperform the other methods, the only exception being the low-data regime of only 256 available data points. We attribute this to our method being more flexible by considering more than only invariant features, making it slightly more prone to overfitting. To mitigate this issue, future work could explore regularization techniques or other methods to reduce overfitting in low-data scenarios.
### \(\mathrm{O}(5)\) Experiment: Regression
We compare against the methods presented by [32] who propose an \(\mathrm{O}(5)\)-invariant regression problem. The task is to estimate the function \(f(x_{1},x_{2}):=\sin(\|x_{1}\|)-\|x_{2}\|^{3}/2+\frac{x_{1}^{\top}x_{2}}{\|x_ {1}\|\|x_{2}\|}\), where the five-dimensional vectors \(x_{1},x_{2}\) are sampled from a standard Gaussian distribution in order to simulate train, test, and validation datasets. The results are shown in Figure 4. We see that we significantly outperform the MLP and MLP+Aug baselines representing a vanilla MLP and an MLP trained with data augmentation, respectively. Moreover, we also demonstrate an outstanding performance in contrast with the \(\mathrm{O}(5)\) and \(\mathrm{SO}(5)\) EMLP architecture of [32].
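The stated target function and sampling protocol translate directly into a data-generation sketch (dataset size illustrative):

```python
import numpy as np

def f(x1, x2):                         # the O(5)-invariant regression target
    n1, n2 = np.linalg.norm(x1), np.linalg.norm(x2)
    return np.sin(n1) - n2 ** 3 / 2 + (x1 @ x2) / (n1 * n2)

rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((2, 1024, 5))      # standard Gaussian samples
y = np.array([f(a, b) for a, b in zip(X1, X2)])
```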
### \(\mathrm{E}(3)\) Experiment: \(n\)-Body System
The \(n\)-body experiment [53] serves as a benchmark for assessing the performance of equivariant (graph) neural networks in simulating physical systems [42]. In this experiment, the dynamics of \(n=5\) charged particles in a three-dimensional space are simulated. Given the initial positions and velocities of these particles, the task is to accurately estimate their positions after 1,000 timesteps. To address this challenge, we construct a graph neural network (GNN) using the Clifford equivariant layers introduced in the previous section. The input to the network consists of the mean-subtracted positions of the particles (to achieve translation equivariance) and their velocities. Additionally, we include the invariant charges as part of
the input and their products as edge attributes. We compare against the steerable SE(3)-Transformers [34], Tensor Field Networks [79], and SEGNN [12]. Scalarization baselines include Radial Field [57] and EGNN [71]. Finally, NMP [37] is not an \(\mathrm{E}(3)\)-equivariant method. The number of parameters in our model is maintained similar to the EGNN baseline to ensure a fair comparison.
Results of our experiment are presented in Table 1, where we also report for CGENN three times the standard deviation of three identical runs with different seeds. Our approach significantly outperforms earlier methods and is considerably better than [12], surpassing all baselines. This experiment again demonstrates the advantage of leveraging covariant information in addition to scalar quantities, as it allows for a more accurate representation of the underlying physics and leads to better predictions.
### \(\mathrm{O}(1,3)\) Experiment: Top Tagging
Jet tagging in collider physics is a technique used to identify and categorize high-energy jets produced in particle collisions, as measured by, e.g., CERN's ATLAS detector [15]. By combining information from various parts of the detector, it is possible to trace back the origin of these jets and classify them. Reconstruction algorithms rely on the recovery of the jet's energy, momentum, and other properties [55; 21]. The current experiment seeks to tag jets arising from the heaviest particles of the standard model: the "top quarks" [49]. A jet tag should be invariant with respect to the reference frame in which the jet is observed, whereas the frames themselves change under _Lorentz boosts_ due to the relativistic nature of the particles. A Lorentz boost is a transformation that relates the space and time coordinates of an event as seen from two inertial reference frames that are moving relative to each other at a constant velocity. A defining characteristic of these transformations is that they preserve the _Minkowski metric_, which is given by \(\gamma(ct,x,y,z):=(ct)^{2}-x^{2}-y^{2}-z^{2}\). The set of all such transformations is captured by the orthogonal group \(\mathrm{O}(1,3)\); therefore, our method is fully compatible with modeling this problem.
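As a concrete check that boosts are orthogonal transformations in this sense, a boost along \(x\) with rapidity \(\varphi\) preserves the Minkowski form:

```python
import numpy as np

phi = 0.8                                        # rapidity of the boost
L = np.array([[np.cosh(phi), np.sinh(phi), 0, 0],
              [np.sinh(phi), np.cosh(phi), 0, 0],
              [0,            0,            1, 0],
              [0,            0,            0, 1]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])           # Minkowski metric
assert np.allclose(L.T @ eta @ L, eta)           # L lies in O(1, 3)
```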
We evaluate our model on a top tagging benchmark published by [52]. It contains 1.2M training entries, 400k validation entries, and 400k testing entries. For each jet, the energy-momentum \(4\)-vectors are available for up to \(200\) constituent particles, making this a much larger-scale experiment than the ones presented earlier. The baselines include ResNeXt [91], P-CNN [22], PFN [58], ParticleNet [67], LGN [10], EGNN [71], and the more recent LorentzNet [38]. Among these, LGN is a steerable method, whereas EGNN and LorentzNet are scalarization methods. The other methods are not Lorentz-equivariant. The performance metrics include classification accuracy, Area Under the Receiver Operating Characteristic Curve (AUC), and the background rejection rate \(1/\epsilon_{B}\) at signal efficiencies of \(\epsilon_{S}=0.3\) and \(\epsilon_{S}=0.5\), where \(\epsilon_{B}\) and \(\epsilon_{S}\) are the false positive and true positive rates, respectively. We observe that LorentzNet, a method that uses invariant quantities, is an extremely competitive baseline that was optimized for this task. Despite this, CGENNs are able to match its performance while maintaining the same core implementation.
## 6 Conclusion
We presented a novel approach for constructing \(\mathrm{E}(n)\)-equivariant neural networks based on Clifford algebras. After establishing the required theoretical results, we proposed parameterizations of
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & Accuracy (\(\uparrow\)) & AUC (\(\uparrow\)) & \(1/\epsilon_{B}\) (\(\uparrow\)) (\(\epsilon_{S}=0.5\)) & \(1/\epsilon_{B}\) (\(\uparrow\)) (\(\epsilon_{S}=0.3\)) \\ \hline ResNeXt [91] & \(0.936\) & \(0.9837\) & \(302\) & \(1147\) \\ P-CNN [22] & \(0.930\) & \(0.9803\) & \(201\) & \(759\) \\ PFN [58] & \(0.932\) & \(0.9819\) & \(247\) & \(888\) \\ ParticleNet [67] & \(0.940\) & \(0.9858\) & \(397\) & \(1615\) \\ EGNN [71] & \(0.922\) & \(0.9760\) & \(148\) & \(540\) \\ LGN [10] & \(0.929\) & \(0.9640\) & \(124\) & \(435\) \\ LorentzNet [38] & \(\mathbf{0.942}\) & \(\mathbf{0.9868}\) & \(\mathbf{498}\) & \(\mathbf{2195}\) \\ \hline CGENN & \(\mathbf{0.942}\) & \(\mathbf{0.9869}\) & \(\mathbf{500}\) & \(\mathbf{2172}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison between our proposed method and alternative algorithms on the top tagging experiment. We present the accuracy, Area Under the Receiver Operating Characteristic Curve (AUC), and background rejection \(1/\epsilon_{B}\) at signal efficiencies of \(\epsilon_{S}=0.3\) and \(\epsilon_{S}=0.5\).
nonlinear multivector-valued maps that exhibit notable versatility and applicability across several scenarios varying in dimension. This was achieved by the core insight that polynomials in multivectors are \(\mathrm{O}(n)\)-equivariant functions. Theoretical results were empirically substantiated in three distinct experiments, outperforming or matching baselines that were specifically designed for these tasks.
**Limitations** CGENNs induce a (non-prohibitive) degree of computational overhead similar to other steerable methods. On the plus side, we believe that improved code implementations such as custom GPU kernels or alternative parameterizations can significantly alleviate this issue, potentially also improving performance on benchmark datasets. This work provides solid theoretical and experimental foundations for such developments.
|
2301.01048 | A Theory of I/O-Efficient Sparse Neural Network Inference | As the accuracy of machine learning models increases at a fast rate, so does
their demand for energy and compute resources. On a low level, the major part
of these resources is consumed by data movement between different memory units.
Modern hardware architectures contain a form of fast memory (e.g., cache,
registers), which is small, and a slow memory (e.g., DRAM), which is larger but
expensive to access. We can only process data that is stored in fast memory,
which incurs data movement (input/output-operations, or I/Os) between the two
units. In this paper, we provide a rigorous theoretical analysis of the I/Os
needed in sparse feedforward neural network (FFNN) inference. We establish
bounds that determine the optimal number of I/Os up to a factor of 2 and
present a method that uses a number of I/Os within that range. Much of the
I/O-complexity is determined by a few high-level properties of the FFNN (number
of inputs, outputs, neurons, and connections), but if we want to get closer to
the exact lower bound, the instance-specific sparsity patterns need to be
considered. Departing from the 2-optimal computation strategy, we show how to
reduce the number of I/Os further with simulated annealing. Complementing this
result, we provide an algorithm that constructively generates networks with
maximum I/O-efficiency for inference. We test the algorithms and empirically
verify our theoretical and algorithmic contributions. In our experiments on
real hardware we observe speedups of up to 45$\times$ relative to the standard
way of performing inference. | Niels Gleinig, Tal Ben-Nun, Torsten Hoefler | 2023-01-03T11:23:46Z | http://arxiv.org/abs/2301.01048v1 | # A Theory of I/O-Efficient Sparse Neural Network Inference
###### Abstract
As the accuracy of machine learning models increases at a fast rate, so does their demand for energy and compute resources. On a low level, the major part of these resources is consumed by data movement between different memory units. Modern hardware architectures contain a form of _fast memory_ (e.g., cache, registers), which is small, and a _slow memory_ (e.g., DRAM), which is larger but expensive to access. We can only process data that is stored in fast memory, which incurs data movement (input/output-operations, or _I/Os_) between the two units. In this paper, we provide a rigorous theoretical analysis of the I/Os needed in sparse feedforward neural network (FFNN) inference. We establish bounds that determine the optimal number of I/Os up to a factor of \(2\) and present a method that uses a number of I/Os within that range. Much of the _I/O-complexity_ is determined by a few high-level properties of the FFNN (number of inputs, outputs, neurons, and connections), but if we want to get closer to the exact lower bound, the instance-specific sparsity patterns need to be considered. Departing from the 2-optimal computation strategy, we show how to reduce the number of I/Os further with simulated annealing. Complementing this result, we provide an algorithm that constructively generates networks with maximum I/O-efficiency for inference. We test the algorithms and empirically verify our theoretical and algorithmic contributions. In our experiments on real hardware we observe speedups of up to 45\(\times\) relative to the standard way of performing inference.
Neural Network Inference, Sparse Neural Networks, I/O-Complexity, Simulated Annealing.
## I Introduction
Almost all modern computing systems deploy different memory units: Some are fast but small and some are large but slow. Data can only be processed in fast memory. As the fast memory is very limited in size, data needs to be moved between fast and slow memory over the course of a computation. These data movements are called **I/Os** (abbreviating _input/output-operations_). They are very expensive in time and energy. For example, on an NVIDIA V100 GPU, loading values from global memory costs _up to 514.5\(\times\) more clock cycles_ than one fused-multiply-add (FMA) operation between two 32-bit floating point values (or 14\(\times\) if stored on an L1 cache) [1].
In deep learning, and scientific computing in general, I/Os account for the major part of time and energy expenses [2]. Hence, it is important to use I/Os efficiently by performing computations in a way that makes use of "data locality" and "data reuse". We need to schedule the computational steps in such a way that accesses to the same data are close together, sparing eviction of values from fast memory (e.g., cache) and reading them again. Therefore, optimizing the implementation of DNN inference [3] and training [4] often requires tailor-made implementations for the available memory architectures in order to reach high utilization.
In this paper, we analyze and optimize I/Os with a theoretical model that applies across the wide variety of hardware in which this phenomenon occurs. We consider exclusively sparse Feed-Forward Neural Networks (FFNNs) without shared connections. This model contains, but is not limited to, pruned Multi-Layer Perceptrons (MLP). We assume that the FFNN is given as a list of weighted edges in a Directed Acyclic Graph (DAG) together with one additional value for each vertex (being the input-value for input-neurons and the bias for non-input neurons). In this setting, the primary way to obtain I/O-efficiency is to "use" the connections in an efficient order. That is, when we use a connection, the value of the input neuron and the value of the partial sum of the output neuron should ideally already be in fast memory. Therefore, we need to find an order of the connections in which connections that have neurons in common are close together.
Finding an order which is optimal with respect to this property is a combinatorial problem. We will show that if we use a traditional computational order, which corresponds to a natural interpretation of FFNN inference (for example, seeing it as a sequence of matrix vector multiplications) we can be far from optimal: The number of read-I/Os could be up to twice the optimal number, and the number of write-I/Os could differ from the optimal number by an arbitrarily large factor.
Computations in ML have some special features that we can use when aiming for I/O-efficiency. Unlike the research on I/O-efficient computing outside of ML, there are more degrees of freedom that we can exploit. For example, there are several acceptable solutions to a given problem: Two different FFNNs may produce different outputs, yet both can have a satisfying accuracy. It thus makes sense to adapt the neural architecture to the hardware. On the other hand, there is additional freedom on the side of the hardware, as it is not uncommon to search or even build hardware for specific ML applications [5, 6, 3].
Hence, our investigation is directed towards answering the following questions: (1) _If we are given an FFNN and a fast memory of a given size, what is the **minimal number of I/Os** needed to perform inference on this FFNN?_ (2) _For a given FFNN, what is the **smallest memory size** with which we can perform inference with a minimal number of I/Os (i.e., we can avoid having to read and write intermediate results due to lack of memory)?_ and (3) _For a fast memory of a given size,
_what are the **FFNN architectures** on which we can perform inference with a minimal number of I/Os?_
### _Background and related work_
_I/O-complexity._ There is an extensive line of theoretical research on I/O-efficient algorithms outside of machine learning. Hong & Kung's seminal work [7] is among the first to investigate I/O-efficient algorithms formally. This work introduced a theoretical model called the _red-blue pebble game_ and used this model to show how standard matrix multiplication or FFT should be scheduled to use an asymptotically minimal number of I/Os. This model has been extended and altered in various ways. These extensions have been used to establish lower bounds and matching upper bounds for problems such as sorting, permutation, and matrix transposition [8]; as well as sparse matrix dense vector multiplication [9], and various graph algorithms [10]. However, these solutions are typically only optimal up to constant multiplicative factors and often these factors are large or not exactly known. In fact, it has been shown that finding exactly optimal solutions to the red-blue pebble game (corresponding to computation schedules with an exact minimal number of data movements) is a PSPACE-complete problem [11]. Furthermore, this problem has been hypothesized to be even hard to approximate [11], since inapproximability is known for related pebble games [12, 13].
There are also bounds on the I/O-complexity of general computations with large proportions of inputs [14]. These bounds apply to the problem considered in our paper. However, they are less fine-grained (they do not distinguish between read- and write-I/Os as we do) and generally less tight as the ones that we establish in this paper.
In ML-specific algorithms, I/O-efficiency has been practically identified as a crucial performance factor. Performance modeling of DNNs is usually derived from operation counts [15] and sums of fixed values obtained from benchmarking individual layers [16, 17]. As for I/O complexity of DNN evaluation, the only theoretical work apart from this one, to the best of our knowledge, is given by Demmel & Dinh [18], who provide communication bounds for convolutional and pooling layers.
_Network pruning and sparsity._ DNN pruning (i.e., removing certain edges or nodes from a network while preserving accuracy) has been extensively studied over the last decades [19, 20, 21]. Pruning is of much practical interest for edge computing devices, as it allows us to drastically reduce the memory footprint and the number of floating point operations. Dense (fully-connected) layers have been a prime target for pruning, as they tend to account for a major proportion of the memory size. For example, in AlexNet [22], the final dense layers contain \(90\%\) of the parameters. Also transformers [23], which are currently very popular and achieve state-of-the-art results in NLP tasks, include FFNNs that comprise a large proportion of the overall size. For example, BERT [34] includes several FFNNs of depth \(2\) and weight matrices of dimensions \(1024\times 4096\) and \(4096\times 1024\), and the major part of BERT's parameters and compute time for inference come from these FFNNs [24].
There are different ways to prune networks, e.g., by a given weight threshold [25, 26], the top-\(k\) values [27, 28], or by way of gradual weight elimination [19]. Regardless of the method, pruning dense layers leads to sparse and unstructured networks, for which data-locality is more difficult to obtain.
_Hardware Architectures._ There has been work on hardware-based optimizations for inference, especially for the class of sparse FFNNs, where Sze et al. [3] provide a detailed survey. Specific approaches include EDEN [29], which uses approximate DRAM for improving energy-efficiency; Bhardwaj et al. [30], who consider communication in inference in a distributed setting; and EIE [5] and Eyeriss [31], which run inference on compressed CNNs directly, using accelerated sparse matrix-vector multiplications and run-length encoding, respectively. While these approaches improve certain aspects of DNN processing, as the network size grows, such architectures ultimately also resort to multi-level memory hierarchies, which our approach addresses.
### _Contributions_
Our main theoretical contributions are a collection of theorems and propositions that analyze the I/Os in FFNN inference. Theorem 1 provides bounds on the number of I/Os that are needed to perform inference on a given FFNN-architecture. In Proposition 1, we show that these bounds are optimal in the sense that none of them could be tightened by multiplying with any factor other than \(1\). The lower and upper bounds on the total number of I/Os differ by a multiplicative factor of at most \(2\). The proof of Theorem 1 is constructive and shows how a computation can be done to use a number of I/Os within this 2-optimal range. Departing from 2-optimality, our goal is to get closer to the _exact_ optimum by reordering the connections beneficially. We approach this problem with Simulated Annealing. In Section V we present a method for generating FFNNs, which according to Theorem 2 completely characterizes the architectures on which we can perform inference with a minimal number of I/Os for a given memory size (i.e., exactly matching the lower bound). Hence, this method can be used as a powerful tool to co-design neural networks and hardware. As a corollary of this theorem we obtain an upper bound on the smallest possible memory size that allows us to perform inference with maximal I/O-efficiency (that is, without having to write or read any temporary values). We test the algorithms on random FFNNs across a wide range of parameter settings (density, depth, width, memory sizes).
## II Model and notations
We assume that initially all FFNN parameters and all input values are laid out in slow memory. These data are exactly the union of the following three types of data:
* The weighted connections.
* One bias value for each non-input neuron.
* One input value for each input neuron.
Each connection is described by an independent parameter triple \((i,j,w_{ij})\), where \(i\) is the input neuron, \(j\) the output neuron, and \(w_{ij}\) the weight (there is no reuse or sharing of weights). We denote the number of connections as \(W\), the number of neurons \(N\), the number of input neurons \(I\), and the number of output neurons \(S\). In our theoretical analysis, we
assume the connections (the entire triples describing them), the numerical values at the neurons (biases, partial sums, outputs of the activation functions), and all other numerical values involved in the computation are data types of the same size. Hence, the entire problem size is given by the total number of weights, biases, and input neurons: \(W+(N-I)+I=W+N\). We have a fast memory that can hold at any given time a number of values (of this data type) that is given by the parameter \(M\), where we assume \(M\geq 3\). In this model the single parameter \(M\) defines the whole architecture: fast memory of size \(M\) and slow memory of unlimited size. We can perform arbitrary computations "for free" on data that is stored in fast memory, including application of activation functions to those values. In particular, if a connection, the value of its input neuron, and the partial sum of its output neuron (which we consider initially the bias) are stored in fast memory, then we can add the product of the input value and the connection weight to the partial sum of the output neuron (and we assume that no additional memory is required for this operation). If we want to do a computation that requires a value that is currently only stored in slow memory, we first need to move this value from slow to fast memory. This movement counts as 1 **read-I/O**. Furthermore, if our memory is full, we first need to free up space before we can read new values. To do this, we can delete values. However, if we need a value again in future computation steps or if it is the value of an output neuron, we have to write the value to slow memory before deleting it. Deletions are for free whereas each write operation counts as 1 **write-I/O**. Initially, all \(N+W\) parameters are stored in slow memory. The number of I/Os of a computation is the sum of read-I/Os and write-I/Os. The goal is to perform inference computation, namely compute and store (i.e., write back to slow memory) the value of all output-neurons, with as few I/Os as possible. We formally define the optimum that we aim for:
**Definition 1**.: _For a given FFNN \(\mathcal{N}\) and fast memory size \(M\), we let **I/Os**\((\mathcal{N},M)\in\mathbb{N}\) denote the minimum number of overall I/Os that are needed to perform inference on \(\mathcal{N}\) with a memory size \(M\), where the minimum is taken over all possible computation strategies (i.e., sequences of computation-, read-, and write-steps that solve this problem). Let **rI/Os**\((\mathcal{N},M)\in\mathbb{N}\) denote the minimum number of read-I/Os and **wI/Os**\((\mathcal{N},M)\in\mathbb{N}\) denote the minimum number of write-I/Os needed._
### _How can we describe inference-computations in this model?_
We introduce the concept of _eviction policy_, and show that a computation corresponds to an _eviction policy_ together with a topological order of the connections. An **eviction policy** is a set of rules or instructions that specify how we evict data from fast memory. That is, when the fast memory is full and we need to evict a value to create free space, the eviction policy determines which of the values to evict. **LRU** (least-recently-used) is the eviction policy defined by always evicting the value that has been used least recently among all values in fast memory. **RR** (round-robin) is the eviction policy defined by having a pointer specifying the value to be evicted next and moving this pointer one place to the right whenever we evict a value (and moving the pointer again to the first place of the memory when we reach the end). The eviction policy **MIN** (also known as Belady's optimal replacement algorithm) is defined by always evicting the value that will be referenced farthest in the future (if there are values that will not be used again, any of those is evicted). It has been shown that **MIN** uses the minimal number of I/Os for a given sequence of computation steps [32]. Notice that while it is difficult to implement **MIN** eviction policies for general computations, in the case of FFNN inference it is trivial to implement it offline once we have fixed a topological order in which we process the weights (always evict the activations adjacent to weights farthest away in the given topological order).
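For FFNN inference, the offline MIN decision therefore takes only a few lines; the following is our own sketch with hypothetical names, not the authors' code.

```python
def min_evict(fast_memory, next_use):
    """Pick the eviction victim under Belady's MIN policy.

    fast_memory: set of neuron ids currently held in fast memory.
    next_use: dict mapping a neuron id to the index of the next connection
              (in the fixed topological order) that touches it, or None if
              the neuron is never referenced again.
    """
    # A value that is never referenced again is always a safe victim.
    for n in fast_memory:
        if next_use.get(n) is None:
            return n
    # Otherwise, evict the value whose next reference lies farthest ahead.
    return max(fast_memory, key=lambda n: next_use[n])
```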
We assume that an efficient eviction policy is provided. That is, when we evict a value that is either (1) already stored in slow memory (for example, when we read the value of a computed neuron to use it for its outgoing connections, but do not change it), or (2) a value that we will not need again in the future (a computed non-output-neuron has been used already for computing all neurons that depend on it), we simply delete it from fast memory, without spending a write-I/O.
Now, let \(e_{1},e_{2},\ldots,e_{W}\) be a topological order of the connections of the neural network (that is, whenever \(e_{i}\) and \(e_{j}\) are connections for which the output-neuron of \(e_{i}\) is the input-neuron of \(e_{j}\), we have \(i<j\)). Together with an eviction policy, a topological order of the connections gives rise to an inference-computation in a natural way, shown by Algorithm 1.
```
Input: A topological order of the connections \(e_{1},\ldots,e_{W}\)
Output: Values of the output neurons
for \(i=1\) to \(W\) do
    Read the connection \(e_{i}=(a,b,w)\)
    if the value of the input neuron \(n_{a}\) is not in fast memory then
        Read \(n_{a}\) (possibly first evicting one value, if fast memory is full)
    end if
    if the partial sum of the output neuron \(n_{b}\) is not in fast memory then
        Read \(n_{b}\) (possibly first evicting one value, if fast memory is full)
    end if
    Update \(n_{b}\gets n_{b}+w\cdot n_{a}\)
    if there is no connection after \(e_{i}\) with output neuron \(n_{b}\) then
        Apply the activation function: \(n_{b}\gets f(n_{b})\)
    end if
end for
```
**Algorithm 1** Inference Algorithm
Notice that this pseudocode generalizes all "standard ways" of performing inference. For example, matrix-vector-multiplication based inference would correspond to orders that start with all connections from the first layer, followed by the connections of the second layer and so on. Yet, it also
allows us to perform inference on FFNN-architectures given by any possible DAG (including those with very "chaotic" skip connections) and not just those that are layered.
As for optimizing I/Os, it gives us more flexibility, as it allows us to employ any possible topological order of the connections, including those _that do not correspond to layer-after-layer computations_. For example, it allows us to start computing neurons of the layers \(i+1,i+2,\ldots\) even when not all neurons of the \(i\)-th layer have been computed. This can save I/Os, because when we finish computing a neuron from the \(i\)-th layer, we can directly reuse it to start computing neurons of the \((i+1)\)-st layer, instead of storing it and reading it again later. Since for each connection we perform a computation for which we need the value of the input neuron and the partial sum at its output neuron, we would like to find an order in which connections that have neurons in common are clustered together.
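To make the cost model concrete, here is a minimal executable sketch of Algorithm 1 (our own illustration, not the authors' code) with an explicit fast-memory simulation under a MIN policy; the function name, the ReLU activation, and the convention of reserving one memory slot for the current connection are our assumptions. On a star of \(I\) inputs feeding one output neuron it counts \(2W+N-I\) reads and a single write, matching Lemma 2 below.

```python
from collections import defaultdict

def inference_io(connections, biases, inputs, outputs, M):
    """Run Algorithm 1 and count (read_ios, write_ios) under a MIN policy.

    connections: list of (a, b, w) triples in a topological order.
    biases:  dict neuron -> bias, for every non-input neuron.
    inputs:  dict neuron -> input value, for every input neuron.
    outputs: set of output neurons.
    M: fast memory size; one slot is reserved for the current connection.
    """
    uses = defaultdict(list)          # neuron -> indices of touching connections
    for t, (a, b, _) in enumerate(connections):
        uses[a].append(t)
        uses[b].append(t)

    def next_use(n, t):               # next step after t that touches neuron n
        return next((i for i in uses[n] if i > t), None)

    value, memory, stored = {}, set(), set()
    reads = writes = 0

    def fetch(n, t, pinned=()):       # bring neuron n into fast memory
        nonlocal reads, writes
        if n in memory:
            return
        if len(memory) == M - 1:      # memory full: evict one value (MIN)
            cands = [m for m in memory if m not in pinned]
            dead = [m for m in cands if next_use(m, t) is None]
            v = dead[0] if dead else max(cands, key=lambda m: next_use(m, t))
            if v not in stored and next_use(v, t) is not None:
                writes += 1           # still needed, but not in slow memory
                stored.add(v)
            memory.remove(v)
        reads += 1                    # read an input value, bias, or partial sum
        if n not in value:
            value[n] = inputs.get(n, biases.get(n, 0.0))
        memory.add(n)
        stored.add(n)                 # the copy just read also lives in slow memory

    for t, (a, b, w) in enumerate(connections):
        reads += 1                    # read the connection itself, then discard it
        fetch(a, t)
        fetch(b, t, pinned={a})
        value[b] += w * value[a]
        stored.discard(b)             # any slow-memory copy of b is now stale
        nu = next_use(b, t)
        if nu is None or connections[nu][1] != b:  # last incoming edge of b done
            value[b] = max(value[b], 0.0)          # activation (ReLU, for example)
            if b in outputs:
                writes += 1
                stored.add(b)
    return reads, writes
```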
## III Bounds for inference
In this section we present generic bounds on the I/O-complexities of inference. They serve as guidelines for the more instance-specific optimizations that we consider later on.
**Theorem 1**.: _Let \(\mathcal{N}\) be a connected FFNN and assume \(M\geq 3\). Then the optimal number of I/Os for inference satisfies_
\[W+N+S\leq\textbf{I/Os}(\mathcal{N},M)\leq 2\cdot(W+N-I). \tag{1}\]
_The optimal number of read-I/Os for this problem satisfies_
\[W+N\leq\textbf{rI/Os}(\mathcal{N},M)\leq 2\cdot W+N-I. \tag{2}\]
_The optimal number of write-I/Os for this problem satisfies_
\[S\leq\textbf{wI/Os}(\mathcal{N},M)\leq N-I. \tag{3}\]
Proof.: The weights, biases and input values have a total size of \(N+W\). Since we cannot perform inference without having read all of these data at least once, we obtain the lower bound for the number of read-I/Os. Likewise, we cannot perform inference with less than \(S\) write-I/Os, because (by definition of the inference problem) we need to write the values of all \(S\) output-neurons. Hence, the lower bound for the total number of I/Os follows by adding the lower bounds for the read- and write-I/Os.
Now it remains to show that we can do inference using no more than \(N+2\cdot W-I\) read-I/Os, \(N-I\) write-I/Os and \(2\cdot(N+W-I)\) overall I/Os. To achieve this, we fix a topological order of the non-input neurons: \(n_{1},\ldots,n_{N-I}\). Then, we reorder the connections in such a way that their output neurons appear in the order of this topological order (notice that this is also a topological order of the connections; see Figure 1 for an illustration of this association of topological orderings). Notice that this order is naturally partitioned into intervals: It begins with an interval of connections ending in \(n_{1}\), followed by the connections ending in \(n_{2}\) and so on.
We now show that performing inference in this order with our inference-algorithm (Algorithm 1 from the main paper) and a MIN eviction policy, costs at most \(2W+N-I\) read-I/Os and \(N-I\) write-I/Os. As we start reading the connections, we spend \(1\) read-I/O to read the bias of \(n_{1}\) and then at most \(2\) read-I/Os for each of the connections that end on \(n_{1}\) (\(1\) for reading the input neuron and \(1\) for reading the connection itself). Once we passed through this interval of connections that end on \(n_{1}\), we apply the activation to finish the computation of neuron \(n_{1}\). Then we continue with the interval of connections that end on \(n_{2}\). Also in this interval of connections as well as in all following intervals of connections, we spend \(1\) read-I/O for the bias of the output-neuron and at most \(2\) on each of the connections. Since there is exactly one interval of connections for each of the \(N-I\) non-input neurons, we spend at most \(N-I\) read-I/Os to read the biases and at most additional \(2W\) for the connections and their input-neurons, adding up to \(2W+N-I\) read-I/Os.
Now we count the write-I/Os deployed in this computation. Since all connections that end in the same neuron follow each other, we always compute neurons without having to write temporary values (once we start computing a neuron all of the following computation steps are also directed towards computing this neuron, and we only start computing another neuron once the previous is finished). From this it follows, that if we spend a write-I/O on some neuron, then we are writing a fully computed neuron value to slow memory. And hence, if we read and evict this value again, we do not spend another write-I/O on this value (since the fully computed value is already stored in slow memory, the efficient eviction policy will evict this value by deleting it). Therefore, we spend at most one write-I/O for each of the non-input neurons. Since there are \(N-I\) non-input neurons, we have overall at most \(N-I\) write-I/Os. Adding the read- and write-I/Os, we conclude that the total number of I/Os is at most \(N+2\cdot W-I+N-I=2\cdot(N+W-I)\).
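The 2-optimal order used in this proof is simple to materialize in code, and Algorithm 1 run on it realizes the \(2\cdot(W+N-I)\) upper bound; a small sketch (ours, with hypothetical argument names):

```python
def two_optimal_order(connections, topo_index):
    """Order connections so that all edges into a neuron are contiguous,
    following a topological order of the output neurons.

    connections: iterable of (a, b, w) triples describing the DAG.
    topo_index: dict mapping each non-input neuron to its position in a
                topological order of the neurons.
    """
    # If edge e1 feeds the input neuron of edge e2, then the output neuron
    # of e1 precedes that of e2 in the neuron order, so e1 sorts first: the
    # result is again a topological order of the connections.
    return sorted(connections, key=lambda e: topo_index[e[1]])
```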
Notice that in both Inequalities 1 and 2, the term on the right hand side is at most twice as large as the term on the left hand side. The next Proposition establishes that none of the generic bounds given by this theorem could be tightened any further by multiplying it with a constant other than \(1\).

Fig. 1: Associated topological orderings of neurons and connections.
**Proposition 1**.: _As generic bounds that depend only on \(W,N,I\), and \(S\), the bounds in Theorem 1 are tight: For each one of them and for any \(\epsilon>0\), there are instances for which the true value differs from the bound by a multiplicative factor that lies in \([1-\epsilon,1+\epsilon]\)._
To prove this Proposition, we need to show that for each of the bounds of Theorem 1, there are instances that are arbitrarily close to the bound (close in terms of multiplicative factors).
The first lemma establishes that there are instances that exactly attain all lower bounds of Proposition 1 (and hence, are obviously arbitrarily close).
**Lemma 1**.: _On any FFNN \(\mathcal{N}=[L_{1},L_{2},\ldots,L_{d}]\) in which any two consecutive layers \(L_{i},L_{i+1}\) have together at most \(M-1\) neurons, we can do inference, with a number of read-, write-, and total I/Os that matches the lower bounds given by Theorem 1._
Proof.: We prove this by describing a computation strategy that achieves this.
Read the \(|L_{1}|\) input values into fast memory. Also initialize in fast memory \(|L_{2}|\) partial sums with the biases of the neurons in \(L_{2}\). Since \(|L_{1}|+|L_{2}|\leq M-1\), we have free space in fast memory for at least one more value. We use this free space to iterate over the weights between \(L_{1}\) and \(L_{2}\) and perform the following steps for each one of them:
1. Read the weight.
2. Multiply it with the value of its input neuron (which is already in fast memory).
3. Add this product to the partial sum of its output neuron.
4. Delete the weight from fast memory.
Once this iteration is finished, we apply the activation function to the partial sums of \(L_{2}\), which finishes the computation of the neurons in this layer. Now we do not need the values from \(L_{1}\) anymore and can delete them from fast memory. This gives us enough free space to read the biases of \(L_{3}\), and compute this layer in the same way. We proceed like this, computing layer after layer. Once we finish the computation of the neuron-vales of the last layer, we write those \(S\) values back to slow memory.
Notice that we read each of the \(N-I\) biases, \(I\) input values, and \(W\) weights exactly once. Hence we use \(N+W\) read-I/Os. We only use write-I/Os to write the \(S\) output values. Hence, altogether we use \(N+W+S\) I/Os.
Alternatively, we could have proved the previous Lemma by verifying that an FFNN with the mentioned properties can be constructed with the Compact Growth method that we introduce later. The next lemma shows that there are instances that use numbers of read- and total I/Os that are arbitrarily close to their upper bounds.
**Lemma 2**.: _For every \(\epsilon>0\), there exist FFNNs \(\mathcal{N}\) (of arbitrary size) that have \(I/Os(\mathcal{N},M)>(1-\epsilon)\cdot 2\cdot(W+N-I)\) and \(rI/Os(\mathcal{N},M)>(1-\epsilon)\cdot(2\cdot W+N-I)\)._
Proof.: This is the case for architectures, where a large proportion of the neurons are input neurons. For example, in a tree with \(I\) input neurons, all connected to a single output neuron, we have \(I/Os(\mathcal{N},M)=\cdot 2\cdot(W+N-I)\) and \(rI/Os(\mathcal{N},M)=2\cdot W+N-I\).
The next lemma shows that there are instances that use a number of write-I/Os that is arbitrarily close to the upper bound.
**Lemma 3**.: _For every \(\epsilon>0\), there exist FFNNs \(\mathcal{N}\) (of arbitrary size) that have \(wI/Os(\mathcal{N},M)>(1-\epsilon)\cdot(N-I)\)._
Proof.: Notice that this inequality is satisfied by any FFNN for which all neurons are either input or output neurons. But to give less trivial examples, we will construct FFNNs with hidden neurons that satisfy this inequality.
For \(I,h,S\in\mathbb{N}^{+}\), consider the FFNN that has \(I\) input neurons, \(S\) output neurons, and one hidden layer with \(h\) neurons. Clearly, this FFNN requires at least \(S\) write-I/Os. So, for any parameter configuration \(I,h,S\) with \(S>h\cdot(1-\epsilon)/\epsilon\), we have \(wI/Os(\mathcal{N},M)>(1-\epsilon)\cdot(N-I)\).
The three previous lemmata provide the _tight extremal cases_ that bring us into the position to prove Proposition 1. Although the constructed architectures in the proofs of these lemmata were "artificial", they suffice to show that generic bounds that only depend on \(I,S,N,\) and \(W\), cannot be tighter (tighter by a constant multiplicative factor other than \(1\)) than our bounds. Yet, notice that, despite being "artificial", these extremal cases are more than just "a few cornercases". In fact, the constructions in each of these proofs could have been chosen to be arbitrarily large.
Proof of Proposition 1.: Lemma 1 shows that the lower bound on the number of read-, write-, and total I/Os is tight. Lemma 2 shows that the upper bound on the number of read-I/Os and total I/Os is tight. Lemma 3 shows that the upper bound on the number of write-I/Os is tight.
Notice that none of the bounds depends on \(M\). This is surprising, because for most computational problems, I/O-complexities depend strongly on the memory size\({}^{1}\). As we increase \(M\), we gain more flexibility that we can use to avoid I/Os. Also surprising is the fact that these bounds do not depend on specific properties of the network, except for its sizes. This _does not mean that the overall I/O-complexity does not depend on these quantities_ (memory size and FFNN architecture), but from the tightness of our bounds we conclude that the dependence is only moderate.
Footnote 1: For example, in the case of matrix multiplication, the dependency on \(M\) is of the order \(O(1/\sqrt{M})\)[7].
For write-I/Os, this is very different. The bounds for write-I/Os given by Theorem 1 are less tight. In fact, they can be arbitrarily loose (despite being optimal in the sense of Proposition 1). The problem for write-I/Os is that it is simply not possible to give interesting bounds that depend only on \(W,N,I\), and \(S\) and on no other factors such as the connection order, the memory size \(M\), and the FFNN architecture (this impossibility is implied by Proposition 1). The following Proposition shows that inference in a _layer-after-layer_ fashion can be arbitrarily more expensive than using an optimal order.
**Proposition 2**.: _For every \(c\in\mathbb{N}\) and every memory size \(M\in\mathbb{N}\), there exists an FFNN \(\mathcal{N}_{M,c}\), such that performing inference in a layer-after-layer fashion, requires at least \(c\) times more write-I/Os than optimal._
Proof.: Consider a sparse FFNN that has \(c+2\) layers: \(1\) neuron in the input layer, \(2M\) neurons in each of the \(c\) hidden layers, and one neuron in the output layer. Let each hidden neuron have exactly one incoming and one outgoing connection, let the input-neuron be connected to each neuron of the first hidden layer, and let the output-neuron have incoming connections from all neurons of the last hidden layer (in other words, this FFNN has \(2M\) chains of neurons of length \(c+2\) that meet in the input and output neuron). If we perform inference on this FFNN in a layer-after-layer fashion, we need the values of all \(2M\) neurons of each hidden layer to compute the next layer. Since our fast memory has only capacity for \(M\) neurons, we will need to store the other \(M\) neurons, requiring at least \(M\) writes for each hidden layer. Hence, if we perform inference on this FFNN in a layer-after-layer fashion, we would need to use at least \(M\times c\) write-I/Os. Yet, if we compute chain after chain, a single write-I/O would suffice.
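Using the `inference_io` sketch from Section II (ours, purely illustrative), this gap can be reproduced numerically on the chain construction from the proof; every identifier below is our own.

```python
M, c = 4, 3                                   # a small instance of the construction
chains = 2 * M
hidden = [[f"h{l}_{k}" for k in range(chains)] for l in range(c)]
layer_of = {"in": 0, "out": c + 1}
for l in range(c):
    for k in range(chains):
        layer_of[hidden[l][k]] = l + 1

chain_order = []                              # compute one chain after another
for k in range(chains):
    chain_order.append(("in", hidden[0][k], 1.0))
    for l in range(c - 1):
        chain_order.append((hidden[l][k], hidden[l + 1][k], 1.0))
    chain_order.append((hidden[c - 1][k], "out", 1.0))

layer_order = sorted(chain_order, key=lambda e: layer_of[e[1]])  # layer after layer

biases = {n: 0.0 for n in layer_of if n != "in"}
for name, order in [("layer-wise", layer_order), ("chain-wise", chain_order)]:
    reads, writes = inference_io(order, biases, {"in": 1.0}, {"out"}, M)
    print(f"{name}: {writes} write-I/Os")     # chain-wise needs only the final write
```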
## IV Connection Reordering: Adapting the order to the FFNN and hardware
In this section we introduce _Connection Reordering_, which is a method to optimize the topological order of the connections for a given FFNN architecture and memory size \(M\). This method depends on the following hyperparameters: The **window size**\(ws\in\mathbb{N}\), the **cooling rate**\(\sigma\in\mathbb{R}\), and the **number of iterations**\(T\in\mathbb{N}\). The high-level idea of this method is based on Simulated Annealing [33]: Over \(T\) iterations, we perform random changes to the topological order (we call this _creating neighbors_) and either retain or discard the changes with a probability that depends on the quality of the old and the new order (called _updating_). Now, we fully specify this method by describing the processes of creating neighbors and updating.
### _Creating neighbors_
We start with a topological order of the connections \(e_{1},e_{2},\ldots,e_{W}\) and let **oldI/Os**\(\in\mathbb{N}\) denote the number of I/Os used in this topological order. The number of I/Os obviously depends on the memory size and eviction policy, but those are fixed throughout the execution of this algorithm and hence the number of I/Os depends only on the topological order of the connections.
First, we choose uniformly at random one connection \(e_{i}\). We let \(w\) be an integer that we choose uniformly at random from \(\{0,1,\ldots,ws-1\}\). We consider the window of connections \(e_{i},e_{i+1},\ldots,e_{\min(i+w,W)}\). We choose the direction in which we move the connections from this window: Either left or right with probability \(0.5\).
Case 1: Moving to the left. If we move the connections to the left, we start moving the leftmost connection \(e_{i}\) of the window. We move \(e_{i}\) to the left until we encounter another connection \(e_{s}\) that has the same input neuron as \(e_{i}\), or whose output neuron is equal to the input neuron of \(e_{i}\) (if we never encounter such a connection, we move \(e_{i}\) to the very beginning of the order). We insert \(e_{i}\) right next to \(e_{s}\) so that \(\ldots,e_{s},e_{i},e_{s+1},\ldots\) is the new order. Note that this is again a _topological_ order. Then we continue moving the second leftmost connection in the window in the same fashion. We continue like this until we have moved all connections from this window.
Case 2: Moving to the right. If we move the connections to the right, we start moving the _rightmost_ connection of the window. We move it until we encounter another connection \(e_{z}\), which has the same _output_ neuron as \(e_{i}\) or whose input neuron is equal to the output neuron of \(e_{i}\). We insert \(e_{i}\) right before \(e_{z}\) so that \(\ldots,e_{z-1},e_{i},e_{z},\ldots\) is the new order. Then we continue with the second rightmost connection from the window. We continue like this until we have moved all connections from this window.
### _Updating_
After we have moved all connections from the window, we have a new topological order. We denote it \(\tilde{e_{1}},\tilde{e_{2}},\ldots,\tilde{e_{W}}\). We measure how many I/Os are used to perform inference in this new order and call this number **newI/Os**. If **newI/Os** \(<\) **oldI/Os**, we certainly update \(e_{1}=\tilde{e_{1}},e_{2}=\tilde{e_{2}},\ldots,e_{W}=\tilde{e_{W}}\) and **oldI/Os** \(=\) **newI/Os**.
If **newI/Os** \(\geq\) **oldI/Os**, we either update or go back to the old order. We decide this at random, choosing to update with probability \(2^{-(\textbf{newI/Os}-\textbf{oldI/Os})\cdot t^{\sigma}}\), where \(t\) is the iteration number.
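A minimal sketch of this loop (ours; `count_ios` and `create_neighbor` stand for the I/O measurement and the neighbor construction described above):

```python
import random

def connection_reordering(order, count_ios, create_neighbor, T, sigma):
    """Simulated-annealing loop of Connection Reordering (a sketch).

    order: initial topological order of the connections.
    count_ios: maps an order to its I/O count (for the fixed M and policy).
    create_neighbor: returns a random neighboring topological order,
                     produced by the window moves described above.
    """
    old_ios = count_ios(order)
    for t in range(1, T + 1):
        candidate = create_neighbor(order)
        new_ios = count_ios(candidate)
        # Accept improvements outright; accept degradations with a probability
        # that shrinks with the I/O gap and with the iteration number.
        if new_ios < old_ios or random.random() < 2.0 ** (-(new_ios - old_ios) * t**sigma):
            order, old_ios = candidate, new_ios
    return order
```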
## V Compact Growth: Adapting FFNNs to hardware
In this section we present a construction scheme that completely characterizes the FFNN architectures that allow inference with a minimal number of I/Os for a given memory size \(M\). This answers the following question: For a given memory size \(M\), which are the FFNN architectures on which we can do inference with a number of I/Os that matches the lower bound given by Theorem 1? Or equivalently: Which are the FFNNs on which we can do inference without having to write or read intermediate values due to lack of memory? The idea is to couple the construction of the FFNN closely to the steps in the inference computation. More precisely, we will build the FFNN by a sequence of steps of four different types.
We introduce the concepts of _pebbles_ and _bags_ to reason about I/Os in our computation. Each pebble corresponds to one neuron. A pebble can be either gray (if it is not yet fully computed) or black (if it is fully computed and can be used by its outgoing connections). The bag represents the fast memory.
### _The construction rules_
We start our construction with an "empty FFNN" (there are no neurons and no connections) and an empty bag that represents the content of our fast memory. We build up the FFNN by a sequence of construction steps of the following four types (for each type we write in parentheses the corresponding computation step into which it can be translated).
**1)** Whenever we have at most \(M-2\) pebbles in our bag, we can add either a gray or a black pebble to our bag and simultaneously add an isolated neuron to our FFNN (reading a neuron into fast memory).
**2)** When we have a black and a gray pebble in our bag, we can draw a connection in our FFNN that starts at the neuron corresponding to the black pebble and ends at the neuron corresponding to the gray pebble (multiply the weight with the value of the input neuron and add it to the partial sum of the output neuron).
**3)** We can turn a gray pebble into a black pebble (finish the computation of the neuron by applying the activation function).
**4)** We can remove a black pebble from the bag (delete from fast memory).
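These four rules can be enforced mechanically by a small builder; the following sketch is ours and purely illustrative (the class and method names are not from the paper).

```python
class CompactGrowth:
    """Build an FFNN by the four compact-growth rules.

    Pebble colors: 'gray' = not yet fully computed, 'black' = fully computed.
    """

    def __init__(self, M):
        assert M >= 3
        self.M, self.bag, self.edges, self.n = M, {}, [], 0

    def add_neuron(self, color):             # rule 1
        assert len(self.bag) <= self.M - 2, "bag is full"
        self.n += 1
        self.bag[self.n] = color             # 'gray' or 'black'
        return self.n

    def connect(self, src, dst):             # rule 2
        assert self.bag.get(src) == "black" and self.bag.get(dst) == "gray"
        self.edges.append((src, dst))

    def finish(self, neuron):                # rule 3
        assert self.bag.get(neuron) == "gray"
        self.bag[neuron] = "black"

    def evict(self, neuron):                 # rule 4
        assert self.bag.get(neuron) == "black"
        del self.bag[neuron]
```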
The following theorem summarizes the properties of FFNNs built by a sequence of the above steps.
**Theorem 2** (Compact growth).: _Let \(\mathcal{N}\) be a connected FFNN with \(W\) weights and \(N\) neurons. Given a memory of size \(M\geq 3\), we can do inference on \(\mathcal{N}\) with \(N+W\) read-I/Os and \(S\) write-I/Os if and only if \(\mathcal{N}\) can be constructed by the compact growth scheme that we just described._
Proof of Theorem 2.: If we can build \(\mathcal{N}\) by a sequence of these construction steps, then the corresponding steps in the parentheses describe a valid sequence of computation steps for inference. By allowing pebbles to be inserted only when there are strictly fewer than \(M-1\) pebbles in the bag, we ensure that there are never more than \(M-1\) pebbles in memory and hence there is always one free space for a connection. Hence, this sequence of computation steps can be executed with a fast memory of size \(M\) without requiring any I/Os on temporary values. This proves the "if" part of the statement.
Conversely, assume for a given \(\mathcal{N}\) and \(M\geq 3\), we can order the connections in a way that allows us to perform inference with a minimal number of I/Os. Denote this order \(e_{1},e_{2},\ldots,e_{W}\). We will show how we can translate the computation steps into a sequence of pebble-moves that creates the FFNN. According to Theorem 1, a _minimal number of I/Os_ means that we perform \(W+N\) read-I/Os and \(S\) write-I/Os. Since this is the number of read-I/Os required to read all necessary parameters once, and write out the values of the output-neurons once, these I/Os only suffice to perform these necessary reads and writes. In particular, we cannot spend any more I/Os to write or read temporary values. Hence, for each neuron we will read one value at some moment (the bias or the input value), then have this value in fast memory for a certain interval of time during which we update it according to the progress of the computation, and then evict the fully computed neuron value when it is not needed anymore. After that we will not be able to use this value again. We translate this into our pebble construction: Putting a gray pebble into our bag, having it there for a certain amount of time, transforming it into a black pebble (once the neuron is fully computed), and then removing it (when the neuron value is not needed anymore and is evicted).
During this interval, this optimal computation strategy must use all incoming and outgoing connections of this neuron: The incoming connections during the first part of the interval (when the neuron is not yet fully computed and hence the pebble is gray) and the outgoing connections during the second part. Further, any time we use a connection, the "other end" of the connection must also be in fast memory and hence the corresponding pebble must be in the bag (remember that we insert a pebble whenever we read a bias/input and remove the pebble when we evict the value). Also, any time we use a connection, the color of the pebble of the input neuron is already black (because we turn a pebble black as soon as the neuron is fully computed) and the pebble of the output neuron is still gray. Hence, as we use the connections on the computation side, we can draw the connections on the side of the pebble model.
Due to the restricted memory size, we can assume without loss of generality that we never have more than \(M-1\) neuron values in fast memory (if we read an \(M\)-th value, there is no more space for a connection, and the computation is stuck until we remove one neuron value; but then we could just as well remove this neuron value before reading another one). Hence, we can assume without loss of generality that we only read values when there are at most \(M-2\) values in fast memory. On the side of the pebble construction, this ensures that we only add pebbles when there are at most \(M-2\) pebbles in the bag.
We can apply this theorem to the problem of _optimal hardware for a given FFNN_. To do this, we consider the **bandwidth** of an FFNN, defined as the smallest \(k\in\mathbb{N}\) for which there exists a topological order of the neurons such that any two connected neurons are at most \(k\) steps apart in this order. We now prove that FFNNs of bandwidth \(k\) can be built with compact growth using \(M=k+2\).
**Corollary 1**.: _Let \(\mathcal{N}\) have bandwidth \(k\in\mathbb{N}\). If we have a memory size \(M\geq k+2\), then we can perform inference on \(\mathcal{N}\) without reading or writing any temporary values._
Proof.: Let \(\mathcal{N}\) have bandwidth \(k\). This means that there exists a topological order of the neurons \(n_{1},n_{2},\ldots,n_{N}\), such that for any \(i\in\{1,\ldots,N\}\), all incoming connections to \(n_{i}\) come from the \(k\) previous neurons \(\{n_{\max(1,i-k)},n_{\max(1,i-k)+1},\ldots,n_{i-1}\}\). So, we can build \(\mathcal{N}\) with compact growth by adding the pebbles according to the topological order of the neurons, and when the bag is full, removing the pebble that was added \(k+1\) steps ago. This ensures that when we add a pebble of a neuron \(n\) to the bag, the pebbles of all other neurons on which \(n\) directly depends are also in the bag, and we can add all connections that go to \(n\). Hence, with a memory size of \(k+2\), we can build \(\mathcal{N}\).
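Continuing the builder sketch from above (again with hypothetical names), the construction in this proof translates directly to code:

```python
def build_with_bandwidth(neurons, in_edges, k):
    """Build an FFNN of bandwidth k via compact growth with M = k + 2.

    neurons: neuron ids in a topological order realizing bandwidth k.
    in_edges: dict neuron -> list of direct predecessors, each at most
              k positions earlier in `neurons`.
    """
    cg = CompactGrowth(M=k + 2)
    pebble, window = {}, []                  # window holds the last k+1 pebbles
    for n in neurons:
        if len(window) == k + 1:             # bag full: evict the oldest pebble
            cg.evict(pebble[window.pop(0)])
        p = cg.add_neuron("black" if not in_edges.get(n) else "gray")
        pebble[n] = p
        for pred in in_edges.get(n, []):     # predecessors are still in the bag
            cg.connect(pebble[pred], p)
        if in_edges.get(n):
            cg.finish(p)
        window.append(n)
    for n in window:                         # flush the remaining pebbles
        cg.evict(pebble[n])
    return cg.edges
```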
## VI Experimental Evaluation
To demonstrate both the theoretical accuracy and the practical utility of our results, we run simulated experiments that count I/Os and measure the CPU performance of the reordered sparse neural networks against high-performance libraries.
As inputs, we use two types of networks: MLPs with varying depth, breadth, and density; and fully-connected layers from
the popular BERT [34] Transformer [23] neural network. For the former, we generate five randomly-sparse FFNNs (obtained by random edge sampling or using Compact Growth). For the latter, we use a pre-trained BERT\({}_{\text{LARGE}}\) neural network and perform magnitude pruning [21].
### _Simulated Experiments_
To test Connection Reordering (CR) and Compact Growth (CG), we implement Algorithm 1 and cache simulation, along with LRU, RR, and MIN eviction policies. We run each experiment configuration with five random MLPs and BERT, reporting the median values and 95% nonparametric confidence intervals as error bars.
#### VI-A1 Connection Reordering
We evaluate CR by optimizing the I/Os of random sparse FFNNs with varying structural and sparsity properties. Figure 2 shows four dimensions in which we vary a baseline FFNN: a 10% dense, \(4\)-layer MLP with \(500\) neurons in each layer and one output neuron, running inference with a fast memory size of \(100\) elements. We either vary density, width, depth, or fast memory size, while keeping the other parameters constant at their baseline value. We use a MIN eviction policy. Initially, we order the connections as in the proof of Theorem 1, so that we are guaranteed to not use more than twice the optimal number of I/Os. Then, we apply CR with \(T=\)1,000,000, \(\sigma=0.2\), and set \(ws\) to four times the average in-degree of the network. As we can see in these examples, CR reduces the total number of I/Os further by up to \(43.5\%\) and brings us up to \(97.4\%\) closer to the theoretical lower bound. Further, we can see that CR gives consistently large improvements across all parameter configurations (except for the cases where the initial I/O-complexity is already close to the lower bound).
#### VI-A2 Compact Growth
We generate three FFNNs with Compact Growth as described in Appendix B. These FFNNs correspond to the values \(M_{g}=100,300\), and \(500\). Then we count the I/Os used for these FFNNs with varying sizes of fast memory \(M\). These numbers are shown in Figure 3. As predicted by Theorem 2, we can see for all three FFNNs that when \(M\geq M_{g}\), we use a minimal number of I/Os. In these cases, we directly achieve this minimal number of I/Os using the connection order produced by CG, without applying CR.
When we additionally apply CR, we are also able to use a minimal number of I/Os in some ranges of memory size \(M\) with \(M<M_{g}\). Of course this is not a contradiction to Theorem 2: The theorem says that we can achieve a minimal number of I/Os using a memory size \(M\) if and only if the FFNN can be built using \(M_{g}=M\). But this does not imply that the same FFNN cannot be built with \(M_{g}<M\).
#### VI-A3 Cache Eviction Policies
Figure 4 shows the I/O-complexity evolution (over Simulated Annealing iterations) of random FFNNs, for the RR, LRU, and MIN cache eviction policies. Owing to the update scheme, we observe a decaying convergence rate towards a minimum value, where the majority of I/O reduction happens in the first 10,000 iterations. It is interesting to observe that upon optimization, both RR and LRU converge to similar I/Os, which indicates that CR can tune FFNNs for hardware architectures with specific caching mechanisms.
#### VI-A4 Size of Fast Memory
In Figure 5, we show how the number of I/Os changes as we increase the size of the fast memory, both before and after Connection Reordering (CR). As we can see, once the fast memory is sufficiently large, we use a number of I/Os that matches the theoretical lower bound provided by Theorem 1, with and without CR. But for insufficient memory size, we see that with CR we reduce I/Os and converge faster to the lower bound.

Fig. 2: Connection Reordering for varying sparse neural network properties. Unless otherwise stated, the network is a 500-neuron wide, 4-layer MLP, with an edge density of 10% and \(M=100\). The lower bound is obtained from Theorem 1. When too small, I/Os for MLPs are annotated.
#### VI-A5 Application to Transformer-model BERT
Transformers [23] are a DNN architecture built from so-called _encoders_ and _decoders_. Characteristic ingredients of transformers are _attention heads_, which make them suitable for natural language processing and related tasks. A prominent example of a Transformer is BERT [34], which was introduced in 2019. BERT's groundbreaking results made Transformers arguably the most popular and extensively investigated DNN architecture of the past years. The main challenge encountered when deploying transformers is the large size of these models and the resulting demand for compute resources [24]. BERT includes several FFNNs of depth \(2\) and weight matrices of dimensions \(1024\times 4096\) and \(4096\times 1024\). The major part of BERT's parameters and compute time for inference come from these FFNNs [24]. This makes them a prime target for pruning. We took one of these FFNNs and pruned it by removing the connections whose weights have the smallest absolute value. Figure 6 shows the I/O-counts and the I/O lower bounds for inference on this FFNN. We considered different levels of sparsity and different cache eviction policies. Further, we assumed a fast memory of size \(100\).
### _Performance Experiments_
To validate the performance benefits of I/O reduction, we perform batched inference on an Intel CPU and measure the performance. For all experiments, we reorder the FFNNs with the simulated optimal (MIN) eviction policy, and then run the FFNN before and after reordering, as well as in the traditional, layer-based approach using MKL for sparse-dense matrix-matrix multiplication (CSRMM). Using batched inference (as is performed in production environments) enables the use of SIMD instructions and better saturates the memory bandwidth. We run each experiment 10 times with a batch size of 128, reporting median values, with error bars marking the shortest and longest execution times. Annotations show the speed-ups that we obtain with our methods (without and with reordering) relative to the layer-based inference approach.
The hardware used for the experiments is a 32-core Intel Xeon Gold 6130 CPU (running at 2.10 GHz) with 1.5 TB RAM. For software, we use Intel oneAPI MKL 2021.3.0 for CSRMM, and GCC 10.2.0 for our reordered networks.

Fig. 3: Study of Compact Growth-generated FFNNs designed for different fast memory sizes \(M_{g}\) (\(1,000\) neurons with in-degree of \(4\) and one output-neuron with in-degree \(M_{g}\)).

Fig. 4: I/Os for different cache eviction policies.

Fig. 5: Study of different fast memory sizes (\(M\)) on random sparse FFNNs (\(3\) layers of \(500\) neurons each followed by one output neuron, \(1\%\) density).
#### VI-B1 Random sparse FFNNs
Starting with the same baseline FFNN from the simulated experiments (a 10% dense, 4-layer MLP with 500 neurons in each layer and one output neuron) we measure the execution time of our methods varying density, width, or depth while keeping the other parameters constant at their baseline value.
In Figure 7(a) we show the execution time for varying degrees of density. As we can see, the sparser the FFNN, the larger the speedup that we obtain with our method relative to the standard, layer-wise way of performing inference. For example, at density 0.1% we get a speed-up of about 45\(\times\). On the other hand, the sparser the FFNN, the less additional speedup we gain by reordering. In fact, somewhat surprisingly, for density values below 1% the execution time is slightly lower without reordering than with it, which we attribute to two causes: (a) the relatively low reduction in I/Os (see Section VI-A1); and (b) the small size of the networks, which makes the performance benefit negligible due to the FFNNs fitting in L2 caches.
Another effect shown in Figure 7(a) is the speedup over MKL on 100% dense FFNNs. The reason for this phenomenon is that, for consistency in measurements, the MKL baseline also uses CSRMM in that case rather than a dense matrix-dense matrix multiplication (GEMM), which explains its observed slowdown.
In Figures 7(b)-7(c) we show the execution time as a function of the depth (Figure 7(b)) and breadth (Figure 7(c)) of the FFNN (while keeping the other structural parameters constant at their baseline values). In both cases we observe speedups that grow linearly with depth and breadth (and hence, with the size of the instance), however with diminishing returns as the network grows in one dimension and not in the others.

Fig. 6: Connection Reordering for an MLP from \(\text{BERT}_{\text{LARGE}}\) using different eviction policies, different edge densities and \(M=100\). The lower bound is obtained from Theorem 1.

Fig. 7: Execution time for randomly-sparse FFNNs with different methods.
In sum, our MLP performance experiments align with the simulation results, indicating that reordering is beneficial in practice for sparse neural networks, especially for cases where density is low.
#### VI-B2 BERT
In Figure 8 we show the execution times for inference on the previously introduced encoder MLP from BERT\({}_{\text{LARGE}}\) with varying degrees of density. In the case of MKL inference at 10% density we had one clear outlier (as defined by Tukey's method [35]; one run took 106 ms while in the other 9 runs it took about 17 ms) which we removed from the data shown in this figure. The figure shows that runtime after reordering is always lower than before reordering, and confirms that even for realistic network modules with only two layers, there is a benefit in using our method, especially for low densities.
## VII Discussion
### _Prefetching_
Modern systems support prefetching to accelerate I/Os, which is not considered in the presented model. Notice, however, that the effect of prefetching is "orthogonal" to the optimizations considered in this paper, as prefetching does not reduce the overall number of I/Os but only affects the time the I/Os take.
### _Cost of using a given topological order_
The computations of "good" (that is, I/O-efficient) topological orders are performed "offline". Once the order is determined and we perform inference, we do not need to look at it explicitly again, as it is encoded in the way the connections are laid out. Hence, during inference there is no additional cost associated with processing the connections according to any given topological order.
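For illustration, the following minimal Python sketch (our own, not the paper's optimized implementation) shows such an inference pass: neurons are visited along a fixed topological order, so no extra bookkeeping is needed beyond the connection layout itself. The names `order`, `in_edges`, `weight` and `bias` are assumed data structures.

```python
import numpy as np

def infer_in_order(order, in_edges, weight, bias, act, x):
    """Evaluate a sparse FFNN by visiting neurons along a fixed topological order."""
    val = {u: float(xu) for u, xu in enumerate(x)}  # input neurons hold the input vector
    for v in order:  # 'order' lists the non-input neurons in topological order
        s = sum(weight[(u, v)] * val[u] for u in in_edges[v]) + bias[v]
        val[v] = act(s)
    return val

# Toy usage: a 2-input, 1-output network evaluated with the single order [2].
out = infer_in_order([2], {2: [0, 1]}, {(0, 2): 0.5, (1, 2): -1.0},
                     {2: 0.1}, np.tanh, [1.0, 2.0])
```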
### _The term "I/O"_
The theory in this paper applies not only to caches, but to local memory of any kind (including, e.g., registers). An I/O refers to a load/store from a "slow" memory, which could be off-chip memory, communication with another node, or even scratch-pad memory. The work neither assumes nor relies on caches, and covers aspects such as how to stream memory efficiently into a chip.
## VIII Conclusions
We established tight bounds on the I/O-complexity of FFNN inference. We presented a 2-optimal computation strategy and showed how combinatorial optimizations on the order of the connections can lead to further improvements in I/Os. Given a fixed memory size, Compact Growth allows us to create FFNNs on which inference can be performed I/O-optimally: we need neither read nor write any temporary values. Our work provides theoretical insight into the problem of mapping existing neural networks to hardware such that energy consumption and latency during deployment are minimized. Most importantly, these insights indicate that the improvements are achieved only when _dispensing with the straightforward, layer-by-layer manner of processing DNNs_. Our proposed Connection Reordering overcomes this misconception, reducing the I/O costs significantly. In our experiments on real hardware we observe speedups of up to 45\(\times\) relative to the standard (CSRMM-based) way of performing inference. Further, between our 2-optimal computation strategy and the strategy further improved by connection reordering, we observe speedups of up to 1.17\(\times\).
## Acknowledgements
This work was supported by the EU Horizon project GLACIATION under grant agreement No. 101070141. T.B.N. is supported by the Swiss National Science Foundation (Ambizione Project #185778). We also acknowledge the Swiss National Supercomputing Centre (CSCS) for support and access to computational resources.
|
2309.01124 | Distribution System Power-Flow Solution by Hierarchical Artificial
Neural Networks Structure | In this paper, a new method for solving the power flow problem in
distribution systems which is fast, parallel, as well as modular,
straightforward, simplified and generic is proposed. This approach is based on
a hierarchical construction of an ANNs tree. The power system is divided into
multiple clusters, with a modular architecture. For each cluster an ANN is
constructed, where the ANNs of the different clusters are organized in a
hierarchical manner in which the data from a lower-level layer is fed into an
upper layer in accordance with the electric correlation between the clusters.
The solution time is fast as it is based on the neural networks predictions and
also enables parallel computing of all clusters in any given layer. The various
clusters have a uniform designed single-hidden-layer ANNs, thus providing a
straightforward, simple and generic architectural implementation. The suggested
methodology is an important milestone for bypassing power flow classical
methods and introducing a novel machine learning based approach. The solution
for three-phase unbalance IEEE-123 system as well as EPRI Ckt5 system are
presented. The predictions of the ANNs of the hierarchical structures are
compared to the solution as calculated by OpenDSS simulation software, with
very promising results. | Arbel Yaniv, Yuval Beck | 2023-09-03T09:39:42Z | http://arxiv.org/abs/2309.01124v1 | # Distribution System Power-Flow Solution by Hierarchical Artificial Neural Networks Structure
###### Abstract
In this paper, a new method for solving the power flow problem in distribution systems which is fast, parallel, as well as modular, straightforward, simplified and generic is proposed. This approach is based on a hierarchical construction of an ANNs tree. The power system is divided into multiple clusters, with a modular architecture. For each cluster an ANN is constructed, where the ANNs of the different clusters are organized in a hierarchical manner in which the data from a lower-level layer is fed into an upper layer in accordance with the electric correlation between the clusters. The solution time is fast, as it is based on the neural networks' predictions, and the structure also enables parallel computing of all clusters in any given layer. The various clusters have uniformly designed single-hidden-layer ANNs, thus providing a straightforward, simple and generic architectural implementation. The suggested methodology is an important milestone for bypassing classical power flow methods and introducing a novel machine learning based approach. The solutions for the three-phase unbalanced IEEE-123 system as well as the EPRI Ckt5 system are presented. The predictions of the ANNs of the hierarchical structures are compared to the solution as calculated by the OpenDSS simulation software, with very promising results.
Power flow, Machine learning, Supervised learning, Distribution systems, Hierarchical computation.
## I Introduction
The integration of technologies of monitoring, controlling and supervision into conventional power grids has been accelerated in recent years, thus transforming them into smart grids. Such technologies are essential in order to enable high penetration of renewable energy sources that are required in order to decrease greenhouse gas emissions. The problem of solving the state of a grid is called "Power-Flow", and it consists of a system of \(2(n-1)\) non-linear equations, where \(n\) is the number of nodes [1]. A typical distribution system can have thousands of nodes and this set of non-linear equations must be solved numerically. For control and optimization applications, one needs to solve the power flow (PF) problem many times, as the search space of possible configurations is very large.
As a result, for real-time control and optimization purposes, the solution time of the power flow problem is a critical factor. Classical numerical solution methods include Newton-Raphson (NR), Gauss-Seidel and their derivatives [2],[3],[4], as well as the sequential Forward Backward Sweep (FBS) algorithm [5], which is also suitable for three-phase unbalanced distribution systems [6] and for which a modified un-sequential scheme has been developed. However, all of the above-mentioned methods are too slow for control and real-time optimization applications.
The other limitation of such numerical methods is their dependency on the parameters' data which is often not fully available. For example, in order to solve the PF set of equations, the admittance matrix must be known, whereas practically in distribution systems it is typically only partially known.
Both of the above-mentioned limitations can be mitigated using ML approaches: training neural networks on historical data measurements eliminates the need for the parameters' data, and while the training stage can be long, once it is done, ANNs yield very fast predictions in comparison to numerical approaches.
Neural networks are used for various purposes in the context of power systems, one of which is to solve the power flow problem. A work on ML for power system operation support [7] includes a preliminary study of testing deep neural networks for approximating the load-flow of Matpower 30- and 118-bus grids via the Tensorflow framework. In [8], a physics-guided neural network is presented: inspired by unsupervised and supervised auto-encoders, a framework of neural networks that simultaneously models PF solvers and rebuilds the PF model is suggested. However, the suggested model is restricted, as it requires accurate topology information.
Although deep neural networks such as the above-mentioned ones are very effective for Euclidean data, they are not suitable for processing graph-structured data, such as the power flow problem, which may be irregular in comparison to Euclidean data. This limitation motivated the development of graph neural networks (GNNs) [9]. GNNs capture the dependence in graphs via the synchronous message-passing system from distributed computing theory. The system, however, is very inefficient when dealing with large graphs, where messages need to pass through long paths.
GNNs have several uses in the context of power systems, such as parameter and state estimation [10]. A learning model that utilizes the structure of the power grid is proposed in [11]. GNNs are also useful for power flow and optimal power flow [12], [13]. However, this model is limited to power grids where all lines have the same physical characteristics.
Another variation of this approach is the graph convolutional neural network (GCN) [14]. This generic and data-driven approach approximates the load flow calculations by learning the loading on each line instead of the actual voltages. An unsupervised graph neural solver was implemented in [15], which calculates power flow by minimizing the violation of Kirchhoff's law at each bus.
While GNN has shown excellent results for certain applications, considering its limitations [16], it is still far from being a frontrunner for PF-based applications.
In real-world applications, complex and large problems can often be divided into sub-problems for simplification. One such problem is the classification task, which is generally a multi-class problem [17]-[18]. There, every neural network is assigned the task of independently solving one of these sub-problems.
Another approach for a scaled solution of unbalanced distribution systems (DS) uses relaxed sub-problems of low complexity. Such an algorithm is based on the relaxation of the non-linear set of equations as conic constraints with directional constraints over multiple iterations of second order cone programming (SOCP) [19].
In this paper, the PF problem is solved by dividing the distribution system into clusters. The division is done by means of the InfoMap algorithm [20]. These clusters are organized in a hierarchical structure. Each cluster is implemented by a designated ANN in such a way that each layer of ANNs feeds its results for the active and reactive powers to the upper layer of ANNs as inputs. Once the system is divided by the InfoMap algorithm, each cluster is solved by a separate single-hidden-layer neural network with a uniform design. As Infomap is a multi-level algorithm, it provides a wide variety of possible partition schemes, out of which it is possible to choose a partition of the original graph representing the distribution system according to the architecture and performance objectives.
The paper presents the theory of the proposed method as well as full simulation examples on the unbalanced IEEE-123 and EPRI Ckt5 distribution networks. The results are shown to have a MAE of up to 1.2%, and the computational time is reduced by at least an order of magnitude in comparison to the solution by the numerical method as simulated with OpenDSS, an open-source program that solves unbalanced distribution systems by the fixed-point iteration method [21].
The original contributions of this paper are: 1. A hierarchical structure of ANNs, which is inherently modular and constructed from simplified sub-problems of the complete DS topology. 2. This hierarchical structure of neural networks, together with layer-wise parallel computing, yields fast predictions in comparison to classical numerical methods. The novel sub-problem ML orchestration approach to solving the PF problem is purely data-driven, namely, there is no need to know any of the underlying physical topology of the power system. In comparison to other ANN-based implementations for the solution of the PF problem in DSs, which are characterized by complex architectures as a result of the characteristics of such real power systems, after applying the division algorithm the utilization of simple, generic and uniformly designed single-hidden-layer neural networks becomes possible via the hierarchical tree-of-ANNs construction, in accordance with the clusters of the complete power system and the relations between them. This structure is also inherently modular, which is an important advantage, as one of the limitations of existing neural network implementations is their limited compatibility with the dynamic nature of DSs, which can have numerous switching events causing frequent topology changes in a single day due to scheduled maintenance, faults, and high penetration of renewable energy resources.
## II Classical numerical power flow solution of distribution grids
The PF problem is a nonlinear set of \(2(n-1)\) equations, where \(n\) is the number of nodes of the power system.
As these equations are nonlinear, numerical methods are classically used to solve them. The PF set of equations for unbalanced distribution systems, which are common in the United States, includes three sets of equations:
\[I_{i}^{abc}=\sum_{j=1}^{n}Y_{i,j}^{abc}V_{j}^{abc}\quad i=1,\ldots,n \tag{1}\]
where:
\[I_{i}^{abc}=\begin{bmatrix}I_{i}^{a}\\ I_{i}^{b}\\ I_{i}^{c}\end{bmatrix},\quad V_{j}^{abc}=\begin{bmatrix}V_{j}^{a}\\ V_{j}^{b}\\ V_{j}^{c}\end{bmatrix},\quad Y_{i,j}^{abc}=\begin{bmatrix}Y_{i,j}^{aa}&Y_{i,j}^{ab}&Y_{i,j}^{ac}\\ Y_{i,j}^{ba}&Y_{i,j}^{bb}&Y_{i,j}^{bc}\\ Y_{i,j}^{ca}&Y_{i,j}^{cb}&Y_{i,j}^{cc}\end{bmatrix} \tag{2}\]
where \(I_{i}^{p}\) is the injected current, \(V_{i}^{p}\) is the complex voltage at bus \(i\) for phase \(p\), and \(Y_{i,j}^{pp^{\prime}}\) is the element of the admittance matrix connecting buses \(i\),\(j\) for phase \(p\),\(p^{\prime}\). Following (1) and (2), the injected current \(I_{i}^{p}\) is as follows:
\[I_{i}^{p}=\sum_{j=1}^{n}\sum_{q=a,b,c}Y_{i,j}^{pq}V_{j}^{q}\quad i=1,\ldots,n \tag{3}\]
and the three-phase power flow equations for the unbalanced case are:
\[S_{i}^{p}=V_{i}^{p}\sum_{j=1}^{n}\sum_{q=a,b,c}(V_{j}^{q})^{*}(Y_{i,j}^{pq})^{*}\quad i=1,\ldots,n \tag{4}\]
where \(S_{i}^{p}\) is the injected complex power at bus \(i\) for phase \(p\). Alternatively, it is possible to use methods based on symmetrical components [22]. A PF simulation software that has gained a lot of interest and is used for various applications is OpenDSS [23], which is based on a fixed-point iterative method [21]. OpenDSS is an open-source software for DS simulations, which is also suitable for unbalanced systems. It is commonly used as a source for comparison, both for verification and for computational comparison of state-of-the-art solutions in this field [24]. The numerical solutions from the OpenDSS simulation software are used in this paper as ground truth for the training and the prediction-error evaluation of each of the ANNs in the ANN array structure.
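For concreteness, Equations (3)-(4) can be evaluated directly once the voltages and the admittance entries are known; the following is a minimal NumPy sketch (the array layout is our own assumption, not OpenDSS's internal representation).

```python
import numpy as np

def injected_powers(V, Y):
    """Evaluate Eqs. (3)-(4) for an n-bus, three-phase system.

    V: complex array of shape (n, 3), columns = phases a, b, c.
    Y: complex array of shape (n, 3, n, 3) with Y[i, p, j, q] = Y_{i,j}^{pq}.
    Returns S with S[i, p] = S_i^p.
    """
    I = np.einsum('ipjq,jq->ip', Y, V)  # injected currents, Eq. (3)
    return V * np.conj(I)               # S_i^p = V_i^p (I_i^p)^*, i.e. Eq. (4)

# Toy usage: a 2-bus system with a random complex admittance tensor.
n = 2
V = np.exp(1j * np.zeros((n, 3)))
Y = np.random.rand(n, 3, n, 3) + 1j * np.random.rand(n, 3, n, 3)
S = injected_powers(V, Y)
```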
## III The proposed hierarchical structure
### _General_
In this paper we solve the PF problem for DSs by means of a hierarchical ANN, namely by dividing the system into clusters, where each cluster consists of a similar number of nodes. The network division into clusters is performed by means of a community detection algorithm, as shown ahead. The conventional PF solver of OpenDSS is used for generating the data for training and testing the ANNs. Therefore, the process of attaining all the data, which might take time, belongs only to the training stage and is not counted at the testing stage. An alternative way of attaining the data could have been using historic measured data instead of a PF program.
### _ANN array structure implementation_
As the architecture design of a neural network for a distribution power system is a very complicated task, due to the large number of nodes in real systems and the complex relations between them, we suggest dividing the power system into multiple clusters of the same order of size. As each cluster is considerably smaller than the complete system, a simple fully-connected neural network (FCNN) can be implemented for each cluster with a uniform choice of hyper-parameters, as detailed in the next chapter. The ANNs for the different clusters are organized according to the hierarchical division by the Infomap algorithm.
Each ANN is trained and tested with different training and testing sets according to its specific nodes and loads.
Then, each ANN yields the predictions of the voltage amplitudes, phases and correlation-preserving parameters. These parameters include the data that needs to be forwarded to the ANNs on the layer above, namely the active and reactive powers at the point of common coupling (PCC) between the layers. The above procedure excludes the top ANN at layer zero, as seen in Fig. 1, which does not pass on any information due to its location.
In the hierarchical array-of-ANNs methodology, there is a combination of the parameters included in an ANN designed for a complete power system, and additional parameters which preserve the electrical dependence between the different clusters. The inputs of an ANN for the complete power system are the active and reactive powers at each of the loads, as it has only P-Q buses (load buses), and the outputs are the amplitude and phase of the voltage at each of the system's nodes. The additional parameters for a hierarchical ANN structure are extra inputs or outputs or both, according to the location of the cluster in the hierarchy, which is dictated by the partition of the Infomap division algorithm. There are six correlation-preserving parameters as inputs/outputs/both (according to the location of the cluster in the hierarchy), namely, injected active and reactive powers (for each connecting node \(i\) there will be \(P_{i,1}\), \(P_{i,2}\), \(P_{i,3}\), the three active powers, one per phase, and \(Q_{i,1}\), \(Q_{i,2}\), \(Q_{i,3}\), the three reactive powers, one per phase).
The criteria for the additional correlation-preserving parameters are as follows. Each cluster which has no clusters beneath it in the feeder (a leaf) has additional output parameters of the active and reactive power which flows into the head node of that cluster (at each phase). These output parameters are then passed as input parameters to all of the ANNs which belong to the clusters at the next higher layer, thus preserving the correlation of all of the leaf clusters to the cluster above (their parent). Note that the correlation-preserving parameters passed from one ANN to another at the ANN testing stage are the solutions as predicted by the relevant ANN. In the same manner, clusters at a mid layer have output parameters of the active and reactive power which flows into the common coupling node (CCN) of that cluster (at each phase), and additional input parameters from the clusters below (from each of their children). That is, in case a cluster has multiple clusters below it, it will have additional input parameters for each of the CCNs of the clusters below it (its children). The only cluster with no additional output parameters is the cluster at the top of the DS (with the head node, which is the slack bus of the entire system), as it does not pass any information since there are no clusters above it. This cluster has only additional input parameters of the active and reactive power which flows into the CCN of each of the clusters below it. The above-mentioned parameter allocation methodology is demonstrated in Fig. 1: As can be seen, there is a slight difference between a bottom-layer ANN and an upper one, due to the fact that bottom-layer ANNs are not fed with data from lower levels. Therefore, for a bottom-layer cluster with \(n\) nodes, three sets of active and reactive powers will be the input of the cluster, namely, a total of \(6n\) inputs. ANNs at upper layers have an inherent feed from a lower layer. For a cluster with \(m+g\) nodes, where \(m\) is the number of independent nodes and \(g\) is the number of nodes that are fed from the lower layer, there will be \(6m\) independent inputs and \(6g\) inputs that are fed from the lower level. The minimum value of \(g\) is one, and in this case there will be only six such inputs (three sets of active and reactive power, one per phase). The output data will be the voltage amplitudes and angles at all \(m+g\) nodes. Both for bottom-layer and for upper-layer clusters (excluding the cluster at the top level, which does not feed forward any information), there are six additional correlation-preserving output parameters of the head node of the cluster that will be used to feed the upper-level cluster, as explained above. A schematic of an ANNs hierarchical array structure is shown in
Fig. 1: Schematic of the ANNs parameters allocation methodology.
Fig. 2. It can be seen that the hierarchy is divided into layers (\(k+1\) layers in the figure). The head bus is included in the upper ANN at layer 0, which consists of only a single ANN. This ANN is the last one to be calculated and is fed by the data of the loads included in that cluster and the data fed to it from lower levels. Layer \(1\) has \(N_{1}\) ANNs, layer \(2\) consists of \(N_{2}\) ANNs, and layer \(k\) has \(N_{k}\) ANNs accordingly.
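The bottom-up evaluation described above can be outlined as follows (a hypothetical Python sketch; the cluster objects with `ann`, `children` and `n_voltage` attributes are our illustrative assumptions, not the paper's implementation).

```python
import numpy as np

def solve_hierarchy(layers, load_inputs):
    """Evaluate the ANN tree bottom-up, forwarding PCC P/Q predictions upwards.

    layers[0] holds the root cluster; larger indices hold lower layers. Each
    cluster is assumed to carry a trained `ann`, the names of its `children`,
    and `n_voltage`, the number of voltage outputs it predicts.
    """
    pcc, voltages = {}, {}
    for depth in range(len(layers) - 1, -1, -1):    # leaves first, root last
        for c in layers[depth]:                     # parallelisable within a layer
            extra = [pcc[ch] for ch in c.children]  # six P/Q values per child PCC
            x = np.concatenate([load_inputs[c.name]] + extra)
            y = c.ann.predict(x[None, :])[0]
            voltages[c.name] = y[:c.n_voltage]      # amplitudes and phases
            if depth > 0:                           # non-root clusters forward PCC P/Q
                pcc[c.name] = y[c.n_voltage:]
    return voltages
```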
### _Community detection algorithm - InfoMap_
As mentioned above, in this paper the algorithm for dividing the DS into clusters is a community detection algorithm.
The objective of community detection in power systems is to learn how a network's structure influences the system's behavior by identifying its modular structure with respect to flow of resources. This can be done by exploiting the inference-compression duality.
According to the statistical minimum description length (MDL) principle [25], any set of data can be represented by a string of symbols from a finite alphabet, since any regularity in a set can be used to compress it. Hence, this principle can be used to find structures that are significant with respect to how resources flow through networks. This also implies that there is a duality between inference and compression of networks. This flow can be found according to a communication process in which a sender wants to communicate its trajectory to a receiver. Thus, the trace of the network's flow is represented by a compressed message.
The InfoMap algorithm is used for the network division and is based on a hierarchical version of the map equation [26]. The core of the Infomap algorithm follows the Louvain method: neighboring nodes are joined into modules, which subsequently are joined into supermodules. The hierarchical rebuilding of the network is repeated until the map equation cannot be reduced further. Built upon that, Infomap generalizes this search algorithm for the two-level map equation into a multilevel algorithm by a recursive search which operates on a module at any level, where for every split of a module into submodules, the two-level search algorithm is used.
Infomap has already been used specifically for power grid hierarchical segmentation [27], and for guided machine learning (ML) for power grid segmentation for the task of active power management [28].
The multilevel characteristic of Infomap makes it specifically suitable for the hereby proposed array-of-ANNs approach, as it is possible to choose the granularity of the partition. In case the first-level partition is highly unbalanced, it is possible to continue to the second-level partition of the sub-modules, and to continue even further to next-level partitions as desired. We hence choose the hierarchical community detection algorithm InfoMap.
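A minimal sketch of such a partition, assuming the API of the `infomap` Python package, could look as follows; the edge list would be derived from the nonzero pattern of the admittance matrix.

```python
import infomap  # pip install infomap

def partition_grid(edges):
    """Cluster a grid given its bus connectivity (pairs of bus ids)."""
    im = infomap.Infomap(silent=True)
    for i, j in edges:
        im.add_link(i, j)
    im.run()
    # Top-level modules give the coarsest partition; larger depth_level values
    # give finer sub-partitions when a first-level partition is unbalanced.
    return im.get_modules(depth_level=1)

# Toy usage: two loosely connected triangles yield two clusters.
modules = partition_grid([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)])
```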
## IV ANN unit topology for each cluster
A common learning architecture is the multi-layer perceptron (MLP) artificial neural network (ANN) [29]. It can be represented as a finite directed acyclic graph organized into layers. In this graph, the nodes that do not receive connections from other nodes are referred to as input neurons. Nodes that do not send connections to other nodes are known as output neurons. The remaining nodes that lie between the input and output layers are called hidden neurons.
Using historical measurements or synthetic databases, such ANNs can be trained in a supervised manner, where the outputs are given as part of the database during the training stage, to approximate the function describing the relation between the inputs and the outputs of the ANN. In the power flow set of equations, the inputs and outputs are the known and unknown electrical parameters, respectively. As elaborated in the previous section, in the proposed methodology the power system is divided into clusters and an ANN is assigned to each cluster. The hyperparameter selection process for the ANNs assigned to the clusters is elaborated next.
### _Model's hyperparameters_
Uniform design characteristics were implemented for each of the ANNs of the clusters. The ANNs were chosen to be multi-layer perceptron regressors (MLPRs) with a single hidden layer according to the universal approximation theorem [30]. According to the theorem, a standard multilayer feed-forward network with a single hidden layer containing a finite number of hidden neurons is a universal approximator among continuous functions on compact subsets of \(\mathbb{R}^{n}\), under mild assumptions on the activation function. As \(Q(v)\) and \(P(v)\) are continuous functions on compact subsets of \(\mathbb{R}^{n}\), the theorem is adequate for the power flow use case under a suitable activation function. The construction of ANNs is a long process involving both theory and trial and error. Part of the implementation choices were made according to [31], where guidelines for the construction of an ANN architecture for a distribution system are laid out as preliminary work of the authors.
Fig. 2: Schematic of an ANNs hierarchical array structure.
In order to examine the choices of the various hyperparameters, an automated procedure was carried out via Talos. Talos is a Python hyperparameter optimization library for Keras which allows one to configure, perform and evaluate hyperparameter optimization experiments. With Talos, numerous experiments were conducted with different combinations of hyperparameter options. The set of options was constructed by narrowing down relevant values according to the preliminary processes mentioned above. The experiments were conducted on cluster A as depicted in Fig. 3, and the parameter options are detailed in Table V. More than 15,000 configurations have been tested, covering a wide range of possibilities.
There was an almost definite division of the configurations' performances according to the optimizer: from stochastic gradient descent (SGD) with the highest error, through Root Mean Square Propagation (RMSprop), to adaptive moment estimation (Adam) with the lowest error. Indeed, Adam was chosen originally as the optimizer for our ANNs. The complete hierarchical architecture was tested with one of the best-performing architectures from the experiment, and achieved similar results to the preliminarily chosen architecture. Thus, the simulations provide a quantitative assessment of the hereby chosen design.
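As an illustration, a uniformly designed single-hidden-layer MLPR with the Adam optimizer can be instantiated with scikit-learn as below; the hidden width and the placeholder arrays are illustrative assumptions, not the tuned values from the Talos experiments.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data standing in for a cluster's database: inputs are load P/Q
# values (+ child PCC P/Q for upper layers), outputs are per-node |V| and angle.
X_train, Y_train = np.random.rand(800, 12), np.random.rand(800, 8)
X_test = np.random.rand(200, 12)

ann = MLPRegressor(hidden_layer_sizes=(64,),  # a single hidden layer
                   activation='relu',
                   solver='adam',             # the best optimiser in the sweep
                   max_iter=2000,
                   random_state=0)
ann.fit(X_train, Y_train)
Y_pred = ann.predict(X_test)
```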
### _Generation of training and testing sets for the ANNs_
A load-shape is a vector that is used to describe many input states of P-Q nodes in a power system. The load-shape is usually normalized, and these multipliers are combined with the active and reactive power values as specified in the power system's definitions for attaining the values of various types of loads (such as domestic, industrial and commercial). For the construction of a database, each input parameter should be assigned numerous values in a long time series, and, in general, different loads have different values at each point in time; hence, an assignment of different load-shapes to the active and reactive power of the P-Q buses is needed. The multiplication of the load-shapes with the active or reactive power definitions yields a vector which represents the behavior of the active or reactive power at that load throughout a whole year. There are different ways to synthetically generate multiple load-shapes out of a single load-shape, and a few attempts were introduced by adding noise distributions to the load-shape [32]-[33].
The process of generating a substantial number of load profiles for the various loads is done by using a non-linear companding function such as the well-known \(\mu\)-law function, as commonly used in digital communication [34].
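As a sketch of the idea (the exact companding parameters used in the paper are not reproduced here), a family of load-shapes can be generated from a single normalised shape by \(\mu\)-law expansion with different \(\mu\) values:

```python
import numpy as np

def mu_law_expand(shape, mu):
    """mu-law expansion of a normalised load-shape in [0, 1]; small mu is
    close to the identity, larger mu bends the curve more strongly."""
    return ((1.0 + mu) ** np.abs(shape) - 1.0) / mu * np.sign(shape)

# A placeholder normalised yearly shape (8760 hourly multipliers in [0, 1]).
base = np.abs(np.sin(np.linspace(0.0, 365.0 * np.pi, 8760)))
variants = [mu_law_expand(base, mu) for mu in (1.0, 5.0, 25.0, 100.0)]
```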
In this paper, rather than solving a set of nonlinear equations, we approach the problem using supervised ML. The method is based on an ANN and solves the problem based on a training set composed of the results of many state solutions of the power system produced by the OpenDSS simulation software. It should be mentioned that the choice of classical algorithm is not important and can be changed: since it is used only to generate the database and to verify the results, any suitable algorithm could be used. The time required to generate the data and the convergence time of the OpenDSS simulations are transparent to the ANN, since it is used only at the learning stage.
In order to perform supervised learning of the neural networks, a dataset must be created. The dataset includes many inputs and outputs, some of which (80%) are used for training the ANNs, and the rest (20%) are used for testing and evaluating the ANNs' performance on new unseen inputs. The inputs for each state of the system at a given point in time are the active and reactive power values at all loads and the input correlation-preserving parameters. The OpenDSS simulator calculates the solution numerically. As the power flow calculations of OpenDSS are based on the fixed-point iterative numerical method, it sometimes does not converge to the correct result, as numerical methods are characterized by phenomena of error accumulation [35] and non-convergence [36]. As a result, an outlier-removal procedure was used, via
Fig. 4: Schematic topology of EPRI Ckt5, divided according to the InfoMap algorithm.
Fig. 5: Schematic of the ANN’s array structure implementation for EPRI Ckt5.
Fig. 3: Schematic topology of IEEE-123, divided according to the InfoMap algorithm.
a moving median filter with a window size of three samples. Then, the amplitude and phase of the voltage at each of the nodes and the relevant correlation-preserving parameters are collected and used as the outputs for the training and evaluation corresponding to the different input states. After the training process of the ANNs is done, the PF solution is achieved as the prediction (output) of the ANNs, instead of by a numerical calculation.
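A minimal sketch of this preprocessing step, with placeholder arrays standing in for the OpenDSS-generated database, could be:

```python
import numpy as np
from scipy.signal import medfilt

# Placeholder arrays standing in for the OpenDSS-generated database.
X = np.random.rand(1000, 12)       # inputs: P/Q at the loads
Y_raw = np.random.rand(1000, 8)    # raw outputs: |V| and angles per node

# A moving median with a window of three samples, applied per output column,
# suppresses non-converged outlier solutions.
Y_clean = np.apply_along_axis(medfilt, 0, Y_raw, 3)

# 80/20 split into training and testing sets.
split = int(0.8 * len(Y_clean))
X_train, X_test = X[:split], X[split:]
Y_train, Y_test = Y_clean[:split], Y_clean[split:]
```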
The power system is divided by the Infomap algorithm according to the distribution system's connectivity, as derived from the admittance matrix generated by OpenDSS. A different database is generated for each of the clusters according to the algorithm's division, where each database is constructed from the parameters of the buses belonging to that cluster.
## V Simulation results
### _General_
In this section, we present the results of the simulations of two distribution systems, the IEEE-123 system and the EPRI Ckt5 system. The ground truth data was generated using OpenDSS's COM interface [23], and was divided into training and testing sets. The error metrics used to evaluate the quality of the voltage amplitude and phase predictions are the mean absolute error (MAE) and the maximum absolute error (MAXAE):
\[MAE=\frac{\sum_{i=1}^{N}|y_{i}-\hat{y_{i}}|}{N} \tag{5}\]
\[MAXAE=\max_{i=1..N}|y_{i}-\hat{y_{i}}| \tag{6}\]
where \(N\) is the number of testing samples, \(y_{i}\) is the ground truth value according to the OpenDSS simulator results, and \(\hat{y_{i}}\) is the predicted value according to the ANN. The error metric for the active and reactive power at the head of the clusters is the relative MAE, also known as the Mean Absolute Percentage Error (MAPE): the absolute error is divided by the absolute value of the ground truth, as the active and reactive power errors are relative to the ground truth value. Correspondingly, the relative MAXAE, also known as the Maximum Absolute Percentage Error (MAXAPE), is the maximum absolute error divided by the absolute value of the ground truth. For each cluster, the errors were averaged over the power system's buses for 20 consecutive runs with the same training and testing sets.
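These four metrics translate directly into code; a small NumPy sketch:

```python
import numpy as np

def mae(y, y_hat):    return np.mean(np.abs(y - y_hat))              # Eq. (5)
def maxae(y, y_hat):  return np.max(np.abs(y - y_hat))               # Eq. (6)
def mape(y, y_hat):   return np.mean(np.abs(y - y_hat) / np.abs(y))  # relative MAE
def maxape(y, y_hat): return np.max(np.abs(y - y_hat) / np.abs(y))   # relative MAXAE
```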
### _Simulation results for the IEEE-123 distribution system_
In this section, we present the results of the OpenDSS simulations as well as the predictions of the ANNs for the IEEE-123 system.
IEEE-123 distribution system [37] operates at a nominal voltage of 4.16 kV. The topology is shown in Fig. 3.
The simulation results for the ANN of cluster \(A\) show a MAE of 0.021% for the voltage amplitudes, as shown in Fig. 6, where each color represents a different node in the cluster. Each point within each color represents a single sample from the testing set at a different point in time. The voltage-phase results are 0.0189% MAE for phase \(A\) (the 0\({}^{\circ}\) phase), 0.01% for phase \(B\) (the 120\({}^{\circ}\) phase shift) and 0.012% for phase \(C\) (the -120\({}^{\circ}\) phase shift), as shown in Figs. 7-9. The relative MAE of the apparent power of the head node of cluster \(A\) is 0.407%. Similar results were obtained for clusters \(B\)-\(G\), and are detailed in Table I.
### _Simulation results for EPRI Ckt5 distribution system_
In this section, we present the results of the OpenDSS simulations as well as the predictions of the ANNs for EPRI Ckt5 system.
EPRI Ckt5 [38] operates at a nominal voltage of 12.47 kV, with a total of 16,310 kVA service transformers. The topology is shown in Fig. 4.
The simulation results for the ANN of cluster \(A\) show a MAE of 0.055% for the voltage amplitudes. The results are 0.069% MAE for phase \(A\) (the 0\({}^{\circ}\) phase), 0.088% for phase \(B\) (the 120\({}^{\circ}\) phase shift) and 0.074% for phase \(C\) (the -120\({}^{\circ}\) phase shift). The relative MAE of the apparent power of the head node of cluster \(A\) is 1.065%. Similar results were obtained for clusters \(B\)-\(G\), and are detailed in
Fig. 6: Voltage amplitude of numeric results (NR) calculated via OpenDSS vs. those predicted with the ANN of cluster A of IEEE-123
Fig. 7: Voltage phase A of numeric results (NR) calculated via OpenDSS vs. those predicted with the ANN of cluster A of IEEE-123
Table II. The results of the MAXAE and MAXAPE are also detailed in Table II, where clusters B, D and E consist of nodes connected to all three phases, and clusters C, F and G consist of nodes connected to phases B, B and C, respectively. The results for the voltages of all clusters are consistently small, and the MAE for the apparent power at the PCC is just over 1%. The MAXAE results are less than 1.508% for all the clusters' voltages, and the PCC apparent power MAXAPE is less than 10%. These results are common in the field, and are consistent with the reported results of other works such as [15],[39],[13],[7]. It should be noted that the results in those works were obtained for distribution systems with around 100 buses, while in this paper similar errors were obtained for a network of over 3,000 nodes.
## VI Computational considerations
Among the advantages of the proposed hierarchical ANN tree structure as a solution approach for the PF problem in distribution systems is the reduced solution time in comparison to classical methods. This improvement is necessary due to the advancements in DSs and the desire to implement control of assets in real-time applications (real time here is in the range of minutes in practical cases).
The classical solution based on a power flow solver (OpenDSS), as well as the ANNs, were tested on a single computer for comparison. The computer has an 8th-generation Intel i7 1.8 GHz processor with 8 GB RAM. It should be mentioned that what matters is the comparison of the difference in execution time and not the actual numbers, since in industrial applications the global time can be significantly reduced by using more advanced and faster computers as well as parallel computing.
The computational time of an array tree structure (ATS) is:
\[t_{ATS}=\max_{i=1,\ldots,P}\left(\sum_{l=1}^{L_{path-i}}\max_{j=1,\ldots,N_{s}}\left(t_{j}\right)\right) \tag{7}\]
where \(P\) is the number of paths, \(L_{path-i}\) is the number of levels in path \(i\), \(N_{s}\) is the number of clusters in each level \(l\), and \(t_{j}\) is the prediction time of the ANN of cluster \(j\), according to the InfoMap community detection algorithm's division.
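A small sketch of Equation (7), with a hypothetical two-path tree of per-cluster prediction times, is given below; clusters within a level run in parallel, so a level costs its slowest ANN, and the total time is the worst root-to-leaf path.

```python
def ats_time(paths, t):
    """Eq. (7): `paths` lists, for each root-to-leaf path, its levels; each
    level is a list of cluster ids and t maps a cluster id to its time."""
    return max(sum(max(t[c] for c in level) for level in path) for path in paths)

# Hypothetical tree: root A, then {B, C} on one path and {D} on the other.
t = {'A': 0.004, 'B': 0.003, 'C': 0.005, 'D': 0.002}
total = ats_time([[['A'], ['B', 'C']], [['A'], ['D']]], t)  # = 0.004 + 0.005
```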
Fig. 8: Voltage phase B of numeric results (NR) calculated via OpenDSS vs. those predicted with the ANN of cluster A of IEEE-123
Fig. 10: Apparent power of numeric results (NR) calculated via OpenDSS vs. those predicted with the ANN of cluster A of IEEE-123
Fig. 9: Voltage phase C of numeric results (NR) calculated via OpenDSS vs. those predicted with the ANN of cluster A of IEEE-123
### _Computational results for the IEEE-123 system_
The time it took each ANN to predict the testing set of the IEEE-123 system is detailed in Table III, averaged over 20 runs. As can be seen, the testing of the array tree structure takes 0.022 seconds.
The solution of the testing set for IEEE-123 via OpenDSS's Python COM interface takes 1.3 seconds. Hence, the solution time via the array structure of ANNs is improved by a factor of 50 in comparison to the classical approach via the OpenDSS simulation software.
### _Computational results for EPRI Ckt5 system_
The time it took each ANN to predict the testing set of the EPRI Ckt5 system is detailed in Table IV, averaged over 20 runs. According to Equation (7), the testing of the array tree structure takes 11.748 seconds.
The solution of the testing set for EPRI Ckt5 via OpenDSS's Python COM interface takes 100.062 seconds.
Hence, the solution time via the array structure of ANNs is improved by an order of magnitude in comparison to the classical approach via the OpenDSS simulation software.
## VII Conclusions and Discussion
In this paper, a PF methodology based on a hierarchical array of ANNs was developed. The paper also describes considerations for constructing an appropriate uniform ANN design. The methodology is demonstrated and simulated on the unbalanced IEEE-123 system as well as EPRI's large-scale Ckt5 system. The uniform ANN design, arrived at through the various considerations and conditions discussed, was also empirically shown to be at least locally optimal through a massive number of experiments via the hyperparameter optimization library Talos.
The tree-like graph topological structure of distribution systems is utilized for a hierarchical distributed approach, which is laid out including an appropriate division algorithm for the power system and the detailing of the required parameters for the description of each cluster and for the preservation of the electric information of the related clusters. The error performance of the results predicted by the trained ANN tree array is assessed via comparison to the results obtained from the numerical PF simulation software OpenDSS. The results support the method and theory developed in this paper. The proposed model's performance is shown to be as good as the PF results of much more complicated architectures such as graph neural networks, without the redundant inherent complexity characterizing other deep network designs [9]. The computation time, a crucial factor in adopting the suggested ANN approach, is shown to be reduced by at least an order of magnitude: a factor of 50 for the IEEE-123 system and an order of magnitude for EPRI Ckt5. This is important since in real-time optimizations, PF must often be performed numerous times to cover a large search space, as a result of the large number of controllable elements in modern smart grids. This improvement enables obtaining a result in sufficient time for issuing a control command.
The hereby suggested approach offers a basis for proper operation capabilities of DSs. Massive improvement is required in collecting or generating quality, generalized databases for the continuance of artificial intelligence-based applications for smart grids. The smart grid transformation, including high penetration of distributed renewable energy sources towards reducing CO2 emissions, requires a variety of control, monitoring, and supervision technologies. For this purpose, the development of such tools to solve and optimize the system state in real time is a necessity.
|
2310.19608 | On Feynman--Kac training of partial Bayesian neural networks | Recently, partial Bayesian neural networks (pBNNs), which only consider a
subset of the parameters to be stochastic, were shown to perform competitively
with full Bayesian neural networks. However, pBNNs are often multi-modal in the
latent variable space and thus challenging to approximate with parametric
models. To address this problem, we propose an efficient sampling-based
training strategy, wherein the training of a pBNN is formulated as simulating a
Feynman--Kac model. We then describe variations of sequential Monte Carlo
samplers that allow us to simultaneously estimate the parameters and the latent
posterior distribution of this model at a tractable computational cost. Using
various synthetic and real-world datasets we show that our proposed training
scheme outperforms the state of the art in terms of predictive performance. | Zheng Zhao, Sebastian Mair, Thomas B. Schön, Jens Sjölund | 2023-10-30T15:03:15Z | http://arxiv.org/abs/2310.19608v3 | # On Feynman-Kac training of partial Bayesian neural networks
###### Abstract
Recently, partial Bayesian neural networks (pBNNs), which only consider a subset of the parameters to be stochastic, were shown to perform competitively with full Bayesian neural networks. However, pBNNs are often multi-modal in the latent-variable space and thus challenging to approximate with parametric models. To address this problem, we propose an efficient sampling-based training strategy, wherein the training of a pBNN is formulated as simulating a Feynman-Kac model. We then describe variations of sequential Monte Carlo samplers that allow us to simultaneously estimate the parameters and the latent posterior distribution of this model at a tractable computational cost. We show on various synthetic and real-world datasets that our proposed training scheme outperforms the state of the art in terms of predictive performance.
## 1 Introduction
Bayesian neural networks (BNNs) are an important class of machine learning models for quantifying uncertainty. However, computing BNN posterior distributions is an open challenge (Izmailov et al., 2021) since many standard statistical inference methods such as Markov chain Monte Carlo (MCMC) are poorly suited to the combination of a high-dimensional parameter space and a massive number of data points (Papamarkou et al., 2022). Many approximate solutions have been proposed to overcome this computational challenge, for example, MCMC with stochastic likelihoods (Andrieu and Roberts, 2009; Welling and Teh, 2011; Chen et al., 2014; Zhang et al., 2020), parametric approximations of the posterior distributions in the form of variational Bayes (Blundell et al., 2015; Gal and Ghahramani, 2016) or stochastic averaging methods (Izmailov et al., 2018; Maddox et al., 2019). In a similar vein, Izmailov et al. (2020) conduct variational inference in a subspace of the parameter space. Other ways include, for instance, restricting the inference to the last layer only (Ober and Rasmussen, 2019; Kristiadi et al., 2020).
In this paper, we focus on the training of partial Bayesian neural networks (pBNNs), a family of BNNs where only part of the model parameters are random variables. This is motivated by the recent work of, for example, Daxberger et al. (2021) and Sharma et al. (2023), which shows that pBNNs and standard BNNs are comparable in terms of prediction performance despite the dimensionality of the random variable in pBNNs being much smaller than in standard BNNs. While both approaches make posterior inference much easier, it comes at the cost of making the pBNN a latent-variable model that requires estimation of both the deterministic parameters and posterior inference.
To be more precise, let \(x\mapsto f(x;\phi,\psi)\) be a neural network parametrised by a deterministic parameter \(\psi\in\mathbb{R}^{w}\) and a random variable \(\phi\in\mathbb{R}^{d}\) that follows a given prior distribution \(\pi(\phi)\). Suppose that we have a dataset \(\{(x_{n},y_{n})\}_{n=1}^{N}\) with an associated likelihood conditional density function \(p(y_{n}\mid\phi;\psi)\)1, and that the observation random variables are independent conditioned on \(\phi\). The goal of training is twofold. First, to learn the deterministic parameter \(\psi\) from the dataset, and second, to compute the posterior distribution \(p(\phi\mid y_{1:N};\psi)\), where \(y_{1:N}\coloneqq\{y_{1},y_{2},\ldots,y_{N}\}\). This is a classical parameter estimation problem in a latent variable model (Cappe et al., 2005). The main benefit of pBNNs is that when the dimension \(d\ll w\) is relatively small, the second goal of computing the posterior distribution with fixed \(\psi\) is generally tractable. However, the first goal of estimating the parameter \(\psi\) remains to be solved.
Footnote 1: Throughout the paper we omit the data covariate \(x_{n}\) of \(y_{n}\) to make the notation cleaner.
For practitioners, a popular approach is to jointly optimise \(\phi\) and \(\psi\) according to the maximum a posteriori
(MAP) objective \((\phi,\psi)\mapsto\log p(y_{1:N}\mid\phi;\psi)+\log p(\phi)\), see, for example, Daxberger et al. (2021) and Sharma et al. (2023). Then, the estimated \(\psi\) is used to compute the posterior distribution over \(\phi\) by any applicable Bayesian inference method (e.g., MCMC). The advantages of this method lie in its simplicity and fast computation. Moreover, there is a plethora of optimisers (e.g., stochastic gradient descent and its variants) specifically designed for the setting of large \(w\) and \(N\). However, for multi-modal distributions the MAP-estimated \(\psi\) and \(\phi\) are prone to be restricted to a single mode.
In contrast, statisticians generally favour maximum likelihood estimation (MLE) for estimating \(\psi\). Compared to the aforementioned MAP approach, the MLE objective takes all possible outcomes of the random variable \(\phi\) into account by integrating out \(\phi\), and the estimator can be consistent (Cappé et al., 2005). However, the MLE method is not directly applicable to pBNNs since the marginal likelihood \(p(y_{1:N};\psi)=\int p(y_{1:N}\mid\phi;\psi)\,\pi(\phi)\,\mathrm{d}\phi\) is in general intractable. Instead, various lower bounds on the marginal likelihood are used, obtained by, for instance, brute-force Monte Carlo, expectation maximisation (EM, Bishop, 2006), and variational Bayes (VB, Blei et al., 2017; Sjölund, 2023). However, EM and VB often compromise by using parametric families of distributions, and they often incur Monte Carlo computation overheads to approximate expectations. Another common approach is to transform the (gradient of the) MLE into an expectation with respect to the posterior distribution by Fisher's identity (see, e.g., Cappé et al., 2005, Chap. 10), and then use an MCMC sampler to approximate the expectation (at the cost of demanding computations).
In this paper, we study how to apply and adapt sequential Monte Carlo (SMC) methods to train pBNNs, by representing the training as a simulation of a Feynman-Kac model (Chopin and Papaspiliopoulos, 2020). More specifically, the Feynman-Kac model is composed of a sequence of potential functions given by the likelihood model and a sequence of invariant Markov kernels chosen so as to anneal to the target posterior distribution \(p(\phi\mid y_{1:N};\psi)\). Computing the target posterior distribution then amounts to a sequential simulation of the Feynman-Kac model, and SMC is the natural sampling framework for the model. The motivation for studying SMC samplers to train pBNNs is that SMC samplers are able to simultaneously produce consistent MLEs and sample from the posterior distribution while retaining a tractable computational cost. Compared to most MCMC-based methods, SMC samplers are immediately parallelisable (Lee et al., 2010) by leveraging graphics processing units, and are easier to calibrate. On the other hand, parallelising MCMC is still an open problem (see, e.g., Jacob et al., 2020).
Our contributions are as follows. (i) We show and discuss how to apply and adapt SMC samplers to train pBNNs (Section 2). (ii) We propose approximate SMC samplers that are scalable in the number of data points and are therefore better suited to pBNNs (Section 3). (iii) We benchmark the proposed samplers on a variety of synthetic and real-world datasets, and the results show that the proposed training scheme is state of the art in terms of predictive performance (Section 4).
## 2 Training via Feynman-Kac
Recall that we aim to find a maximum likelihood estimate for the deterministic part of the pBNN and then to compute the posterior distribution \(p(\phi\mid y_{1:N};\psi)\) of the stochastic part of the pBNN based on the learnt parameter \(\psi\). In this section, we recap a sequential online algorithm to recursively compute the target posterior distribution via a Feynman-Kac model and how to jointly estimate the parameter \(\psi\).
To ease the exposition of the idea, let us for now suppose that the deterministic variable \(\psi\) in the pBNN is fixed. Due to the conditional independence of the observations, the posterior distribution admits a recursion
\[p(\phi\mid y_{1:n};\psi)=\frac{p(y_{n}\mid\phi;\psi)}{z_{n}(\psi)}\,p(\phi\mid y _{1:n-1};\psi) \tag{1}\]
for \(n=1,2,\ldots,N\), where \(z_{n}(\psi)\coloneqq p(y_{n}\mid y_{1:n-1};\psi)\) is the normalising constant, and the initial distribution is defined by \(p(\phi\mid y_{1:0};\psi)\coloneqq\pi(\phi)\). If we have computed the distribution \(p(\phi\mid y_{1:n-1};\psi)\) for some \(n\), we can then compute the next \(p(\phi\mid y_{1:n};\psi)\) by Equation (1). Ultimately, we continue the iteration until \(n=N\) to reach the target posterior distribution. This recursion is the gist of sequential online learning or annealing.
As in most Bayesian inference problems, the challenge lies in the computationally intractable normalising constant \(z_{n}(\psi)\). Hence, in practice, we often use a tractable sequence of approximations \(Q_{N}\coloneqq\{q_{n}(\phi\mid y_{1:n};\psi)\colon n=1,2,\ldots,N\}\) such that \(q_{n}(\phi\mid y_{1:n};\psi)\propto q_{n-1}(\phi\mid y_{1:n-1};\psi)\) is computable for all \(n\)'s. A convenient choice is a Gaussian \(q_{n}\) and then to use, for example, Taylor expansions, variational Bayes (Opper, 1999), or Gauss quadrature methods (Golub and Meurant, 2010) to run the sequence. However, in the context of BNNs, Gaussian approximations can result in large errors (Foong et al., 2020) particularly when the true posterior distribution is non-Gaussian (e.g., multi-modal ones). This motivates us to use a Monte Carlo (MC)-based method to come up with such an approximate sequence.
### Sequential Monte Carlo sampling
Sequential Monte Carlo (SMC) samplers (Del Moral et al., 2006; Chopin and Papaspiliopoulos, 2020) are natural Monte Carlo (MC) methods for approximating the target posterior distribution in the sequential learning framework. Specifically, we choose \(Q_{N}=\{S_{n}^{J}\colon n=1,2,\ldots,N\}\), where \(S_{n}^{J}\coloneqq\{(w_{n,j},\phi_{n,j})\colon j=1,2,\ldots,J\}\) are \(J\) weighted Monte Carlo samples that approximately represent \(p(\phi\mid y_{1:n};\psi)\). Suppose that we are able to draw the initial samples \(S_{0}^{J}\sim\pi(\phi)\) from the given prior, then we can recursively compute \(S_{n}^{J}\) for any \(n=1,2,\ldots,N\) via
\[\phi_{n,j} =\phi_{n-1,j}, \tag{2}\] \[\overline{w}_{n,j} =w_{n-1,j}\,p(y_{n}\mid\phi_{n-1,j};\psi),\]
and normalisation \(w_{n,j}=\overline{w}_{n,j}\,/\,\sum_{i=1}^{J}\overline{w}_{n,i}\) for all samples \(j=1,2,\ldots,J\). This approximation is consistent in the sense that the resulting Dirac measure given by \(S_{n}^{J}\) converges weakly to that of \(p(\phi\mid y_{1:n};\psi)\) as \(J\to\infty\) for all \(n\) (Chopin and Papaspiliopoulos, 2020). Furthermore, the method is particularly favourable in the pBNN context, as the dimension \(d\) of the stochastic part of the pBNN is usually not large, which in turn allows us to use significantly more MC samples compared to full BNNs. This method dates back to Neal (2001), who simulates static target distributions with a likelihood tempering (see also Chopin, 2002).
However, the SMC chain using Equation (2) rarely works in reality, since the samples \(\{\phi_{n,j}\}_{j=1}^{J}\) will become less informative and the weights \(\{w_{n,j}\}_{j=1}^{J}\) will degenerate as \(n\) increases (Del Moral et al., 2006). In practice, we additionally introduce a Markov transition kernel \(h_{n}(\cdot\mid\phi_{n-1})\) between each step \((n-1,n)\), so that the samples \(\{\phi_{n,j}\}_{j=1}^{J}\) are perturbed according to the kernel. Specifically, the update of the samples in Equation (2) is modified to
\[\phi_{n,j}\mid\phi_{n-1,j}\sim h_{n}(\phi_{n,j}\mid\phi_{n-1,j}). \tag{3}\]
Then, we arrive at the so-called (marginal) Feynman-Kac model
\[q_{N}(\phi_{N}\mid y_{1:N};\psi) \tag{4}\] \[\coloneqq\frac{1}{l_{N}(\psi)}\int\prod_{n=1}^{N}p(y_{n}\mid\phi _{n};\psi)\prod_{n=1}^{N}h_{n}(\phi_{n}\mid\phi_{n-1})\] \[\qquad\qquad\qquad\times\pi(\phi_{0})\,\mathrm{d}\phi_{0:N-1},\]
where \(l_{N}(\psi)\) is the normalising constant (see the definition of Feynman-Kac in Chopin and Papaspiliopoulos, 2020, Chap. 5). Now, to guarantee that the terminal \(q_{N}(\phi_{N}\mid y_{1:N};\psi)\) exactly hits the target posterior distribution \(p(\phi\mid y_{1:N};\psi)\), we need to choose the Markov kernel in a way that \(h_{n}\) leaves the previous posterior distribution \(p(\phi\mid y_{1:n-1};\psi)\) invariant, viz., \(\int h_{n}(\phi\mid\phi_{n-1})\,\rho(\phi_{n-1})\,\mathrm{d}\phi_{n-1}=p(\phi \mid y_{1:n-1};\psi)\) for any operating distribution \(\rho\). Note that \(h_{n}\) indeed depends on \(y_{1:n-1}\) and \(\psi\), but we omit them for clean notation. This choice of Markov kernel also ensures that the marginal likelihood is given by \(l_{N}(\psi)=\prod_{n=1}^{N}z_{n}(\psi)=p(y_{1:N};\psi)\).
It is now clear that computing the target posterior distribution \(p(\phi\mid y_{1:N};\psi)\) amounts to solving the Feynman-Kac model in Equation (4). The SMC sampler simulates the Feynman-Kac model with the weighted samples \(S_{n}^{J}\) for \(n=0,1,\ldots,N\) using Equations (2) and (3). Moreover, recall that we need to estimate the parameter \(\psi\) via MLE
\[\operatorname*{arg\,min}_{\psi\in\mathbb{R}^{w}}-\log l_{N}(\psi).\]
It turns out that the SMC sampler produces a consistent estimate \(z_{n}(\psi)\approx\sum_{j=1}^{J}\overline{w}_{n,j}\), so that we can estimate the log-likelihood via \(\log l_{N}(\psi)\approx\sum_{n=1}^{N}\log\bigl{(}\sum_{j=1}^{J}\overline{w}_{ n,j}\bigr{)}\) as a by-product of the computation of the posterior distributions. Then, we can solve the optimisation problem above by using any optimiser for training neural networks. For a detailed exposition of SMC parameter estimators, see, for example, Johansen et al. (2008), Schon et al. (2011), and Kantas et al. (2015). We summarise the SMC training of pBNNs in Algorithm 1 together with a gradient descent-based optimisation of the MLE objective. Within the algorithm, we use the shorthand \(\ell_{n}(\cdot)\) for the approximation to the marginal log-likelihood \(\log l_{n}(\cdot)\).
```
Inputs: Training data \(\{(x_{i},y_{i})\}_{i=1}^{N}\), number of samples \(J\), initial parameter \(\psi_{0}\), learning rate function \(r\)
Outputs: The MLE estimate \(\psi_{i}\) and weighted posterior samples \(\{(w_{N,j},\phi_{N,j})\}_{j=1}^{J}\sim p(\phi\mid y_{1:N};\psi_{i})\)
1  for \(i=1,2,\ldots\) until convergent do
2      Draw \(\{\phi_{0,j}\}_{j=1}^{J}\sim\pi(\phi)\)
3      \(w_{0,j}=1\,/\,J\) for all \(j=1,2,\ldots,J\)
4      \(\ell_{0}(\psi_{i-1})=0\)
5      for \(n=1\) to \(N\) do  // Parallelise \(j\)
6          Resample \(\{(w_{n-1,j},\phi_{n-1,j})\}_{j=1}^{J}\) if needed
7          Draw \(\phi_{n,j}\mid\phi_{n-1,j}\sim h_{n}(\phi_{n,j}\mid\phi_{n-1,j})\)
8          \(\overline{w}_{n,j}=w_{n-1,j}\,p(y_{n}\mid\phi_{n,j};\psi_{i-1})\)
9          \(\ell_{n}(\psi_{i-1})=\ell_{n-1}(\psi_{i-1})-\log\bigl{(}\sum_{j=1}^{J}\overline{w}_{n,j}\bigr{)}\)
10         \(w_{n,j}=\overline{w}_{n,j}\,/\,\sum_{k=1}^{J}\overline{w}_{n,k}\)
11     end for
12     \(\psi_{i}=\psi_{i-1}-r(i)\,\nabla\ell_{N}(\psi_{i-1})\)
13 end for
```
**Algorithm 1** SMC sampler for pBNN
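To make the recursion concrete, the following is a minimal NumPy sketch of one sweep of the inner loop of Algorithm 1. The model-specific pieces (`log_lik`, `prior_sample`, and the Markov kernel) are assumed user-supplied placeholders rather than the models used here, and resampling is triggered by a simple effective-sample-size rule.

```python
import numpy as np

def smc_sweep(ys, log_lik, prior_sample, kernel, J=1000, rng=None):
    """Inner loop of Algorithm 1: returns weighted samples of
    p(phi | y_{1:N}; psi) and the accumulated negative log-likelihood."""
    rng = rng or np.random.default_rng(0)
    phis = prior_sample(J, rng)                 # {phi_{0,j}} ~ pi(phi)
    ws = np.full(J, 1.0 / J)                    # w_{0,j} = 1 / J
    ell = 0.0                                   # ell_0 = 0
    for n, y in enumerate(ys, start=1):
        if 1.0 / np.sum(ws**2) < 0.5 * J:       # resample if ESS degenerates
            idx = rng.choice(J, size=J, p=ws)
            phis, ws = phis[idx], np.full(J, 1.0 / J)
        phis = kernel(phis, ys[:n - 1], rng)    # move by h_n, invariant w.r.t.
                                                # p(phi | y_{1:n-1}; psi)
        w_bar = ws * np.exp(log_lik(y, phis))   # unnormalised weights
        ell -= np.log(np.sum(w_bar))            # ell_n = ell_{n-1} - log sum_j
        ws = w_bar / np.sum(w_bar)
    return ws, phis, ell
```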
## 3 Scalable sequential Monte Carlo samplers
We still have a few critical challenges left to solve in order to apply the SMC sampler in Algorithm 1 for training pBNNs. First, the gradient computation \(\nabla\ell_{N}\) might be biased due to the non-differentiability of the resampling and Markov transition steps. Second, it is not easy to design the Markov kernel \(h_{n}\) and also to compute the Markov moves. Third, the algorithm does not scale well in the number of data points \(N\). In particular, for each gradient descent step to compute \(\nabla\ell_{N}\), we need a complete SMC loop over the entire dataset. In what follows, we detail these issues and show how to tackle them.
### Compensating gradient biases
The non-differentiability is a notorious issue for sequential Monte Carlo samplers, since most resampling techniques (e.g., systematic and stratified) and MCMC algorithms incur discrete randomness. This in turn induces biases in the gradient computation \(\nabla\ell_{N}\). There are recent developments to tackle the differentiability issue by, for example, optimal transport-based smooth resampling (Corenflos et al., 2021) and coupled MCMC (Arya et al., 2022, 2023), but they come at the cost of introducing additional computation costs and calibrations. However, we can in fact avoid differentiating the SMC algorithm by invoking Fisher's identity (see, e.g., Cappé et al., 2005, Chap. 10), which states that \(\nabla\mathrm{log}\,l_{N}(\psi)=\int\nabla\mathrm{log}\,p(y_{1:N},\phi;\psi)\,p(\phi\mid y_{1:N};\psi)\,\mathrm{d}\phi\). Moreover, unlike particle filtering in the system identification context (see, e.g., Poyiadjis et al., 2011, Sec. 2.2), the model design facilitates \(\nabla\mathrm{log}\,p(y_{1:N},\phi;\psi)=\nabla\mathrm{log}\,p(y_{1:N}\mid\phi;\psi)\). Hence, we can modify the gradient computation in Algorithm 1 to
\[\nabla\ell_{N}(\psi)=-\sum_{j=1}^{J}w_{N,j}\,\nabla\mathrm{log}\,p(y_{1:N}\mid \phi_{N,j};\psi), \tag{5}\]
which only requires differentiating the likelihood.
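As a sketch, the estimator in Equation (5) is a weighted average over the terminal particles. Here `grad_log_lik(phi, psi)` is an assumed user-supplied function returning \(\nabla_{\psi}\log p(y_{1:N}\mid\phi;\psi)\), e.g. obtained by automatic differentiation.

```python
import numpy as np

def fisher_gradient(ws, phis, psi, grad_log_lik):
    """Equation (5): the gradient of the negative log-likelihood is the
    negated weighted average of the per-particle likelihood gradients."""
    grads = np.stack([grad_log_lik(phi, psi) for phi in phis])
    return -np.einsum('j,j...->...', ws, grads)
```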
### Choosing the Markov kernel
Recall the definition of the Markov kernel \(h_{n}\): It is invariant with respect to the posterior distribution \(p(\phi\mid y_{1:n-1};\psi)\). Since the energy \(p(y_{1:n-1}\mid\phi;\psi)\,\pi(\phi)\) is usually analytically available, it is common to use any MCMC chain for \(h_{n}\) (Del Moral et al., 2006). In particular, when the problem dimension \(d\) is low, using a standard random walk Metropolis-Rosenbluth-Teller-Hastings MCMC suffices in practice. If the dimension is relatively high, we can also leverage the gradient information of the energy to define the kernel with, for instance, Langevin dynamics. However, it is worth remarking that choosing the number of MCMC steps is tricky, and this still remains an open challenge in the SMC community, see, for example, the discussion in Chopin and Papaspiliopoulos (2020, Sec. 17.2) or a recent development in Dau and Chopin (2021).
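For illustration, a single random-walk Metropolis step that leaves a target with unnormalised log-density `log_post` invariant can be sketched as below; the step size is an assumed tuning parameter, and `log_post` would evaluate \(\log p(y_{1:n-1}\mid\phi;\psi)+\log\pi(\phi)\) row-wise over the particles.

```python
import numpy as np

def rw_metropolis(phis, log_post, step=0.1, rng=None):
    """One random-walk Metropolis step per particle; phis has shape (J, d)."""
    rng = rng or np.random.default_rng(0)
    proposals = phis + step * rng.standard_normal(phis.shape)
    log_alpha = log_post(proposals) - log_post(phis)   # log acceptance ratio
    accept = np.log(rng.uniform(size=len(phis))) < log_alpha
    return np.where(accept[:, None], proposals, phis)  # move or stay
```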
The computational problem of using MCMC for the kernels is that at each \(n\), we need to load all the data before \(n\) to evaluate the energy function. For gradient-based MCMCs, we also need to compute \(\nabla_{\phi}\,\mathrm{log}\,p(\phi\mid y_{1:n-1};\psi)\). This computation becomes even more demanding when \(n\) approaches \(N\) (which is large), and hence the computational cost grows at least quadratically in \(n\). A straightforward remedy to this problem is to use pseudo-marginal MCMC samplers (see, e.g., Andrieu and Roberts, 2009; Welling and Teh, 2011; Chen et al., 2014; Zhang et al., 2020). However, these mini-batching MCMC samplers usually take long mixing steps and are hard to calibrate, which contradicts the gist of using MCMC kernels in SMC: An ideal SMC sampler should need as few mixing steps as possible. Furthermore, the variance of the estimator increases in \(n\), and it is likely that we need to adaptively increase the batch size to control the error. However, it turns out that we can rectify this problem by moving the stochastic mini-batching approximation outside of the Markov kernel to that of the gradient step as explained in the next section.
### Stochastic gradient sequential Monte Carlo
To make Algorithm 1 scalable in the number of data points \(N\), it is natural to approximate the marginal log-likelihood by a stochastic approximation, as in stochastic gradient descent. Let \(1\leq M\leq N\) denote the batch size and let \(S_{M}:=\{S_{M}(1),S_{M}(2),\ldots,S_{M}(M)\}\) be a sequence of independent random integers (uniformly distributed in \([1,N]\)) that represent the batch indices. We may approximate the marginal log-likelihood by \(\mathrm{log}\,p(y_{1:N};\psi)\approx N/\,M\,\mathrm{log}\,p(y_{S_{M}};\psi)\), where \(y_{S_{M}}:=\{y_{S_{M}(1)},y_{S_{M}(2)},\ldots,y_{S_{M}(M)}\}\) represents the corresponding subdataset. The same also goes for the gradient computation. More specifically,
\[\begin{split}&\nabla\mathrm{log}\,p(y_{1:N};\psi)\\ &\approx\frac{N}{M}\,\mathbb{E}\big{[}\nabla\mathrm{log}\,p(y_{S_{M }};\psi)\big{]}\\ &=\frac{N}{M}\int\mathbb{E}\big{[}\nabla\mathrm{log}\,p(\phi,y_{ S_{M}};\psi)\,p(\phi\mid y_{S_{M}};\psi)\big{]}\,\mathrm{d}\phi,\end{split} \tag{6}\]
where the expectation is taken over \(S_{M}\). However, due to the latent variable \(\phi\), the approximation \(N/\,M\,\mathrm{log}\,p(y_{S_{M}};\psi)\) is biased. Consequently, the gradient is also biased, and the bias scales in the difference
between \(p(\phi\mid y_{S_{M}})\) and \(p(\phi\mid y_{1:N})\).
Using the approximation in Equation (6) amounts to running Algorithm 1 on the subdataset \(y_{S_{M}}\) and then using the posterior samples of \(p(\phi\mid y_{S_{M}};\psi)\) to compute the gradient \(\int\nabla\!\log p(\phi,y_{S_{M}};\psi)\,p(\phi\mid y_{S_{M}};\psi)\,\mathrm{d}\phi\). We then arrive at the stochastic gradient SMC sampler summarised in Algorithm 2.
```
Inputs: The same as in Algorithm 1, and batch size \(M\)
Outputs: The MLE estimate \(\psi_{i}\)
1  for \(i=1,2,\ldots\) until convergent do
2      Draw \(\{\phi_{0,j}\}_{j=1}^{J}\sim\pi(\phi)\)
3      \(w_{0,j}=1\,/\,J\) for all \(j=1,2,\ldots,J\)
4      Draw subdataset \(y_{S_{M}}\subseteq y_{1:N}\)
5      for \(n=1\) to \(M\) do  // Parallelise \(j\)
6          Resample \(\{(w_{n-1,j},\phi_{n-1,j})\}_{j=1}^{J}\) if needed
7          Draw \(\phi_{n,j}\mid\phi_{n-1,j}\sim h_{n}(\phi_{n,j}\mid\phi_{n-1,j})\)
8          \(\overline{w}_{n,j}=w_{n-1,j}\,p(y_{S_{M}}(n)\mid\phi_{n,j};\psi_{i-1})\)
9          \(w_{n,j}=\overline{w}_{n,j}\,/\,\sum_{k=1}^{J}\overline{w}_{n,k}\)
10     end for
11     \(g(\psi_{i-1})=\frac{N}{M}\sum_{j=1}^{J}w_{M,j}\,\nabla\!\log p(y_{S_{M}}\mid\phi_{M,j};\psi_{i-1})\)
12     \(\psi_{i}=\psi_{i-1}+r(i)\,g(\psi_{i-1})\)
13 end for
```
**Algorithm 2** Stochastic gradient SMC sampler for pBNN
Compared to Algorithm 1, the stochastic gradient version in Algorithm 2 does not have to load the entire dataset to compute the gradient. More importantly, this also alleviates the Markov kernel design dilemma in Section 3.2 in the following way. The kernel \(h_{n}\) is now chosen to be invariant with respect to \(p(\phi\mid y_{S_{M}(1)},\ldots,y_{S_{M}(n-1)})\), which is far easier to compute than the original when \(M\ll N\). Essentially, the algorithm is a direct application of the stochastic gradient method on a latent variable model, using SMC samplers to approximate the gradient.
### Open-horizon sequential Monte Carlo
Algorithm 2 defines an approximate flow of gradient that can be computed efficiently by applying SMC samplers in a closed data horizon. However, the algorithm does not directly output the target posterior distribution \(p(\phi\mid y_{1:N};\psi)\), unlike Algorithm 1. To compute the posterior distribution, we may need to run another Bayesian inference (e.g., by SMC or MCMC) based upon the estimated \(\psi\). Moreover, for every optimisation step, the SMC estimators are independent and cold-start from the prior \(\pi\). This causes a waste of computation, since Algorithm 2 does compute the posterior distributions \(p(\phi\mid y_{S_{M}};\psi_{i})\) at subdatasets, which are approximations to the target posterior distribution. In light of this observation, we can make Algorithm 2 even more efficient by linking the posterior distribution estimates across iterations in conjunction with the gradient updates. More specifically, we modify Algorithm 2 by warm-starting each SMC sampler from the previous posterior distribution estimate and performing the gradient update in conjunction with the SMC sampler. We then arrive at an SMC sampler that simultaneously estimates the posterior distribution and the parameter, summarised in Algorithm 3.
```
Inputs: Same as in Algorithm 2
Outputs: Same as in Algorithm 1
1  Draw \(\{\phi_{0,j}\}_{j=1}^{J}\sim\pi(\phi)\)
2  \(w_{0,j}=1\,/\,J\) for all \(j=1,2,\ldots,J\)
3  for \(i=1,2,\ldots\) until convergent do  // Parallelise \(j\)
4      Draw subdataset \(y_{S_{M}^{i}}\subseteq y_{1:N}\)
5      Resample \(\{(w_{i-1,j},\phi_{i-1,j})\}_{j=1}^{J}\) if needed
6      Draw \(\phi_{i,j}\mid\phi_{i-1,j}\sim\hat{h}_{i}(\phi_{i,j}\mid\phi_{i-1,j})\)
7      \(\overline{w}_{i,j}=w_{i-1,j}\,p(y_{S_{M}^{i}}\mid\phi_{i,j};\psi_{i-1})\)
8      \(w_{i,j}=\overline{w}_{i,j}\,/\,\sum_{s=1}^{J}\overline{w}_{i,s}\)
9      \(g(\psi_{i-1})=\frac{N}{M}\sum_{j=1}^{J}w_{i,j}\,\nabla\!\log p(y_{S_{M}^{i}}\mid\phi_{i,j};\psi_{i-1})\)
10     \(\psi_{i}=\psi_{i-1}+r(i)\,g(\psi_{i-1})\)
11 end for
```
**Algorithm 3** Open-horizon SMC sampler
In Algorithm 3, we start from samples drawn from the given prior \(\pi\). Then, at each iteration \(i\), we randomly draw a subdataset \(y_{S_{M}^{i}}\) of \(y_{1:N}\) and compute the posterior distribution and gradient on the subdataset by using the previous estimate. Equivalently, the algorithm is an SMC sampler applied on a growing dataset with open horizon instead of \(y_{1:N}\) which has a fixed size (cf. Kantas et al., 2015, Eq. 5.6). Compared to Algorithms 1 and 2, this open-horizon SMC sampler is computationally more efficient, especially when \(N\) is large.
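A minimal NumPy sketch of one iteration of Algorithm 3 follows; the vectorised `log_lik`, `grad_log_lik`, and `kernel` functions are assumed to be supplied by the user, and the weight update is done in log-space for numerical stability.

```python
import numpy as np

def ohsmc_step(phis, ws, psi, batch, N, log_lik, grad_log_lik, kernel, lr, rng):
    """One iteration of Algorithm 3 on a mini-batch of size M = len(batch)."""
    J, M = len(ws), len(batch)
    if 1.0 / np.sum(ws**2) < 0.5 * J:                # resample if needed
        idx = rng.choice(J, size=J, p=ws)
        phis, ws = phis[idx], np.full(J, 1.0 / J)
    phis = kernel(phis, rng)                         # Markov move by h_i
    log_w = np.log(ws) + log_lik(batch, phis, psi)   # reweight on the batch
    log_w -= log_w.max()                             # stabilise before exp
    ws = np.exp(log_w) / np.sum(np.exp(log_w))
    g = N / M * np.einsum('j,j...->...', ws, grad_log_lik(batch, phis, psi))
    return phis, ws, psi + lr * g                    # gradient ascent step
```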
However, Algorithm 3 no longer targets the posterior distribution \(p(\phi\mid y_{1:N};\psi)\). To see what the algorithm does, let us fix the parameter \(\psi\), to find that the algorithm computes the following Feynman-Kac model
\[\begin{split}\frac{1}{\hat{l}_{P}(\psi)}\int\prod_{i=1}^{P}& p(y_{S_{M}^{i}}\mid\phi_{i};\psi)\prod_{i=1}^{P}\hat{h}_{i}(\phi_{i}\mid \phi_{i-1})\\ &\times\pi(\phi_{0})\,\mathrm{d}\phi_{0:P-1},\end{split} \tag{7}\]
where \(P\) is the number of iterations for the algorithm, and \(\hat{l}_{P}\) is the normalising constant. Evidently, Equation (7) is neither equal to the original Feynman-Kac model in Equation (4), nor to its expectation. It is, however, possible to use Poisson estimators (Beskos et al., 2006; Fearnhead et al., 2008; Jacob and Thiery,
2015) to make Equation (7) an unbiased estimator of the original Feynman-Kac model. To do so, we need to let the number of iterations \(P\sim\text{Poisson}(\lambda)\) be a Poisson random variable and modify the weight update in Algorithm 3 to
\[\overline{w}_{i,j}=w_{i-1,j}\,\Big{(}\frac{N}{M}\log p(y_{S_{M}^{i}}\mid\phi_{i, j};\psi_{i-1})+\log c\Big{)},\]
where \(c\) is a constant that guarantees the positivity of the weights almost surely. Then, we can show by the Poisson estimator that
\[\mathbb{E}\bigg{[}\frac{\mathrm{e}^{\lambda}}{c}\prod_{i=1}^{P} \frac{1}{\lambda}\,\Big{(}\frac{N}{M}\log p(y_{S_{M}^{i}}\mid\phi;\psi)+\log c \Big{)}\bigg{]}\] \[=\prod_{i=1}^{N}p(y_{i}\mid\phi;\psi),\]
where the expectation is taken on both \(P\) and \(S_{M}\). However, finding such a positive constant \(c\) is still an open but stimulating challenge in the statistics community (see, e.g., a recent progress in Jin et al., 2022).
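The mechanism behind this identity can be checked numerically on a toy example: for i.i.d. \(X_{i}\) with mean \(\mu\) and \(P\sim\text{Poisson}(\lambda)\), the estimator \(\mathrm{e}^{\lambda}\prod_{i=1}^{P}X_{i}\,/\,\lambda\) is unbiased for \(\mathrm{e}^{\mu}\). The following sketch illustrates this with arbitrary parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu, sigma = 5.0, 0.3, 0.2          # Poisson rate; X_i ~ N(mu, sigma^2)

def poisson_estimate():
    P = rng.poisson(lam)                # random number of factors
    xs = rng.normal(mu, sigma, size=P)  # empty product defaults to 1
    return np.exp(lam) * np.prod(xs / lam)

est = np.mean([poisson_estimate() for _ in range(200_000)])
print(f"estimate: {est:.3f}, target exp(mu): {np.exp(mu):.3f}")
```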
For training pBNNs, it may not be essential to debias Algorithm 3. In practice, we use BNNs/pBNNs as powerful predictive models to quantify uncertainty. Hence, computing the exact posterior distribution is often unnecessary from a practical point of view (Wilson and Izmailov, 2020; Izmailov et al., 2021). As a consequence, the prior is no longer fixed but is a flexible model free to choose so as to aim for better predictive performance. To this end, we can further relax the invariance property of the Markov kernel \(\hat{h}\). This relaxation bridges a connection to the classical stochastic filtering approaches for optimisations (Bell, 1994; Gerber and Douc, 2021). As an example, the extended Kalman filter approximations to the algorithm are stochastic natural gradient descents with information matrices determined by the Markov kernel (Olivier, 2018; Martens, 2020). As another example, if we choose the Markov kernel to be that of a Brownian motion, then the resulting algorithm is akin to using stochastic filters for training neural networks with uncertainties (Singhal and Wu, 1988; De Freitas et al., 2000; Chang et al., 2022).
It is worth remarking on a minor practical advantage of Algorithm 3 over Algorithm 2. The Markov kernel in Algorithm 2 is a function that has dynamic input sizes. This means that implementing the algorithm in commonly used automatic differentiation libraries which assume static input shapes (e.g., JAX and TensorFlow) is not trivial. On the other hand, Algorithm 3 has no such issue.
## 4 Experiments
In this section, we evaluate our proposed Algorithms 2 and 3 which we call SGSMC and OHSMC, respectively, in several ways. First, we test the methods on a synthetic model to see if they can recover the true parameters and posterior distribution. Then, we train pBNNs using the methods for regression and classification tasks on synthetic, UCI, and MNIST datasets. Our implementation is publicly available at [https://github.com/zgbkdlm/pbnn](https://github.com/zgbkdlm/pbnn).
**Baselines** We compare against (i) the maximum a posteriori (MAP) method for estimating the parameter and use Hamiltonian Monte Carlo (MAP-HMC) to compute the posterior distribution based on the learnt parameter (Sharma et al., 2023, Sec. 6), (ii) stochastic weight averaging Gaussian (SWAG, Maddox et al., 2019) with the MAP objective function, and (iii) stochastic mean-field Gaussian variational Bayes (VB, Hoffman et al., 2013). In addition, SGSMC-HMC refers to sampling the posterior distribution by HMC and estimating the parameters by SGSMC.
**Settings** We use Adam for all methods with a learning rate of 0.01 unless stated otherwise. As for the prior, we consistently employ a standard Gaussian distribution. Whenever sampling from the posterior distribution is needed, we use \(J=1,000\) samples. We use the same amount for evaluations. For SGSMC, we use an MCMC random walk kernel, whereas for OHSMC, we use a random walk kernel (with variance 0.01). VB uses 100 MC samples to approximate the evidence lower bound. Further details (e.g., batch sizes, number of epochs, and pBNN structures) for all the experiments are found in the supplementary materials.
### Synthetic parameter estimation
Consider a model
\[\begin{bmatrix}\phi_{0}\\ \phi_{1}\end{bmatrix} \sim\mathrm{N}\bigg{(}\begin{bmatrix}0\\ 0\end{bmatrix},\begin{bmatrix}2&0\\ 0&1\end{bmatrix}\bigg{)}, \tag{8}\] \[y_{n}\mid\phi \sim\mathrm{N}\Big{(}\frac{\phi_{1}}{\psi}+\frac{1}{2}\,(\phi_{0} ^{2}+\psi^{2}),1\Big{)},\]
where we generate a sequence of 100 independent data points \(y_{1:100}\) under the true parameter \(\psi=1\) and the realisation \(\phi=\begin{bmatrix}0&0\end{bmatrix}\). The goal is to estimate the parameter \(\psi\) and also to compute the posterior distribution which has a non-Gaussian crescent shape. For this model, we can conveniently compute a tight lower bound of the MLE objective by brute-force Monte Carlo with 10,000 samples, which we call MC. However, for training pBNNs, the MC approach is in general not applicable.
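For reference, a sketch of the per-particle log-likelihood of this model, on which the SMC weight updates would operate, could look as follows (shapes and naming are our own illustrative choices, not those of the released code).

```python
import numpy as np

def log_lik_crescent(y, phis, psi):
    """log N(y | phi_1 / psi + (phi_0^2 + psi^2) / 2, 1) from Equation (8),
    evaluated for an array of particles phis of shape (J, 2)."""
    mean = phis[:, 1] / psi + 0.5 * (phis[:, 0] ** 2 + psi ** 2)
    return -0.5 * (y - mean) ** 2 - 0.5 * np.log(2.0 * np.pi)
```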
In Figure 1, we see that the MC estimate is closest to the truth followed by OHSMC, VB, and SGSMC. The MAP estimator on the other hand, significantly diverges from the truth. Moreover, from Figure 2 we see that the OHSMC samples are close to the true density, although the samples do not explore the tails of the true distribution. The VB estimate is correct in the mean approximation, but the Gaussian approximation does not fit the shape of the true distribution. The SWAG method, which uses the results based on the MAP estimate, produces a covariance matrix whose diagonal is numerically zero.
### Synthetic regression and classification
**Regression** Next, we benchmark the algorithms on a synthetic regression problem where we have access to the true underlying function. The training, validation, and test data are generated as per \(y_{n}=f(x_{n})+\xi_{n}\), for \(n=1,2,\ldots,100\), where \(f(x)\coloneqq x\sin(x\tanh(x))\), and i.i.d. noise \(\xi_{n}\sim\mathrm{N}(0,1)\). We use a three-layer pBNN (with output sizes 20, 10, and 1), where the stochastic part is on the second hidden layer. We repeat the experiment 100 times and report the averaged negative log predictive density (NLPD) on the test data, and root mean-square error (RMSE) on the true function including their standard deviations.
Table 1 shows that the proposed OHSMC method performs best in both metrics. Furthermore, Figure 3 shows that the predictive samples drawn from the pBNNs learnt by OHSMC best capture the local optima of the true function. Moreover, OHSMC extrapolates the regression problem best.
**Classification** As for classification, we test the methods on a synthetic two-moon dataset. Apart from NLPD, we also report the expected calibration error (ECE, Guo et al., 2017) and accuracy. The utilised neural network has four dense layers (with output sizes 100, 20, 5, and 1), where the stochastic part is the third layer.
Table 2 shows that our OHSMC either performs better than its peers or on par in terms of NLPD. According to the ECE and accuracy metrics, OHSMC is best.
### UCI regression and classification
We now move from synthetic experiments to real-world UCI data (Kelly et al., Accessed 2023). Throughout, we use a neural network with four dense layers (with output sizes 50, 20, 5, and \(C\), where \(C\) is the number of labels), and place the stochastic part on the third layer. All experiments are repeated ten times and we report averaged metrics.
Table 3 shows the results on two regression and two classification UCI datasets. As before, our proposed OHSMC method either outperforms or performs on par with other methods in terms of NLPD for all datasets. As for regression, the RMSE is significantly lower for OHSMC. Moreover, our methods are marginally better than the others when evaluating ECE for the classification tasks. Results on additional datasets can be found in the supplementary materials.
### MNIST classification
We also consider classification on MNIST data (LeCun et al., Accessed 2023) by using a neural network with two convolutional layers followed by two dense layers. The stochastic part is on the first convolution layer. Due to their high computational cost, HMC methods do not apply here. Table 4 shows that our OHSMC
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & NLPD (std.) & ECE (std.) & Acc. (std.) \\ \hline MAP-HMC & **0.28** (0.06) & 0.07 (0.01) & 0.87 (0.02) \\ OHSMC & **0.28** (0.07) & **0.06** (0.01) & **0.88** (0.02) \\ SGSMC-HMC & 0.32 (0.08) & 0.08 (0.01) & 0.86 (0.03) \\ SWAG & 0.31 (0.06) & 0.07 (0.01) & 0.86 (0.03) \\ VB & 0.29 (0.05) & 0.08 (0.01) & 0.86 (0.03) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classification results in two-moon data.
method significantly outperforms the other methods according to all three metrics.
In this MNIST task, OHSMC (with 100 samples), SWAG, and VB (with 100 samples) take around 0.80, 0.26, and 0.33 seconds, respectively, for running 1,000 iterations with batch size 20. Note that the time for OHSMC includes _both_ the parameter estimation and the posterior sampling, while for SWAG the time is only for parameter estimation. The times are profiled on an NVIDIA A100 40GB GPU.
## 5 Conclusions
In this paper, we have presented a sequential Monte Carlo (SMC) routine to efficiently train partial Bayesian neural networks (pBNNs). Specifically, we have proposed two approximate SMC samplers that are suitable for estimating the parameters and posterior distributions of pBNNs. The proposed training scheme either outperforms, or performs on par with, state-of-the-art methods on a variety of datasets.
Limitations and future workThe proposed training scheme is immediately parallelisable on GPUs but at the cost of a high memory usage. An interesting future work is to theoretically analyse the convergence
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & NLPD (std.) & ECE (std.) & Acc. (std.) \\ \hline OHSMC & **2.87** (0.24) & **0.35** (0.06) & **99.02\%** (0.07) \\ SWAG & 4.27 (0.83) & 1.17 (0.43) & 97.76\% (0.71) \\ VB & 4.53 (0.28) & 0.44 (0.08) & 98.51\% (0.09) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Classification results on MNIST. Within the table, NLPD and ECE are scaled by \(10^{-2}\).
Figure 4: Visualisation of the two-moons classifications. The scatter points are the test data with hollow/solid representing the label. The grey lines represent the classification hyperplanes sampled from the trained pBNNs.
\begin{table}
\begin{tabular}{r c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{yacht} & \multicolumn{2}{c}{energy} & \multicolumn{2}{c}{satellite} & \multicolumn{2}{c}{ionosphere} \\ \cline{2-9} & NLPD & RMSE & NLPD & RMSE & NLPD & ECE & NLPD & ECE \\ \hline MAP-HMC & 0.95 (0.01) & 0.27 (0.04) & 0.94 (0.00) & 0.24 (0.02) & 0.28 (0.03) & 0.04 (0.00) & **0.21** (0.09) & 0.09 (0.03) \\ OHSMC & **0.92** (0.00) & **0.11** (0.04) & **0.92** (0.00) & **0.09** (0.00) & **0.27** (0.03) & **0.03** (0.00) & **0.21** (0.13) & **0.08** (0.03) \\ SGSMC-HMC & 0.93 (0.00) & 0.17 (0.02) & **0.92** (0.00) & 0.14 (0.02) & 0.84 (0.70) & 0.10 (0.10) & 0.24 (0.19) & **0.08** (0.04) \\ SWAG & 0.95 (0.02) & 0.25 (0.08) & 0.95 (0.00) & 0.25 (0.02) & 0.35 (0.07) & 0.07 (0.03) & 0.36 (0.12) & 0.14 (0.03) \\ VB & 0.94 (0.00) & 0.20 (0.02) & 0.93 (0.00) & 0.19 (0.03) & 0.28 (0.04) & **0.03** (0.00) & 0.28 (0.16) & 0.10 (0.04) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results on two regression and two classification UCI datasets. Results on more UCI datasets are provided in the supplementary material.
Figure 3: Visualisation of the synthetic regression. The scatter points represent the test data and the dashed line depicts the true function. The grey lines are predictive samples from their learnt pBNNs.
of Algorithm 3 as a general method for inference in latent variable models, since the algorithm carries a joint flow of both gradient and distribution.
## Acknowledgements
The authors would like to thank Adrien Corenflos for his comments. This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and by the Kjell och Märta Beijer Foundation. The computations/data handling were enabled by the supercomputing resource Berzelius provided by the National Supercomputer Centre at Linköping University and the Knut and Alice Wallenberg Foundation.
|
2304.08856 | Stochastic spin-orbit-torque device as the STDP synapse for spiking
neural networks | Neuromorphic hardware as a non-Von Neumann architecture has better energy
efficiency and parallelism than the conventional computer. Here, with numerical
modeling spin-orbit torque (SOT) device using current-induced SOT and Joule
heating effects, we acquire its magnetization switching probability as a
function of the input current pulses and use it to mimic the
spike-timing-dependent plasticity learning behavior like actual brain working.
We further demonstrate that the artificial spiking neural network (SNN) built
by this SOT device can perform unsupervised handwritten digit recognition with
the accuracy of 80% and logic operation learning. Our work provides a new clue
to achieving SNN-based neuromorphic hardware using high-energy efficiency and
nonvolatile spintronics nanodevices | Haotian Li, Liyuan Li, Kaiyuan Zhou, Chunjie Yan, Zhenyu Gao, Zishuang Li, Ronghua Liu | 2023-04-18T09:41:39Z | http://arxiv.org/abs/2304.08856v1 | # Stochastic spin-orbit-torque device as the STDP synapse for spiking neural networks
###### Abstract
Neuromorphic hardware, as a non-Von Neumann architecture, has better energy efficiency and parallelism than the conventional computer. Here, by numerically modeling a spin-orbit torque (SOT) device using current-induced SOT and Joule heating effects, we acquire its stochastic magnetization switching probability as a function of the interval time between input current pulses and use it to mimic the spike-timing-dependent plasticity learning behavior of the actual working brain. We further demonstrate that the artificial spiking neural network (SNN) built from this SOT device can perform unsupervised handwritten digit recognition with an accuracy of 80% and logic operation learning. Our work provides a new clue to achieving SNN-based neuromorphic hardware using high-energy-efficiency and nonvolatile spintronic nanodevices.
**Spin-orbit torque, Neuromorphic hardware, Spiking neural network, Stochastic magnetization reversal**
**PACS number(s):** 75.76.+j, 84.35.+i, 75.78.Jp, 85.70.Ay
**Citation:** Li H T, Li L Y, Zhou K Y, et al. Stochastic spin-orbit-torque device as the STDP synapse for spiking neural networks. Sci. China-Phys. Mech. Astron.
## 1 Introduction
For the last two decades, artificial intelligence (AI) techniques have been exerting extraordinary influence over the way we live [1, 2]. Nowadays, the most widely used algorithm of AI is the convolutional neural network [3], but its tremendous amount of computation is hard to satisfy with the traditional computer under the Von Neumann architecture. Recently, inspired by the human brain, spiking neural networks (SNNs) using neuromorphic hardware have been proposed for AI applications (e.g., efficient signal processing and speech recognition), and show significantly lower energy consumption and higher data processing efficiency than comparable classical artificial neural networks (ANNs) [4, 5, 6]. SNNs are a special class of ANNs, where the neuronal units communicate using discrete spike sequences analogous to a biological neuron [7]. In SNNs, the inputs to a spiking neuron are discrete spikes arriving through synapses. The spikes can change the membrane voltage of postsynaptic neurons with a dynamic synaptic weight. The synaptic weight is the strength of a connection between two neurons, and it can be modulated pre- or postsynaptically. After a series of discrete spike inputs, the spiking neuron will produce an output spike if the membrane voltage reaches a certain firing threshold. Thus the learning process corresponds to the synaptic weight adaptation during training to fit the desired behavior according to the temporal order and interval time between the spikes of presynaptic and postsynaptic neurons, also called spike-timing-dependent plasticity (STDP) [8].
In recent years, some reports have suggested that spintronic
nanodevices with short-term memory and subnanosecond-scale nonlinear magnetodynamics are promising candidates to mimic neurons and synapses in SNNs because of their low power consumption, non-volatility, and stochastic behavior [9, 10, 11, 12]. In spintronics, SOT devices, without requiring a large working current passing through the active layer of the device, can achieve high endurance and ultrafast write and read operations [13, 14, 15, 16], and are expected to have better performance than conventional spin-transfer torque devices [17, 18, 19].
In this work, we report that a SOT device can implement an artificial synapse based on its current-dependent magnetization switching probability. We find that the temporal correlations between the spikes of presynaptic and postsynaptic neurons, which are the key to STDP, can be simulated by a pair of pulses with opposite polarities, a former and a latter one with an adjustable interval time, via the current-driven Joule heating effect in the SOT device. The magnetization switching probability as a function of the input current pulses can be used to mimic the STDP learning behavior. Furthermore, we build two physical SNNs using SOT devices as artificial synapses and achieve unsupervised handwritten digit recognition and logic operation learning tasks.
## 2 Experiments and simulations
### Experiments of SOT-induced magnetization switching
Figure 1(a) shows the schematic of a SOT Hall bar device and the definition of the coordinate. The multi-layer stack is deposited on an oxidized Si wafer by dc magnetron sputtering with the following sequence: substrate/Ta(2)/Pt(5)/Co(0.8)/Ta(2), where the number in parentheses is the thickness in nm. The Ta(2)/Pt(5) bottom layers are first patterned into 3 \(\mu\)m \(\times\) 10 \(\mu\)m cross Hall bar, and then a 2 \(\mu\)m \(\times\) 2 \(\mu\)m square Co(0.8)/Ta(2) bilayer is fabricated at the center of the cross Hall bar by combining electron beam lithography and sputtering. As shown in Fig. 1(b), the square Hall resistance hysteresis loop with the out-of-plane magnetic field reveals good perpendicular anisotropy of the magnetic film. Then we measure the current-induced magnetization switching of the device under an external magnetic field \(H_{x}\) parallel to the current flow (the \(x\)-axis). Figures 1(c) and 1(d) show examples of magnetization switching by applying \(I\) under \(\mu_{0}H_{x}\) = +200 mT and -200 mT. The magnetization begins to switch its direction when the applied dc current \(I\) is over the critical value \(I_{c}\) = 2 mA, and the switching polarity is determined by the direction of \(\mu_{0}H_{x}\) and \(I\)[20, 21, 22].
### Simulations for STDP curves
To demonstrate the SOT device with the capability of STDP for the application of SNN, we perform simulations with the experimental parameters to obtain the current-driven magnetization switching probability at different temperatures. Firstly, we can obtain the current-induced SOT-driven magnetization switching properties of the Co layer by using the open-source micromagnetic simulation software mumax\({}^{3}\) based on the generalized Landau-Lifshitz-Gilbert equation [23, 24, 25]:
\[\dot{\mathbf{m}}=-\gamma\mathbf{m}\times(\mathbf{H}_{eff}+a_{j}(\mathbf{m} \times\boldsymbol{\sigma})+b_{j}\boldsymbol{\sigma})+\alpha\mathbf{m}\times \dot{\mathbf{m}} \tag{1}\]
where \(\mathbf{m}\) is the reduced magnetization, \(\gamma\) is the gyromagnetic ratio, \(\alpha\) is the Gilbert damping constant, \(\boldsymbol{\sigma}\) is the unit vector of the spin polarization direction in the nonmagnetic heavy metal layers, \(\mathbf{H}_{eff}\) is the effective field including the external magnetic field \(H\), the effective perpendicular anisotropy field \(H_{k}\), and a fluctuating thermal field \(H_{t}\) [26, 27]. \(H_{t}\) is the Gaussian-distributed random fluctuation field with mean = 0 and standard deviation = \(\sqrt{2\alpha k_{B}T/(\gamma M_{S}U\delta t)}\), where \(k_{B}\) is the Boltzmann constant, \(T\) is the temperature of the ferromagnetic layer, \(M_{s}\) is the saturation magnetization, \(U\) is the volume of the ferromagnetic layer, and \(\delta t\) is the integration time step [28]. In our Pt/Co/Ta-based Hall bar device, the spin polarization of the generated spin current in the bottom Pt and top Ta layers by the spin Hall effect is along the \(y\)-axis, perpendicular to the current flow direction. \(a_{j}\) and \(b_{j}\) correspond to the damping-like and field-like terms, respectively.
The input current pulse with width \(\tau\) = 1.5 ns, amplitude
Figure 1: (a) Schematic of the structure of the SOT Hall bar device and the experimental setup. (b) Hall resistance \(R_{H}\)_vs._ out-of-plane magnetic field \(\mu_{0}H_{z}\) loop with a small current \(I\) = 1 mA. (c)-(d) Current-induced magnetization switching of the SOT device under an in-plane magnetic field parallel to the current flow direction \(\mu_{0}H_{x}\) of (c) +200 mT and (d) -200 mT.
\(I_{p}=1.2\) mA and 100 repeat cycles are used to determine the switching probability of the Co magnetization due to the spin current exerting spin torques. The spin current can be calculated from \(I_{p}\) and the spin Hall angle \(\alpha_{H}\). The parameters used in the micromagnetic simulation are listed in Table 1. The initial direction of magnetization is defined parallel to the \(z\)-axis (\(M_{z}=+1\)), and \(M_{z}\) will be reset to the initial state before each new repeat pulse. Figure 2(a) shows a representative \(M_{z}\)_vs._ time \(t\) curve (symbols) under a square current pulse (red line) and the external in-plane magnetic field \(H_{x}\) = 10 kA/m. One can see that the magnetization \(M_{z}\) exhibits probabilistic flip events at \(I_{p}\) = 1.2 mA, slightly smaller than the critical current \(I_{c}\), because the considered fluctuating thermal field \(H_{t}\) causes stochastic magnetization switching [29]. The switching probability can be described by \(P_{sw}=1-\exp(-\tau f_{0}\exp(-E_{B}/k_{B}T))\), where \(f_{0}\) is the attempt frequency, \(\tau\) is the current pulse width, \(T\) is the temperature of the magnetic system, and \(E_{B}\) is the energy barrier [30].
Next, the temperature of the ferromagnetic Co layer as a function of the time after the input current pulse can be accurately simulated by the electromagnetic heat module of COMSOL Multiphysics\({}^{\circledR}\). The actual Ta(2)/Pt(5)/Co(0.8)/Ta(2) cross Hall bar-based SOT device is modeled. The substrate SiO\({}_{2}\) is used as the heat sink with a reference temperature \(T_{ref}=293.15\) K. When a current pulse is injected into the SOT device, the Joule heat generated by the input current is localized in the metallic films due to the limited size of the cross Hall bar and the interfacial thermal resistance between the bottom Ta and the heat sink SiO\({}_{2}\). As a result, the temperature of the metallic films dramatically increases and ultimately reaches a saturation temperature after the thermal equilibration process. In the inverse process, the temperature of the metallic films sharply drops to \(T_{ref}\) when the current pulse ends. Based on the previous report [31], the gap heat conductance \(h_{g}\) between the bottom Ta and the heat sink SiO\({}_{2}\) is adopted as 600 MW/(m\({}^{2}\cdot\)K). All parameters used in the COMSOL calculation are also summarized in Table 1. Figure 2(b) shows the representative temperature evolution of the SOT device with the input square current pulse, well consistent with the previously reported results [31].
To illustrate the STDP of our Hall bar-based SOT device, we need two square current pulses with different amplitudes and polarities in one cycle. The preceding negative pulse \(I_{reset}\), with magnitude far above \(I_{c}\), is used to reset \(M_{z}\) to the initial state with 100% switching probability and produces a significant Joule heat-induced temperature rise, while the following positive pulse \(I_{p}\), with an amplitude below \(I_{c}\), causes stochastic switching and only tiny Joule heating. We define \(\Delta t\) as the interval time between the negative initial pulse and the positive stochastic switching pulse, in analogy to the interval time between the spike events of neurons. Therefore, the temperature variation of the Co layer caused by the current-induced Joule heating effect is a function of \(\Delta t\).
Now, we can obtain the switching probability as a function of \(\Delta t\) by micromagnetic simulation with Eq. 1, including the Gaussian-distributed random fluctuation thermal field \(H_{t}\). For a specific \(\Delta t\), the temperature of the Co layer corresponding to \(H_{t}\) can be obtained by the COMSOL calculation above. The magnetization switching probability as a function of \(T\) is further determined by micromagnetic simulation. Figure 2(c) shows two representative curves of the switching probability as a function of \(\Delta t\) at \(I_{p}=1\) mA and 1.2 mA. One can see that a smaller \(\Delta t\), corresponding to a shorter interval time between the positive stochastic switching pulse and the initial pulse, exhibits a higher magnetization switching probability \(P_{SW}\) due to the larger Joule heat-induced temperature rise. These two curves can be well fitted by the exponential function \(P_{sw}=1-\exp(-\tau f_{0}\exp(-E_{B}/k_{B}T))\) [30]. The fitting curves will be used as the STDP rules of the SNN discussed below.
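The shape of these curves can be reproduced qualitatively with a simple two-stage model: an exponential cooling law for \(T(\Delta t)\) after the reset pulse, followed by the switching law above for the stochastic pulse. The sketch below does exactly this; the thermal time constant, attempt frequency, and energy barrier are illustrative assumptions, not the fitted values of the simulations.

```python
import numpy as np

K_B = 1.380649e-23            # Boltzmann constant (J/K)
T_REF, T_MAX = 293.15, 563.0  # baseline and peak temperature (K), cf. Fig. 2(b)
TAU_TH = 5e-8                 # assumed thermal decay constant (s)
F0 = 1e10                     # assumed attempt frequency (Hz)
TAU_PULSE = 1.5e-9            # width of the stochastic switching pulse (s)
E_B = 1.5e-20                 # assumed energy barrier (J)

def temperature(dt):
    """Exponential cooling back to T_REF after the reset pulse (assumed model)."""
    return T_REF + (T_MAX - T_REF) * np.exp(-dt / TAU_TH)

def switching_probability(dt):
    """P_sw = 1 - exp(-tau * f0 * exp(-E_B / (k_B * T))) at T = T(dt)."""
    temp = temperature(dt)
    return 1.0 - np.exp(-TAU_PULSE * F0 * np.exp(-E_B / (K_B * temp)))

for dt in (1e-8, 5e-8, 1e-7, 2e-7):
    print(f"dt = {dt:.0e} s -> P_sw = {switching_probability(dt):.2f}")
```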
\begin{table}
\begin{tabular}{c c c} \hline Parameters & Description & Default Values \\ \hline \(M_{s}\) & Saturation magnetization (A/m) & \(1\times 10^{6}\) \\ \(\alpha_{H}\) & Spin Hall angle & 0.07 \\ \(\alpha\) & Gilbert damping factor & 0.03 \\ \(H_{x}\) & External magnetic field (A/m) & \(1\times 10^{4}\) \\ \(A_{ex}\) & Exchange coefficient (J/m) & \(1\times 10^{-11}\) \\ \(H_{k}\) & Perpendicular anisotropy field (A/m) & \(1\times 10^{5}\) \\ \(T_{ref}\) & Reference temperature (K) & 293.15 \\ \(h_{g}\) & Gap heat conductance (W/(m\({}^{2}\cdot\)K)) & \(6\times 10^{8}\) \\ \(C_{Pt}\) & Heat capacity of Pt (J/(kg\(\cdot\)K)) & 500 \\ \(C_{Co}\) & Heat capacity of Co (J/(kg\(\cdot\)K)) & 420 \\ \(C_{Ta}\) & Heat capacity of Ta (J/(kg\(\cdot\)K)) & 140 \\ \(\rho_{Pt}\) & Resistivity of Pt (\(\mu\Omega\cdot\)cm) & 30 \\ \(\rho_{Co}\) & Resistivity of Co (\(\mu\Omega\cdot\)cm) & 40 \\ \(\rho_{Ta}\) & Resistivity of Ta (\(\mu\Omega\cdot\)cm) & 200 \\ \hline \end{tabular}
\end{table}
Table 1: Simulation parameters.
## 3 Neuronal dynamics theory of SNN
### Leaky integrate-and-fire model of neurons
The SNNs, as a type of ANNs, consist of several layers of spiking neurons interconnected by synapses following the STDP rule. The information transferred between different neurons is carried by electrical spikes (_i.e._, spike pulses) through synapses. As illustrated in Fig. 3(a) for a simplified neuronal system, the grey circle represents a neuron (e.g., a postsynaptic neuron), which connects with three other neurons (e.g., presynaptic neurons) via synapses. The difference between the interior and exterior electric potentials of a neuron is called the membrane potential \(V\). It usually remains constant at the resting membrane potential \(V_{rest}\) until the neuron is stimulated by spikes. As shown in Fig. 3(b), the membrane potential \(V\) as a function of time can be described by the leaky integrate-and-fire (LIF) model [32]:
\[\tau\frac{dV}{dt}=(V_{rest}-V)+g_{e}(V_{exc}-V)+g_{i}(V_{inh}-V) \tag{2}\]
where \(V_{rest}\) is the resting membrane potential, \(V_{exc}\) and \(V_{inh}\) are the equilibrium potentials of excitatory and inhibitory synapses, \(g_{e}\) and \(g_{i}\) are the conductances of excitatory and inhibitory synapses, and \(\tau\) is a time constant, respectively. In the process of transferring information, when a neuron receives spikes from other neurons through synapses, its membrane potential jumps abruptly and then decreases exponentially with time [top half of Fig. 3(b)], as described by Eq. 2. In particular, a neuron emits a spike (_i.e._, the neuron fires) when its membrane potential exceeds the threshold voltage \(V_{threshold}\); \(V\) is then reset to \(V_{reset}\), and the neuron cannot be stimulated during a refractory period.
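A forward-Euler discretisation of Eq. (2), with synaptic conductances that jump on incoming spikes and decay exponentially in between, could look like the sketch below; all numerical values are illustrative assumptions rather than the parameters used in this work.

```python
import numpy as np

def lif_step(v, g_e, g_i, in_e, in_i, dt=1e-4, tau=0.1, tau_g=5e-3,
             v_rest=-65e-3, v_exc=0.0, v_inh=-100e-3,
             v_th=-52e-3, v_reset=-65e-3):
    """One Euler step of Eq. (2) for an array of neurons. in_e / in_i are the
    summed weighted excitatory / inhibitory input spikes of this time step."""
    g_e = g_e * np.exp(-dt / tau_g) + in_e    # conductances jump on spikes
    g_i = g_i * np.exp(-dt / tau_g) + in_i    # and leak back towards zero
    v = v + dt / tau * ((v_rest - v) + g_e * (v_exc - v) + g_i * (v_inh - v))
    fired = v >= v_th
    v = np.where(fired, v_reset, v)           # reset; refractory period omitted
    return v, g_e, g_i, fired
```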
### STDP rule of synapses
In a neural network, a postsynaptic neuron connects with others through a number of synapses. The neuron will fire when its membrane potential reaches the threshold after receiving successive spikes from other connected neurons through synapses, as shown in Fig. 3(b). The effect of spikes on the postsynaptic neuron is determined by the strength of the synapses, _i.e._, the weight. In other words, the generated variation of \(V\) of the postsynaptic neuron by spikes from the presynaptic neurons is proportional to the connecting weights of the synapses. The learning process is the synaptic weight adaptation following the STDP rule discussed below. If the firing time of the presynaptic neuron (\(t_{pre}\)) is before the firing time of the postsynaptic neuron (\(t_{post}\)), the weight of this synapse will increase, or else the weight will decrease. This firing time sequence expresses the causality of the neural network. In general, the value of the updated synaptic weight \(\Delta w\) is a function of the relative timing between presynaptic and postsynaptic neuron firing [33, 34]:
\[\Delta w=\begin{cases}A^{+}e^{-\Delta t/\tau_{+}},&\Delta t>0\\ A^{-}e^{\Delta t/\tau_{-}},&\Delta t\leqslant 0\end{cases} \tag{3}\]
where \(\Delta t=t_{post}-t_{pre}\), \(A_{+}>0\) and \(A_{-}<0\) are constants, and the time constants \(\tau_{+}\), \(\tau_{-}>0\). The top and bottom terms in Eq. 3 are the long-term potentiation (LTP) and long-term depression (LTD) terms, which correspond to the top-right curve and bottom-left curve in Fig. 3(c), respectively. To implement this STDP rule of synapses using a physical device or material, we adopt the concept of a "trace" to express the value of the synaptic weight change \(\Delta w\), in the same way as previous works using the Python package Brian 2 [35]. As shown in Fig. 3(d), the two traces \(a_{pre}\) and \(a_{post}\) correspond to presynaptic and postsynaptic activity, respectively. Suppose that a presynaptic neuron fires: \(a_{pre}\) increases by \(A_{+}\) abruptly and then declines exponentially with time. In contrast, \(a_{post}\) will decrease by \(|A_{-}|\) and fade exponentially with time when the postsynaptic neuron fires. As a result, the update rule of the synaptic weight with time can be implemented by the two physical "trace" curves \(a_{pre}\) and \(a_{post}\). In other words, the magnetization switching probability \(P_{SW}\) of the SOT device is a function of the interval time \(\Delta t\), as shown in Fig. 2(c).
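In code, the trace mechanism reduces to two leaky variables that jump on spikes, with the weight moving by the opposite trace at each spike time. A minimal per-synapse sketch follows; the decay constants and amplitudes are illustrative assumptions, and the weight is clipped to the two-level range [0.01, 1] used later.

```python
import numpy as np

def stdp_step(w, a_pre, a_post, pre_spike, post_spike, dt=1e-4,
              tau_pre=20e-3, tau_post=20e-3, a_plus=0.01, a_minus=-0.012,
              w_min=0.01, w_max=1.0):
    """Trace-based STDP realising Eq. (3): a_pre carries the LTP branch,
    a_post the LTD branch, each decaying exponentially between spikes."""
    a_pre *= np.exp(-dt / tau_pre)
    a_post *= np.exp(-dt / tau_post)
    if pre_spike:                  # pre fires after post (dt < 0): depression
        a_pre += a_plus
        w = np.clip(w + a_post, w_min, w_max)
    if post_spike:                 # post fires after pre (dt > 0): potentiation
        a_post += a_minus
        w = np.clip(w + a_pre, w_min, w_max)
    return w, a_pre, a_post
```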
Figure 2: Micromagnetic and COMSOL simulation results. (a) The magnetization \(M_{z}\) evolution of the SOT device with the input current pulse. The width of the current pulse is 1.5 ns, and the period is 2.5 ns. To easily illustrate the device switching probability, \(M_{z}\) is reset to the initial state before each new repeat pulse. (b) The temperature of the device increases to 563 K and then decreases exponentially to 293 K in 0.25 \(\mu\)s when a square current pulse with \(I_{reset}\) = -12 mA is applied to the device. (c) Magnetization switching probability \(P_{sw}\) as a function of the interval time \(\Delta t\), defined in the text, at \(I_{p}\) = 1 mA and 1.2 mA. The solid lines are the exponential function fitting curves.
Therefore, we can use the obtained switching probability _vs._ \(\Delta t\) curves of the SOT device as the STDP rule of the SNNs. As shown in Fig. 3(c), for the standard STDP curve, \(\Delta t\) is in analogy to the interval time between the fired neurons, and the weight change \(\Delta W\) is reflected by the switching probability of the SOT device, which is proportional to its electrical conductance. Below we use two tasks, unsupervised handwritten digit recognition and logic operation learning, as examples to test and verify the performance of the SNN built with the SOT device.
## 4 Results and discussion
### SOT devices as synapses for handwritten digit recognition
We build the physical SNN with SOT devices under the STDP rule to test its performance on the standard handwritten digit recognition task using the Modified National Institute of Standards and Technology (MNIST) dataset [36, 37]. Figure 4(a) shows the process flow diagram of the SNN for handwritten digit recognition, which consists of an input layer containing 28\(\times\)28 neurons corresponding to the 28\(\times\)28 pixels of a handwritten digit image, an excitatory layer (EL) of 100 neurons, and an inhibitory layer (IL) with corresponding 100 neurons providing lateral inhibition to the 100 neurons in the excitatory layer. For the physical SNN, previous reports have shown that the neurons can be implemented by CMOS, memristors, or other spintronic nanodevices [38, 39, 40, 41]. The neurons of the input layer are connected to those of the EL with synapses based on SOT devices in an all-to-all fashion. For an MNIST digit image, e.g., the digit "4", each neuron of the input layer encodes the pixel value of the image into the form of Poisson-distributed spike temporal sequences of 350 \(\mu\)s, and the firing rate of the neuron is proportional to the pixel value. Then the spike temporal sequences, controlled by transistors, pass through the STDP synapses and act on the EL. Figure 4(b) is a sketch of the SNN architecture. In the training phase, since all synaptic weights have random initial values, the EL neurons will fire at different rates for a particular image "4" as the input. The neuron with the highest firing rate is assigned to the output digit "4" represented by the input image. Meanwhile, the neuron marked as "4" sends a spike to the corresponding neuron in the IL and fires it to inhibit all other 99 neurons in the EL, realizing the winner-take-all mechanism in the SNN.
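The rate coding of the input layer can be sketched as below, where each pixel drives an independent Poisson spike train over the 350 \(\mu\)s window; the time step and the maximum firing rate are illustrative assumptions.

```python
import numpy as np

def poisson_encode(image, duration=350e-6, dt=1e-6, max_rate=6e4, rng=None):
    """Encode 0..255 pixel intensities as Poisson spike trains whose firing
    rate is proportional to the pixel value. Returns a (steps, pixels) bool
    array in which spikes[t, i] means input neuron i fires in time bin t."""
    rng = rng or np.random.default_rng(0)
    rates = image.astype(float).ravel() / 255.0 * max_rate   # Hz per pixel
    steps = int(round(duration / dt))
    return rng.random((steps, rates.size)) < rates * dt

spikes = poisson_encode(np.random.default_rng(1).integers(0, 256, (28, 28)))
print(spikes.shape)   # (350, 784)
```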
In principle, higher accuracy requires more EL neurons to carry the features of the different handwritten images representing the same digit. However, considering the trade-off between the benefit in recognition accuracy and the cost of computation, we adopt 100 EL neurons and the corresponding 100 IL neurons for this test case. Figure 4(c) shows the trained weights of the all-to-all connected synapses between the input layer with 28\(\times\)28 neurons and the 100 EL neurons, which are rearranged into 10\(\times\)10 grids where each grid contains 28\(\times\)28 weight values. These weights can only take two specific values, 0.01 and 1, which can be implemented using the two Hall resistance states of the SOT devices. The Hall resistance states are determined by the STDP rules based on the switching probability. After the training process, all synaptic weights (corresponding to the magnetization configurations of the SOT device arrays) are determined and fixed. In the test phase, a new test image is fed into the SNN with the determined weights. The marked digit whose EL neurons have the highest average firing rate is taken as the output result. Figure 4(d) shows the comparison between the predicted results and the desired outputs for the 10 digits. One can see that the digits "7" and "9" exhibit a relatively high recognition error rate due to their high shape similarity in random handwritten forms. Figure 4(e) shows that this SNN with 100 EL neurons can reach better than 80% accuracy in the unsupervised handwritten digit recognition task when the number of training samples is over 4000.
Figure 3: (a) Schematic representation of the neuron (grey circle) and three synapses (colored arrows) stimulated by the input spike trains (pre\({}_{i}\) (\(i\)=0, 1, and 2)). (b) The temporal course of the membrane potential of a neuron in (a). The neuron receives the spike sequences from presynaptic neurons, increasing the membrane potential. Once the membrane potential is over the threshold \(V_{threshold}\), the neuron will emit a spike followed by a short refractory period. (c) Weight change curves for the standard STDP. The curve in the top right corner represents the LTP, corresponding to the fitting curve of \(I_{p}=1.2\) mA in Fig. 2(c) without the constant term, while the LTD curve in the bottom left corner corresponds to the fitting curve of \(I_{p}=1\) mA in Fig. 2(c). (d) The \(a_{pre}\) and \(a_{post}\) are the "traces" of presynaptic and postsynaptic activity. The \(a_{pre}\) or \(a_{post}\) increases abruptly when the presynaptic or postsynaptic neuron fires and then decreases slowly with time. At the firing point of the postsynaptic neuron, the weight change of the synapse is equal to \(a_{post}\) - \(a_{pre}\).
### SOT devices as synapses for logic operation learning
Besides the handwritten digit recognition task above, we also choose logic operation learning to explore additional functionality of our artificial SNN with SOT devices [42]. XOR is one of the sixteen possible binary Boolean operations. Here, we choose XOR learning as an example to illustrate how the SNN works. The other fifteen logic operations (e.g., NAND and NOR) can also be easily achieved in a similar way by our proposed physical SNN. Figure 5(a) shows the SNN architecture for the binary logic operation
Figure 4: (a) Cross-bar array of SOT devices for pattern recognition based on the SNN. The architecture contains three kinds of neurons and synapses implemented by SOT devices. The intensity values of the \(28\times 28\) pixels of an MNIST image (e.g., the digit 4) are converted to Poisson spike trains with firing rates proportional to the intensity of the corresponding pixel. Then the SOT devices, as the synapses, receive the input pulse signals through controlling transistors and transmit the weighted pulses to the excitatory neurons. (b) The schematic of the SNN for pattern recognition. (c) Table of handwritten digits with \(10\times 10\) grids, where each grid displays a digit with \(28\times 28\) pixels, as the learning result of the synaptic weights \(w\) between the neurons in the input and excitatory layers. The synaptic weight \(w\) can only take the two specific values \(0.01\) (represented by white) and \(1\) (represented by black), which correspond to the two magnetization states of the SOT devices. (d) Handwritten digit recognition accuracy rates sampled over 10,000 MNIST test-set images. The \(x\)-axis is the desired digit and the \(y\)-axis is the predicted result. (e) The recognition accuracy rate as a function of the training sample count.
Figure 5: (a) SNN hierarchical architecture for logic operation learning. The input layer converts the logic variables to spikes for subsequent learning, and the guide layer contains excitatory and inhibitory neurons, which control the membrane voltage of the output neurons in the training phase. (b) The data used for the XOR operation in the training phase. (c) The update process of the synapse weights \(W_{0}\) – \(W_{7}\) between the hidden and output layers while feeding 2000 training samples. All eight weights are determined after the number of training samples N \(\geqslant\) 1500, indicating that the SNN completes its learning or training procedure. (d) 20 pairs of random logic numbers as inputs and the desired results of the XOR operation in the testing phase. (e) The membrane voltage of all neurons in the hidden layer with the input sequences. A neuron fires only when both fired input neurons are connected to it. (f) The membrane voltage of all output neurons in the test phase. When the output is 0, the corresponding neuron-0 will emit a spike as the result, and vice versa.
learning task, which consists of an input layer containing 2\(\times\)2 neurons corresponding to two Boolean inputs, each with the two states "True" and "False", a hidden layer (HL) with four hidden neurons, and an output layer (OL) with two output neurons labeled "1" and "0" corresponding to the binary results of a Boolean output. Different from the conventional binary encoding method in SNNs, where a neuron emitting a spike usually represents logic "True" and no spike indicates logic "False", here we define specific neurons in the input and output layers as "True" or "False" neurons beforehand, as labeled by "1" and "0", respectively [Fig. 5(a)]. For example, if the neuron labeled "0" ("1") in the dotted oval emits a spike, the input is a logic "False" ("True"). The same holds for the output layer. The function of the two excitatory neurons ("\(e_{0}\)", "\(e_{1}\)") and two inhibitory neurons ("\(i_{0}\)", "\(i_{1}\)") in the guide layer (GL) is to train the synaptic weights between the HL and OL in the training phase. The firing times of the GL neurons "\(e_{0,1}\)" and "\(i_{0,1}\)" are controlled by an external supervision signal; therefore, the STDP function can be realized by adjusting the spike temporal sequences generated from the HL and GL. Here we illustrate this process with a specific example of both "False" ("0") inputs and the XOR "False" output, as shown in Fig. 5(b). The XOR "False" output means that neuron-0 needs to be fired in the OL. Therefore, all weights between the HL and neuron-0 need to be enhanced, and all weights from the HL to neuron-1 need to be reduced. To achieve this desired adjustment of weights, the neuron \(e_{0}\) (\(i_{1}\)) is controlled by a supervision signal to emit a spike behind (before) the spike fired by the HL, corresponding to the STDP curve at \(\Delta t>0\) (\(\Delta t<0\)) [Fig. 3(d)]. We use \(N\) to represent the number of input samples. Figure 5(c) shows the evolution of the eight synaptic weights with an increasing number of training samples. One can see that all weights become stable (0.01 or 1) after \(N\geqslant\)1500. In the testing phase, as shown in Fig. 5(d), we choose 20 pairs of random logic numbers as the input to test the SNN with the determined weights. Figures 5(e) and 5(f) show the membrane voltage of all neurons in the HL and OL as a function of the input sequences. In the hidden layer, a neuron fires only when both fired input neurons are connected to it. The labeled logic variable of the fired neuron in the output layer is the testing result of the XOR operation; e.g., the given result is 0 if neuron-0 of the output layer fires. It is observed that the sequence of the fired neurons in the OL, shown in Fig. 5(f), is well consistent with the desired output in Fig. 5(d), proving the validity of the logic operation learning with our SOT-device-based SNN.
## 5 Summary
In summary, we propose an artificial synapse numerical model based on a Pt/Co/Ta SOT device and use it to build physical SNNs. In these physical SNNs, the adaptation of the synaptic weights follows the STDP behavior of the SOT device, which is obtained by micromagnetic simulation of its stochastic magnetization switching probability, tunable by the current pulses and their interval time through the current-induced SOT and Joule heating effects. Our physical SNNs exhibit good performance in unsupervised handwritten digit recognition with over 80% accuracy and in logic operation learning. Our work offers a new clue for the spintronic hardware implementation of neuromorphic computing systems.
_This work was supported by the National Natural Science Foundation of China (Grant No. 12074178), and the Open Research Fund of the Jiangsu Provincial Key Laboratory for Nanotechnology._
|
2304.09590 | Parallel Neural Networks in Golang | This paper describes the design and implementation of parallel neural
networks (PNNs) with the novel programming language Golang. We follow in our
approach the classical Single-Program Multiple-Data (SPMD) model where a PNN is
composed of several sequential neural networks, which are trained with a
proportional share of the training dataset. We used for this purpose the MNIST
dataset, which contains binary images of handwritten digits. Our analysis
focusses on different activation functions and optimizations in the form of
stochastic gradients and initialization of weights and biases. We conduct a
thorough performance analysis, where network configurations and different
performance factors are analyzed and interpreted. Golang and its inherent
parallelization support proved very well for parallel neural network simulation
by considerable decreased processing times compared to sequential variants. | Daniela Kalwarowskyj, Erich Schikuta | 2023-04-19T11:56:36Z | http://arxiv.org/abs/2304.09590v1 | # Parallel Neural Networks in Golang
###### Abstract
This paper describes the design and implementation of parallel neural networks (PNNs) with the novel programming language Golang. In our approach we follow the classical Single-Program Multiple-Data (SPMD) model, where a PNN is composed of several sequential neural networks, which are trained with a proportional share of the training dataset. We used for this purpose the MNIST dataset, which contains binary images of handwritten digits. Our analysis focusses on different activation functions and optimizations in the form of stochastic gradients and initialization of weights and biases. We conduct a thorough performance analysis, where network configurations and different performance factors are analyzed and interpreted. Golang and its inherent parallelization support proved very suitable for parallel neural network simulation, with considerably decreased processing times compared to sequential variants.
Keywords: Backpropagation · Neuronal Network Simulation · Parallel and Sequential Implementation · MNIST · Golang Programming Language
## 1 Introduction
When reading a letter, our trained brain rarely has a problem understanding its meaning. Inspired by the way our nervous system perceives visual input, the idea emerged to build a mechanism that could "learn" and furthermore use this "knowledge" on unknown data. Learning is accomplished by repeating exercises and comparing results with given solutions. The neural network studied in this paper uses the MNIST dataset to train and test its capabilities. The actual learning is achieved by using backpropagation. In the course of our research, we concentrate on a single sequential feed forward neural network (SNN) and upgrade it into building multiple, parallel learning SNNs. Those parallel networks are then fused into one parallel neural network (PNN). These two types of networks are compared on their accuracy, confidence, computational performance, and learning speed, i.e., the time it takes the networks to learn the given task.
The specific contribution of the paper is twofold: on the one hand, a thorough analysis of sequential and parallel implementations of feed forward neural networks with respect to time, accuracy and confidence, and on the other hand, a feasibility study of Golang [9] and its tools for parallel simulation.
The structure of the paper is as follows: In the next section, we give a short overview of related work. Section 3 presents the mathematical fundamentals of neural networks. The parallelization approach is laid out in section 4, followed by the description of the Golang implementation. A comprehensive analysis of the sequential and parallel neural networks with respect to accuracy, confidence, computational performance and learning speed is presented in section 5. Finally, the paper closes with a summary of the findings.
## 2 Related Work and Baseline Research
Artificial neural networks and their parallel simulation gained high attention in the scientific community. Parallelization is a classic approach for speeding up execution times and exploiting the full potential of modern processors. Still, not every algorithm can profit from parallelization, as the concurrent execution might add a non-negligible overhead. This can also be the case for data parallel neural networks, where accuracy problems usually occur, as the results have to be merged.
In the literature a huge number of papers on parallelizing neural networks can be found. An excellent source of references is the survey by Tal Ben-Nun and Torsten Hoefler [1]. However, only few research was done on using Golang in this endeavour.
In the following only specific references are listed, which influenced the presented approach directly. The authors of [8] presented a parallel backpropagation algorithm dealing with the accuracy problem only by using a MapReduce and Cascading model. In the course of our work on parallel and distributed systems [16, 2, 14] we developed several approaches for the parallelization of neural networks. In [6], two novel parallel training approaches were presented for face recognizing backpropagation neural networks. The authors use the OpenMP environment for classic CPU multithreading and CUDA for parallelization on GPU architectures. Aside from that, they differentiated between topological data parallelism and structural data parallelism [15], where the latter is focus of the presented approach here. [10] gave a comparison of different parallelization approaches on a cluster computer. The results differed depending on the network size, data set sizes and number of processors. Besides parallelizing the backpropagation algorithm for training speed-up, alternative training algorithms like the Resilient Backpropagation described in [13] might lead to faster convergence. One major difference to standard backpropagation is that every weight and bias has a different and variable learning rate. A detailed comparison of both network training algorithms was given in [12] in the case of spam classification.
## 3 Fundamentals
In the following we present the mathematical fundamentals of neural networks to allow for easier understanding and better applicability of our implementation approach described afterwards.
#### 3.0.1 Forwardpropagation
To calculate an output in the last layer, the input values need to get propagated through each layer. This process is called forward propagation and is done by applying an activation function on each neuron's corresponding input sum. The input sum \(z\) for a neuron \(k\) in the layer \(l\) is the sum of each neuron's activation \(a\) from the last layer multiplied with the weight \(w\):
\[z_{k}^{l}=\sum_{j}w_{kj}^{l}a_{j}^{l-1}+b_{k}^{l} \tag{1}\]
The additional term \(+b\) stands for the bias value, which allows the activation function to be shifted to the left or to the right. For better readability, the input sums for a whole layer can be stored in a vector \(z\) and defined by:
\[z^{l}=W^{l}x^{l-1}+b^{l} \tag{2}\]
Here, \(W^{l}\) is a weight matrix storing all weights to layer \(x^{l}\). To obtain the output of a layer, or, in case of the last layer \(x^{L}\), the output of a neural network, an activation function \(\varphi\) needs to be applied:
\[x^{l}=\varphi(z^{l})=\varphi(W^{l}x^{l-1}+b^{l}) \tag{3}\]
Activation functions do not have to be unique in a network and can be combined. The implementation presented in this paper uses the rectifier activation function
\[\varphi_{rectifier}(z)=\begin{cases}0&\text{if }z<0\\ z&\text{if }z\geq 0\end{cases} \tag{4}\]
for hidden neurons and the softmax activation function
\[\varphi_{softmax}(z_{i})=\frac{e^{z_{i}}}{\sum_{j}e^{z_{j}}} \tag{5}\]
for output neurons. For classification, each class is represented by one neuron in the last layer. Due to the softmax function, the output values of those neurons sum up to 1 and can therefore be seen as the probabilities of being that class.
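To make the forward pass concrete, the following is a minimal sketch in plain Go. It is illustrative only: the actual implementation described later uses Gonum matrices, and all names here (\(Layer\), \(forward\), \(relu\), \(softmax\)) are chosen for exposition.

```go
package nn

import "math"

// Layer holds one layer's weight matrix and bias vector.
type Layer struct {
	W [][]float64 // W[k][j]: weight from neuron j of the previous layer to neuron k
	B []float64   // B[k]: bias of neuron k
}

// forward computes x^l = phi(W x^{l-1} + b) for one layer, cf. Eq. (3).
func forward(l Layer, x []float64, phi func([]float64) []float64) []float64 {
	z := make([]float64, len(l.B))
	for k := range l.W {
		sum := l.B[k] // the bias is added once per neuron
		for j, w := range l.W[k] {
			sum += w * x[j]
		}
		z[k] = sum
	}
	return phi(z)
}

// relu implements the rectifier of Eq. (4) element-wise.
func relu(z []float64) []float64 {
	out := make([]float64, len(z))
	for i, v := range z {
		if v > 0 {
			out[i] = v
		}
	}
	return out
}

// softmax implements Eq. (5); the outputs sum to 1 and can be read
// as class probabilities.
func softmax(z []float64) []float64 {
	out := make([]float64, len(z))
	var denom float64
	for _, v := range z {
		denom += math.Exp(v)
	}
	for i, v := range z {
		out[i] = math.Exp(v) / denom
	}
	return out
}
```

A full network forward pass then simply chains \(forward\) calls, using \(relu\) on hidden layers and \(softmax\) on the output layer.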
#### 3.0.2 Backpropagation
For proper classification the network has to be trained beforehand. In order to do that, a cost function tells us how well the network performs, like the cross entropy error with expected outputs \(e\) and actual outputs \(x\),
\[C=-\sum_{i}e_{i}log(x_{i}) \tag{6}\]
The aim is to minimize the cost function by finding the optimal weights and biases with the gradient descent optimization algorithm. Therefore, a training instance gets forward propagated through the network to get an output. Subsequently, it is necessary to compute the partial derivatives of the cost function with respect to each weight and bias in the network:
\[\frac{\partial C}{\partial w_{kj}}=\frac{\partial C}{\partial z_{k}}\frac{ \partial z_{k}}{\partial w_{kj}} \tag{7}\]
\[\frac{\partial C}{\partial b_{k}}=\frac{\partial C}{\partial z_{k}}\frac{\partial z_{k}}{\partial b_{k}} \tag{8}\]
As a first step, \(\frac{\partial C}{\partial z_{k}}\) needs to be calculated for every neuron \(k\) in the last layer \(L\):
\[\delta^{L}_{k}=\frac{\partial C}{\partial z^{L}_{k}}=\frac{\partial C}{ \partial x^{L}_{k}}\varphi^{\prime}(z^{L}_{k}) \tag{9}\]
In case of the cross entropy error function, the error signal vector \(\delta\) of the softmax output layer is simply the actual output vector minus the expected output vector:
\[\delta^{L}=\frac{\partial C}{\partial z^{L}}=x^{L}-e^{L} \tag{10}\]
To obtain the errors for the remaining layers of the network, the output layer's error signal vector \(\delta^{L}\) has to be propagated back through the network, hence the name of the algorithm:
\[\delta^{l}=(W^{l+1})^{T}\delta^{l+1}\odot\varphi^{\prime}(z^{l}) \tag{11}\]
\((W^{l+1})^{T}\) is the transposed weight matrix, \(\odot\) denotes the Hadamard product or entry-wise product and \(\varphi^{\prime}\) is the first derivative of the activation function.
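Equations (10) and (11) translate into a few lines of Go. The sketch below stays in the illustrative style used above; it computes the error signal of the softmax output layer and of a hidden layer.

```go
package nn

// outputDelta computes Eq. (10): for softmax with cross entropy, the error
// signal of the last layer is the actual output minus the expected output.
func outputDelta(x, e []float64) []float64 {
	d := make([]float64, len(x))
	for i := range x {
		d[i] = x[i] - e[i]
	}
	return d
}

// hiddenDelta computes Eq. (11). wNext[k][j] holds the weight from neuron j
// in layer l to neuron k in layer l+1, so the transpose is realized by
// summing over k for a fixed j.
func hiddenDelta(wNext [][]float64, deltaNext, z []float64,
	phiPrime func(float64) float64) []float64 {
	d := make([]float64, len(z))
	for j := range d {
		var sum float64
		for k := range wNext {
			sum += wNext[k][j] * deltaNext[k]
		}
		d[j] = sum * phiPrime(z[j]) // Hadamard product with phi'(z^l)
	}
	return d
}
```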
#### 3.0.3 Gradient Descent
Knowing the error of each neuron, the changes to the weights and biases can be determined by
\[\Delta w^{l}_{kj}=-\eta\frac{\partial C}{\partial w^{l}_{kj}}=-\eta\delta^{l }_{k}x^{l-1}_{j} \tag{12}\]
\[\Delta b^{l}_{k}=-\eta\frac{\partial C}{\partial b_{k}}=-\eta\delta^{l}_{k} \tag{13}\]
The constant \(\eta\) is used to regulate the strength of the changes applied to the weights and biases and is also referred to as the learning rate, \(x^{l-1}_{j}\) stands for the output of the \(j^{th}\) neuron from layer \(l-1\). The changes are applied by adding them to the old weights and biases. Depending on the update frequency, a distinction is made between stochastic gradient descent, batch gradient descent and mini-batch gradient descent. In the case of the first-mentioned, the weights and biases are updated after every training instance (by repeating all of the aforementioned steps instance-wise). In contrast, batch gradient descent stands for updating only once after accumulating the gradients of all training samples. Mini-batch gradient descent is a combination of both. The weights and biases are updated after a specified amount, the _mini-batch size_, of training instances. As with batch gradient descent, the gradients of all instances are averaged before the updates.
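A hedged sketch of the mini-batch update of Eqs. (12) and (13), reusing the illustrative \(Layer\) struct from the forward-propagation sketch; \(gradW\) and \(gradB\) are assumed to hold the gradients \(\delta_{k}x_{j}\) and \(\delta_{k}\) accumulated over one mini-batch.

```go
package nn

// applyGradients performs the updates of Eqs. (12) and (13) for one layer,
// averaging the accumulated gradients over the mini-batch before applying
// them with learning rate eta.
func applyGradients(l *Layer, gradW [][]float64, gradB []float64,
	eta float64, batchSize int) {
	scale := eta / float64(batchSize)
	for k := range l.W {
		for j := range l.W[k] {
			l.W[k][j] -= scale * gradW[k][j] // Delta w = -eta * delta_k * x_j
		}
		l.B[k] -= scale * gradB[k] // Delta b = -eta * delta_k
	}
}
```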
## 4 Parallel Neural Networks
This section describes the technology stack, the parallelization model and implementation details of the provided PNN.
### Technology Stack
Go, often referred to as Golang, is a compiled, statically typed, open source programming language developed by a team at Google and released in November 2009. It is distributed under a BSD-style license, meaning that copying, modifying and redistributing is allowed under a few conditions.
As Andrew Gerrand, who works on the project, states in [9], Go grew from a dissatisfaction with the development environments and languages that they were using at Google. It is designed to be expressive, concise, clean and efficient. Hence, Go compiles quickly and is as easy to read as it is to write. This is partly because of gofmt, the go source code formatter, that gives Go programmes a single style and relieves the programmers from discussions like where to set the braces. As uniform presentation makes code easier to read and therefore to work on, gofmt also saves time and affects the scalability of programming teams [11]. The integrated garbage collector offers another great convenience and takes away the time consuming efforts on memory allocation and freeing known from C/C++. Despite the known overhead and criticism about Java's garbage collector, the author of [11] claims that Go is different, more efficient and that it is almost essential for a concurrent language like Go because of the trickiness that can result from managing ownership of a piece of memory as it is passed around among concurrent executions. That being said, built-in support for concurrency is one of the most interesting aspects of Go, offering a great advantage over older languages like C++ or Java. One major component of Go's concurrency model are goroutines, which can be thought of as lightweight threads with a negligible overhead, as the cost of managing them is cheap compared to threads. If a goroutine blocks, the runtime automatically moves any blocking code away from being executed and executes some code that can run, leading to high-performance concurrency [9]. Communication between goroutines takes place over channels, which are derived from "Communicating Sequential Processes" found in [5]. A Channel can be used to send and receive messages from the type associated with it. Since receiving can only be done when something is being sent, channels can be used for synchronization, preventing race conditions by design.
Another difference to common object oriented programming languages can be found in Go's object oriented design. Its approach misses classes and type-based inheritance like subclassing, meaning that there is no type hierarchy. Instead, Go features polymorphism with interfaces and struct embedding and therefore encourages the composition over inheritance principle. An Interface is a set of methods, which is implemented implicitly by all data types that satisfy the interface [11].
For the rest, files are organized in packages, with every source file starting with a package statement. Packages can be used by importing them via their unique path. If a package path in the form of an URL refers to a remote repository, the remote package can be fetched with the _go get_ command and subsequently imported like a local package. Additionally, Go will not compile, if unused packages are being imported.
### Parallelization Model
For the parallelization of neural network operations we apply the classical Single-Program Multiple-Data (SPMD) approach well known from high-performance computing [3]. It is a programming technique where several tasks execute the same program but with different input data, and the calculated output data is merged to a common result. Thus, based on the fundamentals of a single feed forward neural network, we generate multiple such networks and set them up to work together in a parallel manner.
The parallel design is visualized in figure 1. On the bottom it shows the dataset, which is divided into as many slices as there are networks, referred to as child-networks (CN). Each child-network learns only a slice of the dataset. Ultimately the results of all parallel child-networks are merged into one final parallel neural network (PNN). The combination of those CNs can be done in various ways. In the presented network, the average of all weights, calculated by each parallel CN after a set number of epochs, is used for the PNN's weights. For the biases the same procedure is used, i.e. averaging all biases for the combined bias values.

Figure 1: Design of a Parallel Neural Network
In Golang it is important to take into consideration that a program which is designed in parallel does not necessarily work in a parallel manner, as a concurrent program can be parallel, but doesn't have to be. The language offers goroutines; a goroutine "is a function executing concurrently with other goroutines in the same address space" and is scheduled by the Go runtime. To start a goroutine, a \(go\ func\) statement is used. It can be equipped with a WaitGroup, which ensures that the process does not finish until all running goroutines are done. More about the implementation is explained in the next section.
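The following minimal Go program illustrates this goroutine/WaitGroup pattern for launching one child network per data slice. It is a generic sketch; the names and sizes are hypothetical and not taken from the provided implementation.

```go
package main

import "sync"

// train is a stand-in for one child network's training on its data slice.
func train(id int, slice []float64) { /* forward and backward passes ... */ }

func main() {
	data := make([]float64, 60000) // placeholder for the training set
	n := 10                        // number of child networks
	per := len(data) / n

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int, slice []float64) { // one goroutine per child network
			defer wg.Done()
			train(id, slice)
		}(i, data[i*per:(i+1)*per])
	}
	wg.Wait() // block until all child networks are done
}
```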
### Implementation Details
The main interface to which any trainable network binds is the \(TrainableNetwork\) interface. This interface is used throughout the whole learning and testing process. Parallel as well as simple neural networks implement this interface, which allows for easy and interchangeable usage of both network types throughout the code. Since a parallel neural network is built from multiple sequential neural networks (SNN), we start with the implementation of an SNN. The provided implementation of an SNN allows for a flexible network structure. For example, the number of layers and neurons, as well as the activation functions used on a layer, can be chosen freely. All information required for creating a network is stored within a \(NeuroConfig\) struct on a network instance. These settings can easily be adjusted in a configuration file; the default name is \(config.yaml\), located in the same directory as the executable.
A network is built out of layers. A minimal network is at least composed of an input layer and an output layer. Beyond this minimum, the hidden depth of a network can be freely adjusted by providing a desired number of hidden layers. Internally layers are represented by the \(NeuroLayer\) struct. A layer holds weights and biases which are represented by matrices. The Gonum package is used to simplify the implementation. It provides a matrix implementation as well as most necessary linear algebraic operations.
In the implementation, utility functions are provided for a convenient creation of new layers with initialized weights and biases. The library \(rand\) offers a function \(NormFloat64\), where the variance is set to 1 and the mean to 0 by default. Weights are randomly generated using that normal distribution, seeded by the current time in nanoseconds.
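A minimal sketch of this initialization, assuming a plain \([][]float64\) weight matrix rather than the Gonum matrices used in the actual implementation:

```go
package nn

import (
	"math/rand"
	"time"
)

// newWeights returns a rows x cols matrix of standard-normally distributed
// weights (mean 0, variance 1), seeded by the current time in nanoseconds.
func newWeights(rows, cols int) [][]float64 {
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	w := make([][]float64, rows)
	for i := range w {
		w[i] = make([]float64, cols)
		for j := range w[i] {
			w[i][j] = r.NormFloat64()
		}
	}
	return w
}
```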
The provided network supports several activation functions. The activation function is defined on a per layer basis which enables the use of several activations within one network.
A PNN is a combination of at least two SNN. The \(ParallelNetwork\) struct represents the PNN in the implementation. As SNNs are trained individually before being combined with the output network of a PNN, it is necessary to
keep the references to the network managed in a slice. In the context of a PNN the SNNs are referred to as child networks (CN).
In a PNN the training process is executed on all CNs in parallel using goroutines. First, the dataset is split according to the number of CNs. Afterwards, each CN is started in a goroutine together with its slice of the training dataset, so the mini-batches of every CN are executed in parallel. Within those mini-batches, another mutex-protected concurrent goroutine is started for forward and backpropagation. Installing a mutex ensures safe access to the data over multiple goroutines.
The last step of training is to combine those CNs into one PNN. The provided network uses the "average" approach as combination function. After training the CNs for a set number of epochs, their weights and biases are added onto the PNN. Ultimately these weights and biases are divided by the number of CNs. The result is the finished PNN.
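A hedged sketch of this averaging combination function, again reusing the illustrative \(Layer\) struct from the fundamentals section; the real implementation operates on Gonum matrices.

```go
package nn

// combine averages the weights and biases of all child networks into the
// parallel network, layer by layer.
func combine(children [][]Layer) []Layer {
	n := float64(len(children))
	out := make([]Layer, len(children[0]))
	for l := range out {
		ref := children[0][l]
		out[l] = Layer{W: zeros(len(ref.W), len(ref.W[0])), B: make([]float64, len(ref.B))}
		for _, cn := range children { // accumulate every child network's parameters
			for k := range cn[l].W {
				for j := range cn[l].W[k] {
					out[l].W[k][j] += cn[l].W[k][j] / n
				}
				out[l].B[k] += cn[l].B[k] / n
			}
		}
	}
	return out
}

// zeros allocates a rows x cols matrix initialized to 0.
func zeros(rows, cols int) [][]float64 {
	m := make([][]float64, rows)
	for i := range m {
		m[i] = make([]float64, cols)
	}
	return m
}
```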
## 5 Performance Evaluation
First, a PNN consisting of 10 CNs and an SNN are tested using different activation functions on the hidden layer, while always using the softmax function on the output layer. After deciding on an activation function, network configurations are tested. While the number of neurons is only observed and not thoroughly tested, the number of networks is evaluated on differently sized PNNs. Finally, the performance of both types of networks is compared with respect to time, accuracy, confidence and costs.
### MNIST Dataset
For our analysis, we use the MNIST dataset, which holds handwritten numbers and allows supervised learning. Using this dataset the network learns to read handwritten digits. Since learning is achieved by repeating a task, the MNIST dataset has a "training-set of 60,000 examples, and a test-set of 10,000 examples" [7]. Each dataset is composed of an image-set and a label-set, which holds the information for the desired output and makes it possible to verify the network's output. All pictures are centered and uniform at 28x28 pixels. First, we start the training with the training-set. When the learning phase is over, the network is supposed to be able to fulfill its task [8]. To evaluate its efficiency, it is tested by running the neural network with the test-set, since the samples of this set are still unknown. It is important to use unseen data to test a network, since this is better suited to show the generalization of a network and therefore its true efficiency. We are aware that MNIST is a rather small data set. However, it was chosen on purpose, because it is used in many similar parallelization approaches and therefore allows for relatively easy comparison of results.
### Activation Functions in Single- and Parallel Neuronal Networks
To evaluate which function performs best in terms of accuracy for the implemented single and parallel neural networks, a test using the same network design and settings for each network is performed while changing only the function used on the hidden layer. The settings were one hidden layer built out of 256 neurons, a batch size of 50, a learning rate \(\eta\) of 0.05, and an output layer calculated with softmax. This is used on a single SNN and on a PNN consisting of 10 child-networks. Figure 2 presents the performance results of the activation functions. Each network's setup is one hidden layer, on which either the tangent hyperbolic, leaky ReLU, ReLU or sigmoid function was applied.
In this comparison the single neural network trained with the ReLU function, closely followed by the TanH function, reached the best result within 20 epochs. Testing different configurations showed that most activation functions reached higher accuracy when using small learning rates. Sigmoid is one function that proved to be most efficient when the learning rate is not too small: by raising the learning rate to 0.6, the sigmoid function's merit grows significantly on both network types. During testing, ReLU on the hidden layers in combination with softmax on the output layer proved to reliably deliver good results. That is why, in further sections, ReLU is applied on all networks' hidden layers and softmax on the output layer.
### Network Configurations
#### 5.3.1 Number of Neurons.
Choosing an efficient number of neurons is important, but it is hard to identify. There is no calculation which helps to define an effectively working number or range of neurons for a certain configuration of a neural network. Varying the number of neurons between 20 and 600 delivered great accuracy. These are only observations and need to be studied with a more sophisticated approach.

Figure 2: Comparison of the accuracy of a parallel vs. simple NN with different activation functions and a softmax function for the output layer. The networks have one hidden layer with 256 neurons and the training was performed with a learning rate of 0.05 and a batch size of 50 over 20 epochs.
#### 5.3.2 Number of Networks.
To evaluate the performance of PNNs in terms of accuracy, PNNs with different amounts of CNs are composed and trained. The training runs over 20 epochs with a learning rate of 0.1 and a batch size of 50. All CNs are built with one hidden layer consisting of 256 neurons. On the hidden layer the ReLU function and on the output layer the softmax function is used. After every epoch, the networks are tested with the test dataset. The results are visualized in figure 3.
Figure 3 illustrates a clear loss in accuracy of PNNs with a growing number of CNs. The 94.5% accuracy, for example, is reached by a PNN with 2 CNs after only one epoch, while a PNN with 30 CNs achieves that after 12 epochs. With respect to the number of networks, this graph shows that more is not always better. Considering that this test was only performed over a small number of epochs, it is not possible to judge the potential of a PNN with more CNs. To find out how well a PNN can perform, a test was run with three PNNs over 300 epochs:
Table 1 shows a steady growth until 200 epochs. After that, there is only a small fluctuation of accuracy, showing that a local minimum has been reached. Over the runtime of 300 epochs, the difference in performance regarding the accuracy of the PNNs has been reduced significantly. Still, the ranking of the PNNs has not changed: the PNNs built out of a smaller number of CNs perform slightly better. Since the provided PNNs are built by averaging weights and biases, it also seemed interesting to compare the average accuracy of the CNs with that of the resulting PNN, to grade the used combination function. The results are illustrated in figure 4.

Figure 3: Accuracy of PNNs, built with different amounts of CNs, over 20 epochs
It shows that the efficiency of the average function grows with the number of CNs. The first graph, drawn with 2 CNs, shows that the resulting PNN performs worse than the average of the CNs it has been built from. Growing the number of CNs to 10, the average of the CNs approximates the PNN. The last graph of this figure shows that a PNN composed of 20 CNs outperforms the average of its CNs after 200 epochs, and after 300 epochs levels with it. It has to be noted that the differences in accuracy are very small, in a range of only 0.1 to 0.2 percent. Overall it can be said that this combination function works efficiently.
### Comparing the Performances
#### 5.4.1 Time.
Time is the main reason to have a network working in parallel. To test the effect of parallelism on the time required to train a PNN, the provided neural network is tested on three systems.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline CNs of PNN & \multicolumn{6}{|c|}{Accuracy after...} \\ \cline{2-7} & 20 Epochs & 100 Epochs & 150 Epochs & 200 Epochs & 250 Epochs & 300 Epochs \\ \hline
2 & 97.76 & 98.08 & 98.13 & 98.16 & 98.14 & 98.17 \\ \hline
10 & 96.58 & 97.43 & 97.96 & 98.03 & 98.09 & 98.05 \\ \hline
20 & 95.69 & 97.50 & 97.71 & 97.92 & 97.90 & 97.97 \\ \hline \end{tabular}
\end{table}
Table 1: Accuracy behaviour for different epochs
Figure 4: Compare the average accuracy of all CNs, out of which the final PNN is formed, with that PNNs accuracy
The first system is equipped with 4 physical and 4 logical cores and an Intel i7-3635QM processor working with a basic clock rate of 2.4GHz; the second system holds 6 physical and 6 logical cores with an Intel i9-8950HK processor working at 2.9GHz; and the third system works with an AMD Ryzen Threadripper 1950X with 16 physical and 16 logical cores at a clock rate of 3.4GHz. The first, second and third systems are referred to as 4 core, 6 core and 16 core in the following.
In figure 5 the benefit in terms of time of using parallelism is clearly visible. The results show the average time in seconds needed by each system for training a PNN consisting of one CN per goroutine. For the diagram in figure 5, the percentage time requirements in comparison with the time needed using one goroutine are listed in table 2.
The time in figure 5 starts on a high level and decreases with an increasing number of goroutines for all three systems. Especially in the range of 1 to 4 goroutines, a formidable decrease in training time is visible, which only starts to level out when reaching a system's physical core limitation. This means that the 4 core starts to level out after 4 goroutines, the 6 core after 6 goroutines and the 16 core after 16 goroutines, even though all systems support hyper-threading. After reaching a system's core number, the average time necessary for training a neural network decreases further with more goroutines. This should be due to the ability to work in parallel and in concurrency: as one slot finishes, a waiting thread can start running immediately, without waiting for the rest of the running threads to be finished. All three systems show high time savings by parallelizing the neural networks. While time requirements decreased in every system, the actual time savings differ greatly, as the 16 core system decreased processing time by 91 percent on average from 1 goroutine to 64 goroutines. In comparison, the 4 core system only took 65 percent less time. As the 16 core system is a lot more powerful than the 4 core system, it can perform an even greater parallel task and therefore displays a positive effect of parallelism upon time requirements. Based upon figure 5 and table 2, parallelism within neural networks can be seen as a useful feature.

Figure 5: Time in seconds needed to train a PNN, with a limit of one goroutine per composed CN.
#### 5.4.2 Accuracy and Confidence of Networks.

In this section the performance in terms of accuracy and confidence is compared between the SNN and the PNN.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{9}{|c|}{Time required compared to 1 goroutine} \\ \hline System/Goroutines & 1 & 2 & 4 & 6 & 8 & 12 & 16 & 32 & 64 \\ \hline
4 & 100\% & 58\% & 38\% & 38\% & 37\% & 37\% & 36\% & 35\% \\ \hline
6 & 100\% & 61\% & 31\% & 24\% & 24\% & 23\% & 23\% & 23\% & 22\% \\ \hline
16 & 100\% & 51\% & 26\% & 18\% & 14\% & 11\% & 9\% & 10\% & 9\% \\ \hline \end{tabular}
\end{table}
Table 2: Average time required to train a PNN in comparison to one goroutine, which represents 100 percent
Figure 6: Compare Accuracy and Confidence of a PNN composed of 10 CNs and an SNN with one Hidden Layer which holds 256 Neurons
For the test, illustrated by figure 6, both types of networks have been provided with the same random network to start their training. They have the exact same build, except that one is trained as an SNN and the other is cloned 10 times to build a PNN with 10 CNs.
In figure 6 the SNN performs better than the PNN in both accuracy and confidence. While the SNN's accuracy and confidence overlap after 8 epochs, the PNN shows a gap between both lines at all times. This indicates that the SNN is "sure" about its outputs, while the PNN is more volatile. The SNN's curve of confidence is a lot steeper than the PNN's and quickly approximates the curve of accuracy. Both curves of accuracy start off almost identically, but the PNN levels out horizontally at about 90 percent while the SNN still rises to about 94 percent. After those points both accuracy curves run almost horizontally and in parallel to the x-axis. The gap stays constant until the end of the test. Even small changes within the range of 90 to 100 percent are to be interpreted as significant. This makes the SNN perform considerably more efficiently in terms of accuracy and costs than the PNN.
#### 5.4.3 Cost of Networks.
To see how successful the training of different PNNs is, the costs of 3 parallel networks with a varying number of CNs have been recorded over 300 epochs. The results are illustrated in figure 7.
Figure 7: Average Costs of PNNs over 300 epochs. The vertical lines show the lowest cost for each PNN.
It shows that the costs of all three PNNs sink rapidly within the first 50 epochs. Afterwards, the error decreases more slowly, drawing a soft curve that flattens out towards a line, almost stagnating. Apparently, all PNNs' training moves fast towards a minimum at the beginning, then slows down and finally gets stuck, only moving slightly up and down around the minimum's borders. Similar to earlier tests, a PNN built with fewer CNs performs better. More CNs leave the graph further up the y-axis, as the 2-PNN outperforms both the 10- and 20-PNN. It also reaches its best configuration, i.e. the point where costs are lowest, significantly earlier than the other tested PNNs. Whereas the 10- and 20-PNNs reach their best performance regarding the costs in a relatively close range of epochs, they reach it late compared to the 2-PNN. Figure 7 clearly shows a decrease in quality for PNNs formed with more CNs. This indicates that the combination function needs optimization to achieve a better result. In the long term, costs behave the same as accuracy: after 300 epochs the difference has almost leveled out.
## 6 Findings and Conclusion
This paper presents and analyses PNNs composed of several sequential neural networks. The PNNs are tested upon time, accuracy and costs and compared to an SNN.
The parallelization approach shows excellent speedup on three different multicore systems.

With all three tested systems, the time necessary for training a PNN decreased constantly when increasing the number of CNs, i.e. the number of goroutines. While the difference in time was significant within the first few added goroutines, it leveled out after reaching the system's number of cores. A PNN with 2 CNs takes 40% to 50% less time than an SNN, and a PNN with 4 CNs takes 60% to 70% less time.
While time is a strong point of the PNN, accuracy also depends on the number of CNs a PNN is formed from. While a few CNs resulted in longer training times, they generated better accuracy in fewer epochs. More CNs made the training faster but the learning process slower. After 20 epochs a PNN composed of 2 CNs reached an accuracy of almost 98%, while a PNN composed of 20 CNs only slightly overcame the 96% line. When both PNNs were trained for a longer period this difference shrank dramatically: trained for 300 epochs, the accuracy only differed by 0.2% in favor of the PNN made out of 2 CNs. While this proved the ability to learn with a small data slice, it also demonstrated that bigger data slices deliver a better result faster. The PNNs can improve by 0.41% (2 CNs) and 2.28% (20 CNs) when training for a longer period. These results were achieved by using averaging as combination function. The chances of achieving an even better accuracy by improving the combination function are high. The cost of a PNN also depends on the number of CNs. It shows the same behavior as accuracy and can also be improved by an optimized combination function. However, a
thorough analysis on the effects of improved combination functions is planned for future work and is beyond the scope of this paper.
Summing up, PNNs proved to be very time efficient but are still lacking in terms of accuracy. There are plenty of other possible optimizations, e.g. adjusting learning rates [4]. A PNN proved to be more time efficient than an SNN; however, until the issue of accuracy has been taken care of, the SNN surpasses the PNN in practice.
We close the paper with a final word on the feasibility of Golang for parallel neural network simulation: Data parallelism proved to be an efficient parallelization strategy. In combination with the programming language Go, a parallel neural network implementation is coded as fast as a sequential one, as no special efforts are necessary for concurrent programming thanks to Go's concurrency primitives, which offer a simple solution for multithreading.
|
2305.07119 | Graph Neural Network for Accurate and Low-complexity SAR ATR | Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) is the key
technique for remote sensing image recognition. The state-of-the-art works
exploit the deep convolutional neural networks (CNNs) for SAR ATR, leading to
high computation costs. These deep CNN models are unsuitable to be deployed on
resource-limited platforms. In this work, we propose a graph neural network
(GNN) model to achieve accurate and low-latency SAR ATR. We transform the input
SAR image into the graph representation. The proposed GNN model consists of a
stack of GNN layers that operates on the input graph to perform target
classification. Unlike the state-of-the-art CNNs, which need heavy convolution
operations, the proposed GNN model has low computation complexity and achieves
comparable high accuracy. The GNN-based approach enables our proposed
\emph{input pruning} strategy. By filtering out the irrelevant vertices in the
input graph, we can reduce the computation complexity. Moreover, we propose the
\emph{model pruning} strategy to sparsify the model weight matrices which
further reduces the computation complexity. We evaluate the proposed GNN model
on the MSTAR dataset and ship discrimination dataset. The evaluation results
show that the proposed GNN model achieves 99.38\% and 99.7\% classification
accuracy on the above two datasets, respectively. The proposed pruning
strategies can prune 98.6\% input vertices and 97\% weight entries with
negligible accuracy loss. Compared with the state-of-the-art CNNs, the proposed
GNN model has only 1/3000 computation cost and 1/80 model size. | Bingyi Zhang, Sasindu Wijeratne, Rajgopal Kannan, Viktor Prasanna, Carl Busart | 2023-05-11T20:17:41Z | http://arxiv.org/abs/2305.07119v1 | # Graph Neural Network for Accurate and Low-complexity SAR ATR
###### Abstract
Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) is the key technique for remote sensing image recognition. The state-of-the-art works exploit the deep convolutional neural networks (CNNs) for SAR ATR, leading to high computation costs. These deep CNN models are unsuitable to be deployed on resource-limited platforms. In this work, we propose a graph neural network (GNN) model to achieve accurate and low-latency SAR ATR. We transform the input SAR image into the graph representation. The proposed GNN model consists of a stack of GNN layers that operates on the input graph to perform target classification. Unlike the state-of-the-art CNNs, which need heavy convolution operations, the proposed GNN model has low computation complexity and achieves comparable high accuracy. The GNN-based approach enables our proposed _input pruning_ strategy. By filtering out the irrelevant vertices in the input graph, we can reduce the computation complexity. Moreover, we propose the _model pruning_ strategy to sparsify the model weight matrices which further reduces the computation complexity. We evaluate the proposed GNN model on the MSTAR dataset and ship discrimination dataset. The evaluation results show that the proposed GNN model achieves 99.38% and 99.7% classification accuracy on the above two datasets, respectively. The proposed pruning strategies can prune 98.6% input vertices and 97% weight entries with negligible accuracy loss. Compared with the state-of-the-art CNNs, the proposed GNN model has only 1/3000 computation cost and 1/80 model size.
Synthetic aperture radar, automatic target recognition, graph neural network, low computation complexity, model pruning
## I Introduction
Synthetic aperture radar (SAR) is capable of high-resolution remote sensing, independent of weather conditions, for observing targets on the ground. SAR automatic target recognition (ATR) is the crucial technique to classify targets in SAR images and has been used in many real-world applications, such as agriculture [1][2], civilization [3][4], etc. SAR devices are typically mounted on moving platforms, such as aircraft, spacecraft, and small/micro satellites [5, 6, 7, 8, 9]. These moving platforms usually have limited computation resources and power budgets (e.g., 80-180W [10]). The state-of-the-art works [11, 12, 13, 14, 15] develop complex convolutional neural networks (CNNs) for SAR ATR to achieve high classification accuracy. However, complex CNNs suffer from high computation costs and large memory footprints, making them unsuitable for deployment on resource-limited platforms. For example, to achieve real-time image classification using CNNs, GPUs are widely used. The power consumption of a state-of-the-art GPU device (e.g., the NVIDIA RTX3090 has a power consumption of 450W) can exceed the power budget of small/micro satellites.
We identify that CNNs have high computation costs because (1) they require heavy convolution operations and (2) they do not exploit the data sparsity in SAR images, since CNNs need to use the whole image as input. As shown in Figure 1, an object in a SAR image usually covers a small number of pixels, and most pixels are irrelevant for classification. Recently, Graph Neural Networks (GNNs) have been proposed to operate on graph data structures and have been successfully applied to many graph classification tasks [16, 17, 18], such as point cloud classification. [19] has proven that GNNs can classify a graph based on its structural information and vertex features. Motivated by that, we propose to use GNNs for SAR ATR. First, we extract the image pixels of the target object. We use these pixels to build a graph by constructing edge connections among the pixels. We exploit a GNN operating on the input graph for target classification. The proposed GNN-based approach achieves significantly less computation cost and comparable accuracy compared with state-of-the-art CNNs. Moreover, we propose attention mechanisms, including vertex attention and feature attention, to improve the model's accuracy. Our main contributions are:
* We propose a novel GNN model for SAR ATR with attention mechanisms, including vertex attention and feature attention, to achieve high accuracy with low computation complexity.
* We propose the input pruning strategy and the weight pruning strategy to further reduce the computation complexity with negligible accuracy loss.
* We perform detailed ablation studies to evaluate (1) various connectivity for constructing the input graph, (2) various types of GNN layers, (3) the effect of the attention mechanism, and (4) the impact of the proposed pruning strategies.

Figure 1: The objects in the SAR images
* We evaluate the proposed approach on MSTAR and ship discrimination datasets. The evaluation results show that the proposed GNN model achieves 99.38% and 99.7% classification accuracy on the above two datasets, respectively. Compared with the state-of-the-art CNNs, the proposed GNN model has only 1/3000 computation cost and 1/80 model size.
The rest of the paper is organized as follows: Section II presents the proposed GNN model for SAR ATR; Section III describes the proposed pruning strategies for reducing computation complexity; Section IV demonstrates the evaluation results.
## II Proposed Model
Figure 2 depicts the overview of the proposed approach. In Section II-A, we introduce the basics of the graph neural network. In Section II-B, we cover the proposed graph representation for the SAR images. In Section II-C, we introduce the proposed GNN model architecture.
### _Graph Neural Network_
We define GNN notations in Table I. Graph Neural Networks (GNNs) [20, 21, 22] are proposed for representation learning on graph \(\mathcal{G}(\mathcal{V},\mathcal{E},\mathbf{X}^{0})\). GNNs can learn from the structural information and vertex features and embed this information into low-dimension vector representation/graph embedding (For example, \(\mathbf{h}_{i}^{L}\) is the embedding of vertex \(v_{i}\)). The vector representation can be used for many downstream tasks, such as node classification [21][20], link prediction [23], graph classification [24], etc. As shown in Figure 3, GNNs follow the message-passing paradigm that vertices recursively aggregate information from the neighbors.
### _Graph Representation_
We transform the input SAR image into a graph representation \(\mathcal{G}(\mathcal{V},\mathcal{E},\mathbf{X}^{0})\), where each pixel in the SAR image is mapped to a vertex \(v\in\mathcal{V}\) in the graph. The SAR signal value of the pixel becomes the feature of the vertex. Each pixel is connected to its neighbors as the edge connections \(\mathcal{E}\). As shown in Figure 4, we propose the following two ways of connecting a pixel to its neighbors and evaluate them in the experiments:
* **4-connectivity**: Each pixel is connected to the four neighbors: up (\(p2\)), down (\(p8\)), left (\(p4\)), and right (\(p6\)).
* **8-connectivity**: Each pixel is connected to the eight neighbors: \(p1\), \(p2\), \(p3\), \(p4\), \(p6\), \(p7\), \(p8\), \(p9\).
### _Model Architecture_
The proposed model architecture is shown in Figure 5, which consists of a stack of layers, including Graph Neural Network layers, graph pooling layers, and attention layers. The final Multi-layer Perceptron (MLP) generates the classification result. For simplicity, \(v_{i,j}\) denotes the vertex/pixel located at the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column in the original SAR image. The input to layer \(l\) (\(1\leqslant l\leqslant L\)) is the vertex feature vectors \(\{\mathbf{h}_{i,j}^{l-1}:v_{i,j}\in\mathcal{V}_{l-1}\}\) and edges \(\{e:e\in\mathcal{E}_{l-1}\}\) that define the connectivity of the vertices in \(\mathcal{V}_{l-1}\). The output of layer \(l\) is the vertex feature vectors \(\{\mathbf{h}^{l}_{i,j}:v_{i,j}\in\mathcal{V}_{l}\}\).

Figure 2: Overview of the proposed approach

Figure 3: GNN Computation Abstraction

Figure 4: Two types of connectivity for constructing input graph
**Graph neural network (GNN) layer**: A GNN layer follows the _Aggregate-Update_ paradigm as shown in Algorithm 3. Using the Aggregate() function, each vertex aggregates the feature vectors from the neighbors (line 3 of Algorithm 3). Then, each feature vector is updated by the Update() function to generate the updated feature vector (line 4 of Algorithm 3). There are some representative Graph Neural Network layers, such as GCN [20], GraphSAGE [21], GIN [19], and SGC [25].
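To make the Aggregate-Update paradigm concrete, a short sketch is given below. The paper's model is implemented with PyTorch Geometric; the Go code is purely illustrative of the logic (mean aggregation in the spirit of GraphSAGE), and all names (\(aggregateUpdate\), \(adj\), \(update\)) are chosen here.

```go
package gnn

// aggregateUpdate performs one Aggregate-Update step with mean aggregation.
// h[v] is the feature vector of vertex v, adj[v] lists its neighbors, and
// update is a learned transformation, e.g. a linear layer plus nonlinearity.
func aggregateUpdate(h [][]float64, adj [][]int,
	update func(self, agg []float64) []float64) [][]float64 {
	out := make([][]float64, len(h))
	for v := range h {
		agg := make([]float64, len(h[v]))
		for _, u := range adj[v] { // Aggregate: gather neighbor features
			for i := range agg {
				agg[i] += h[u][i]
			}
		}
		if n := len(adj[v]); n > 0 {
			for i := range agg {
				agg[i] /= float64(n) // mean over neighbors
			}
		}
		out[v] = update(h[v], agg) // Update: transform self + aggregated features
	}
	return out
}
```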
**Graph pooling layer**: It downscales the input graph \(\mathcal{V}_{l-1}\) into a smaller output graph \(\mathcal{V}_{l}\). The pooling operaton is similar to the pooling in the 2-D images:
\[\mathbf{h}^{l}_{i,j}=\max(\mathbf{h}^{l-1}_{2i,2j},\mathbf{h}^{l-1}_{2i+1,2j},\mathbf{h}^{l-1}_{2i,2j+1},\mathbf{h}^{l-1}_{2i+1,2j+1}) \tag{1}\]
where \(v^{l}_{i,j}\in\mathcal{V}_{l}\), and \(v^{l-1}_{2i,2j},v^{l-1}_{2i+1,2j},v^{l-1}_{2i,2j+1},v^{l-1}_{2i+1,2j+1}\in \mathcal{V}_{l-1}\).
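A minimal sketch of the pooling of Eq. (1), again in illustrative Go rather than the actual PyTorch Geometric implementation; vertex features are assumed to be stored on a dense \([row][col][feature]\) grid.

```go
package gnn

// pool2x2 downscales a grid of vertex features by taking the element-wise
// maximum over each 2x2 block of vertices, cf. Eq. (1).
func pool2x2(h [][][]float64) [][][]float64 {
	rows, cols := len(h)/2, len(h[0])/2
	out := make([][][]float64, rows)
	for i := 0; i < rows; i++ {
		out[i] = make([][]float64, cols)
		for j := 0; j < cols; j++ {
			c := len(h[2*i][2*j])
			v := make([]float64, c)
			for f := 0; f < c; f++ {
				v[f] = max4(h[2*i][2*j][f], h[2*i+1][2*j][f],
					h[2*i][2*j+1][f], h[2*i+1][2*j+1][f])
			}
			out[i][j] = v
		}
	}
	return out
}

func max4(a, b, c, d float64) float64 {
	m := a
	for _, v := range []float64{b, c, d} {
		if v > m {
			m = v
		}
	}
	return m
}
```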
**Attention layer**: We exploit the attention mechanism to improve the accuracy. The attention layer consists of _feature attention_ that calculates the attention scores for each vertex feature, and _vertex attention_ that calculates the attention scores for each vertex. The feature attention is calculated by:
\[\mathbf{F}_{\text{fa}}=\text{sigmoid}(\text{mean}(\{\mathbf{h}_{i,j}:v_{i,j}\in\mathcal{V}\})\mathbf{W}^{\text{mean}}_{\text{fa}}+\text{sum}(\{\mathbf{h}_{i,j}:v_{i,j}\in\mathcal{V}\})\mathbf{W}^{\text{sum}}_{\text{fa}}) \tag{2}\]

where \(\mathbf{h}_{i,j},\mathbf{F}_{\text{fa}}\in\mathbb{R}^{c}\), \(\mathbf{W}^{\text{mean}}_{\text{fa}},\mathbf{W}^{\text{sum}}_{\text{fa}}\in\mathbb{R}^{c\times c}\), and \(c\) denotes the length of the feature vector. \(\mathbf{F}_{\text{fa}}[i]\) is the attention score for the \(i^{\text{th}}\) feature. The vertex attention score is calculated using a GNN layer:

\[\{\alpha_{i,j}:v_{i,j}\in\mathcal{V}_{l}\}=\text{sigmoid}(\text{GNNL}(\{\mathbf{h}_{i,j}:v_{i,j}\in\mathcal{V}_{l-1}\})), \tag{3}\]

where \(\alpha_{i,j}\) is the attention score for vertex \(v_{i,j}\). Then, the output of the attention layer is calculated by:

\[\{\mathbf{h}^{\text{out}}_{i,j}:\mathbf{h}^{\text{out}}_{i,j}=(1+\alpha_{i,j})\mathbf{h}^{\text{in}}_{i,j}+\mathbf{h}^{\text{in}}_{i,j}\otimes\mathbf{F}_{\text{fa}}\} \tag{4}\]
where \(\otimes\) is element-wise multiplication.
**Multi-layer Perceptron (MLP)**: After a sequence of layers, all the feature vectors are flattened into a single vector, which is sent to the MLP for classification. MLP has a stack of fully connected (FC) layers.
## III Pruning
This section covers the proposed pruning techniques, including, input pruning (Section III-A), and weight pruning (Section III-B).
### _Input Pruning_
The key benefit of using GNN is that GNN is flexible in accepting any graph structure as the input. Thereby, we are able to exploit input pruning to reduce the computation complexity. Theoretically, in a SAR image (See Figure 1), the pixels not in the target do not affect the classification results. As studied in [26], by properly setting up a constant threshold \(I_{v}\), we can filter out most irrelevant pixels since the pixels that do not belong to the target usually have negligible SAR signal magnitude. After constructing the input graph from the SAR image, we prune the vertices that have a magnitude smaller than \(I_{v}\). The magnitude of a vertex is calculated by \(\sqrt{x_{1}^{2}+x_{2}^{2}+...+x_{np}^{2}}\) where \(np\) denotes the number of polarization of the SAR signal. For example, a quad-polarization system has four kinds of polarization - horizontal-horizontal (HH), vertical-vertical (VV), horizontal-vertical (HV), and vertical-horizontal (VH). After pruning the vertices, all the edges connected to the pruned vertices are also pruned. Due to the input pruning, the graph pooling operation (Equation 1) is slightly modified:
\[\mathbf{h}^{l}_{i,j}=\max(\mathbb{1}^{l-1}_{2i,2j}\cdot\mathbf{h}^{l-1}_{2i,2j},\ \mathbb{1}^{l-1}_{2i+1,2j}\cdot\mathbf{h}^{l-1}_{2i+1,2j},\ \mathbb{1}^{l-1}_{2i,2j+1}\cdot\mathbf{h}^{l-1}_{2i,2j+1},\ \mathbb{1}^{l-1}_{2i+1,2j+1}\cdot\mathbf{h}^{l-1}_{2i+1,2j+1}) \tag{5}\]
where \(\mathbb{1}_{i,j}\in\{0,1\}\) is the indicator that indicates the existence of vertex \(v_{i,j}\). After input pruning, we can skip the computation for the pruned vertices, which greatly reduces the total computation complexity.
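The pruning step itself is a simple magnitude filter over vertices, followed by dropping incident edges. A hedged, illustrative Go sketch (the actual model runs in PyTorch Geometric; all names are chosen here):

```go
package gnn

import "math"

// pruneVertices marks vertices whose SAR signal magnitude falls below the
// threshold iv and drops every edge incident to a pruned vertex. x[v] holds
// the (possibly multi-polarization) signal values of vertex v.
func pruneVertices(x [][]float64, adj [][]int, iv float64) []bool {
	keep := make([]bool, len(x))
	for v, sig := range x {
		var sum float64
		for _, s := range sig { // magnitude sqrt(x1^2 + ... + x_np^2)
			sum += s * s
		}
		keep[v] = math.Sqrt(sum) >= iv
	}
	for v := range adj {
		filtered := adj[v][:0]
		for _, u := range adj[v] {
			if keep[v] && keep[u] {
				filtered = append(filtered, u)
			}
		}
		adj[v] = filtered
	}
	return keep
}
```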
### _Weight Pruning_
As analyzed in [27, 28], the weight matrices in GNNs have redundancy, and some weight entries can be pruned without affecting the classification accuracy. Therefore, to reduce the total computation complexity, we perform weight pruning by training the model using lasso regression [29]. We add the L1 penalty to the loss function:
\[\text{loss}=l(y,y^{\prime})+\lambda\sum_{w}^{W}|w| \tag{6}\]
where \(l(y,y^{\prime})\) is the classification loss, and \(\lambda\sum_{w}^{W}|w|\) is the L1 penalty term parameterized by \(\lambda\). The L1 penalty leads to weight shrinkage during training. Thereby, some model weights become zeros and can be eliminated from the model. After training, we set a threshold \(I_{w}\), and the model weights with absolute values smaller than \(I_{w}\) are pruned.

Figure 5: Diagram of model architecture
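The post-training thresholding step can be sketched as follows (illustrative Go with hypothetical names; the lasso training itself happens in the training loop):

```go
package gnn

import "math"

// pruneWeights zeroes every weight whose absolute value is below the
// threshold iw. After lasso training most entries have shrunk toward zero,
// so the corresponding computation can simply be skipped at inference time.
func pruneWeights(w [][]float64, iw float64) (pruned int) {
	for i := range w {
		for j := range w[i] {
			if math.Abs(w[i][j]) < iw {
				w[i][j] = 0
				pruned++
			}
		}
	}
	return pruned
}
```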
## IV Evaluation
We evaluate our approach on two widely used datasets:
* **MSTAR**: The setting of the MSTAR dataset follows the state-of-the-art work [11][14][15][12]. MSTAR contains the SAR images of ten classes of ground vehicles, with 2747 images in the training set and 2427 images in the testing set.
* **Ship discrimination [30]:** For the ship discrimination dataset, we follow the setting in [31], which is a binary classification task that identifies if a given SAR image has a ship or not. The dataset contains 1596 positive image samples and 1596 negative image samples.
### _Evaluation on MSTAR Dataset_
#### IV-A1 Experimental Setting
For the MSTAR dataset, we use the following setting. The proposed model consists of 12 layers. We develop the proposed model using PyTorch Geometric. We use the cross-entropy loss as the classification loss (Equation 6). We train the model using the Adam optimization algorithm. The training batch size is set as \(20\), and the initial learning rate is \(0.02\). \(\lambda\) (for lasso regression) is set as \(0.002\). The L2 weight decay is set as \(0.08\). We train the model for \(150\) epochs, and the learning rate is multiplied by \(0.5\) every \(10\) epochs. We use \(8\)-connectivity to build the input graph. We evaluate three widely used GNN layers in the proposed model - the GCN layer [20], the GraphSAGE layer [21], and GAT [22]. We train the proposed model using one NVIDIA RTX A6000 GPU.
**Performance metrics**: We evaluate the proposed approach using the following metrics: _classification accuracy_, _computation complexity_, and _number of parameters_.
#### IV-A2 Classification Accuracy
The accuracy of the proposed model (under various GNN layer types and connectivity) is shown in Table II. We observe that using the GraphSAGE layer as the GNN layer leads to the highest training/testing accuracy. Using the GraphSAGE layer also leads to the lowest training time. For the GraphSAGE layer, using 8-connectivity to build the input graph can result in higher accuracy but slightly higher training time than 4-connectivity. Table III shows that the proposed GNN model achieves higher accuracy compared with the state-of-the-art CNNs [11, 12, 14, 15] with negligible computation complexity for inference.
#### IV-A3 Ablation Study
We perform an ablation study to evaluate the impact of the attention mechanism (using GraphSAGE layer and 8-connectivity). The result is shown in Table IV. Without vertex and feature attention, the model achieves only \(93.77\%\) accuracy. With only vertex attention, the model achieves \(99.26\%\) accuracy. With only feature attention, the model achieves \(98.51\%\) accuracy. With both vertex and feature attention, the model achieves \(99.38\%\) accuracy. The evaluation result demonstrates that the attention mechanism can improve classification accuracy without significantly increasing computation complexity.
#### IV-A4 Evaluation on the Pruning Strategy
We evaluate the proposed input pruning and weight pruning strategies. We use GraphSAGE layer and 8-connectivity as the setting of the model.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**GNN Layer** & **Connectivity** & \begin{tabular}{c} **Training** \\ **Accuracy** \\ \end{tabular} & \begin{tabular}{c} **Testing** \\ **Accuracy** \\ \end{tabular} &
\begin{tabular}{c} **Training** \\ **Time** \\ \end{tabular} \\ \hline \hline GCN & \(4\) & \(99.16\%\) & \(90.06\%\) & \(3.0\) hours \\ & \(8\) & \(95.44\%\) & \(83.82\%\) & \(4.0\) hours \\ \hline GAT & \(4\) & \(99.53\%\) & \(92.21\%\) & \(1.8\) hours \\ & \(8\) & \(82.71\%\) & \(71.33\%\) & \(1.9\) hours \\ \hline GraphSAGE & \(4\) & \(100.00\%\) & \(97.81\%\) & \(52\) min \\ & \(8\) & \(100.00\%\) & \(99.38\%\) & \(55\) min \\ \hline \hline \end{tabular}
\end{table} TABLE II: THE ACCURACY ON MSTAR DATASET
\begin{table}
\begin{tabular}{c c c c c} \hline & Type & Accuracy & \# of FLOPs & \# of Para. \\ \hline \hline
[11] & CNN & 92.3\% & \(\frac{1}{12}\times\) & \(0.5\times 10^{6}\) \\ \hline
[14] & CNN & 97.97\% & \(\frac{1}{10}\times\) & \(0.65\times 10^{6}\) \\ \hline
[15] & CNN & 98.52\% & \(\frac{1}{3}\times\) & \(2.1\times 10^{6}\) \\ \hline
[12] & CNN & 99.3\% & \(1\times\) (6.94 GFLOPs) & \(2.5\times 10^{6}\) \\ \hline This work [after pruning] & \multirow{2}{*}{GNN} & \multirow{2}{*}{99.1\%} & \(\frac{1}{3000}\times\) & \multirow{2}{*}{\(0.03\times 10^{6}\)} \\ (GraphSAGE layer, 8-connectivity) & & & & \\ \hline \hline \end{tabular}
\end{table} TABLE III: COMPARISON WITH THE STATE-OF-THE-ART CNNS ON MSTAR DATASET
\begin{table}
\begin{tabular}{c c c c c} \hline
**Vertex** & **Feature** & **Training** & **Testing** & **Training** \\
**Attention** & **Attention** & **Accuracy** & **Accuracy** & **Time** \\ \hline \hline ✗ & ✗ & \(99.67\%\) & \(93.77\%\) & 31 min \\ ✗ & ✓ & \(100.0\%\) & \(98.51\%\) & 40 min \\ ✓ & ✗ & \(100.0\%\) & \(99.26\%\) & 41 min \\ ✓ & ✓ & \(100.0\%\) & \(99.38\%\) & 55 min \\ \hline \hline \end{tabular}
\end{table} TABLE IV: THE IMPACT OF THE ATTENTION MECHANISM (USING GRAPHSAGE LAYER AND 8-CONNECTIVITY)
Fig. 6: The distribution of the SAR signal magnitude in the training/testing set of MSTAR
**Input Pruning**: Figure 6 shows the distribution of the SAR signal magnitude of the image pixels in the training/testing set. The SAR signal magnitude ranges from \(0\) to \(16\). Since most pixels have a magnitude between \(0\) and \(1\), Figure 6 only shows the range \(0\)-\(1\). For the experiment, we set the pruning threshold \(I_{v}\) (See Section III-A) to be \(0\), \(0.1\), \(0.2\), and \(0.3\), respectively. The image pixels that have a magnitude smaller than \(I_{v}\) are pruned.
**Weight Pruning**: The weights in the weight matrices can be either negative or positive. We set the threshold \(I_{w}\) for weight pruning (See Section III-B). The weights that have an absolute value smaller than \(I_{w}\) are pruned. In the experiment, we set \(I_{w}\) to be between \(1\times 10^{-9}\) and \(1\times 10^{-1}\).
The evaluation results for the pruning strategy are shown in Figure 7. We have the following observations:
* Without weight pruning, when \(I_{v}\) = \(0.1\), \(93.4\%\) of input vertices/pixels are pruned and the accuracy drops to \(99.1\%\); when \(I_{v}\) = \(0.2\), \(98.6\%\) of input vertices/pixels are pruned and the accuracy drops to \(98.5\%\); when \(I_{v}\) = \(0.3\), \(99.1\%\) of input vertices/pixels are pruned and the accuracy drops to \(96.5\%\).
* When the weight pruning threshold \(I_{w}<10^{-7}\), the accuracy does not change w.r.t. \(I_{w}\). When \(I_{w}\) = \(10^{-7}\), more than \(95\%\) of the weights are pruned. Therefore, most entries in the weight matrices are redundant.
Therefore, by setting proper thresholds \(I_{v}\), \(I_{w}\) for input pruning and weight pruning, most input pixels and weights can be pruned without significantly dropping the accuracy. Figure 7 shows the evaluation results for the pruning strategy: \(97\%\) of weight entries are pruned, and the accuracy is \(99.1\%\). By skipping the computation for the pruned vertices and weights, we can dramatically reduce the total computation complexity.
### _Evaluation on Ship Discrimination Dataset_

#### IV-B1 Experimental Setting

For the ship discrimination dataset, we follow the setting of [31] to conduct experiments on few-shot learning. Since ship discrimination is a binary classification task, the few-shot learning task can be formed as a 2-way-\(K\)-shot classification problem, where \(K\) = \(\{1,2,..,10\}\) denotes the number of labeled training images for each class. We train the model using the Adam optimization algorithm. The training batch size is set as \(\frac{K}{2}\), and the learning rate is set as \(0.001*K\). The L2 weight decay is set as \(0.08\).
#### IV-B2 Classification Accuracy
As shown in Figure 8, we compare our accuracy with [31] (baseline) for few-shot learning on the ship discrimination dataset. Note that the baseline [31] uses a convolutional neural network (CNN), and the authors pretrained their CNN using a ship discrimination dataset in the Electro-Optical (EO) domain. We do not pretrain our network on any dataset. For various \(K\), the proposed model outperforms the baseline [31], which is a pretrained deep CNN model.
## V Conclusion and Future Work
In this paper, we proposed a novel GNN-based approach for SAR automatic target recognition. The proposed approach uses the GNN layer as the backbone and uses the attention mechanism to improve classification accuracy. We proposed pruning strategies, including input pruning and weight pruning, to reduce the computation complexity. The evaluation results on the MSTAR and ship discrimination datasets show that the proposed model outperforms the state-of-the-art CNNs in classification accuracy and computation complexity. In [32], we designed a hardware accelerator for the proposed GNN model. In the future, we plan to extend the proposed GNN model to more SAR-related tasks, such as object detection.

\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \(K\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \hline Baseline [31] & 86.3 & 86.3 & 82.8 & 94.2 & 87.8 & 96.0 & 91.1 \\ Our work & 93.8 & 93.1 & 97.9 & 94.0 & 99.7 & 97.4 & 97.8 \\ \hline \hline \end{tabular}
\end{table} TABLE V: COMPARISON OF ACCURACY (%)

Fig. 7: Evaluation of proposed pruning strategy

Fig. 8: The accuracy on the ship discrimination dataset
## Acknowledgment
This work is supported by the National Science Foundation (NSF) under grants CCF-1919289 and OAC-2209563, and the DEVCOM Army Research Lab (ARL) under grant W911NF2220159.
|
2305.14165 | Impact of Light and Shadow on Robustness of Deep Neural Networks | Deep neural networks (DNNs) have made remarkable strides in various computer
vision tasks, including image classification, segmentation, and object
detection. However, recent research has revealed a vulnerability in advanced
DNNs when faced with deliberate manipulations of input data, known as
adversarial attacks. Moreover, the accuracy of DNNs is heavily influenced by
the distribution of the training dataset. Distortions or perturbations in the
color space of input images can introduce out-of-distribution data, resulting
in misclassification. In this work, we propose a brightness-variation dataset,
which incorporates 24 distinct brightness levels for each image within a subset
of ImageNet. This dataset enables us to simulate the effects of light and
shadow on the images, so as is to investigate the impact of light and shadow on
the performance of DNNs. In our study, we conduct experiments using several
state-of-the-art DNN architectures on the aforementioned dataset. Through our
analysis, we discover a noteworthy positive correlation between the brightness
levels and the loss of accuracy in DNNs. Furthermore, we assess the
effectiveness of recently proposed robust training techniques and strategies,
including AugMix, Revisit, and Free Normalizer, using the ResNet50 architecture
on our brightness-variation dataset. Our experimental results demonstrate that
these techniques can enhance the robustness of DNNs against brightness
variation, leading to improved performance when dealing with images exhibiting
varying brightness levels. | Chengyin Hu, Weiwen Shi, Chao Li, Jialiang Sun, Donghua Wang, Junqi Wu, Guijian Tang | 2023-05-23T15:30:56Z | http://arxiv.org/abs/2305.14165v1 | # Impact of Light and Shadow on Robustness of Deep Neural Networks
###### Abstract
Deep neural networks (DNNs) have made remarkable strides in various computer vision tasks, including image classification, segmentation, and object detection. However, recent research has revealed a vulnerability in advanced DNNs when faced with deliberate manipulations of input data, known as adversarial attacks. Moreover, the accuracy of DNNs is heavily influenced by the distribution of the training dataset. Distortions or perturbations in the color space of input images can introduce out-of-distribution data, resulting in misclassification. In this work, we propose a brightness-variation dataset, which incorporates 24 distinct brightness levels for each image within a subset of ImageNet. This dataset enables us to simulate the effects of light and shadow on the images, so as to investigate the impact of light and shadow on the performance of DNNs. In our study, we conduct experiments using several state-of-the-art DNN architectures on the aforementioned dataset. Through our analysis, we discover a noteworthy positive correlation between the brightness levels and the loss of accuracy in DNNs. Furthermore, we assess the effectiveness of recently proposed robust training techniques and strategies, including AugMix, Revisit, and Free Normalizer, using the ResNet50 architecture on our brightness-variation dataset. Our experimental results demonstrate that these techniques can enhance the robustness of DNNs against brightness variation, leading to improved performance when dealing with images exhibiting varying brightness levels.
Keywords: DNNs · Out-of-distribution data · Brightness-variation dataset · Light and shadow · Effectiveness.
## 1 Introduction
Deep neural networks (DNNs) have revolutionized computer vision tasks such as image classification and object detection since the groundbreaking introduction
of AlexNet in 2012 [22]. These networks have exhibited remarkable accuracy and scalability on large-scale datasets, making them integral to a wide range of applications. However, recent investigations have unveiled the susceptibility of DNNs to intentional distortions and perturbations in input data, giving rise to concerns regarding their robustness and security. In response, researchers have embarked on a quest to develop more resilient network architectures, augmenting them with adversarial training [26] and other robust training strategies [16, 2], with the aim of fortifying the networks against potential attacks and improving their overall robustness.
The architecture of deep neural networks plays a crucial role in determining their robustness. The evolution of network architectures began with the introduction of AlexNet, one of the pioneering deep neural networks that leveraged convolutional layers to learn semantic information. AlexNet's outstanding performance, winning the top spot in the 2012 ImageNet [8] image classification challenge and surpassing traditional machine learning algorithms, triggered a surge of interest in deep neural networks. Subsequent researchers focused on advancing neural network architectures. VGG [31] introduced a modification to AlexNet by replacing the convolutional kernels with smaller ones. This not only reduced the computational cost but also improved the accuracy of the network. Szegedy et al. [34, 33] proposed the inception module in GoogleNet, which simulated sparse networks with dense construction. This module contributed to improved network performance. Lightweight architectures like MobileNets [29] struck a balance between model size and accuracy, making them suitable for resource-constrained environments. ResNet [14] introduced residual modules that addressed the challenge of learning identity maps, effectively overcoming the degradation problem in deep neural networks.
In recent years, the vulnerability of deep neural networks (DNNs) has gained significant attention [5, 36]. It has been observed that advanced DNNs are susceptible to adversarial examples, which are input data with subtle and often imperceptible perturbations, such as random noises and universal perturbations [32, 27]. However, most existing research has primarily focused on the impact of tiny, pixel-level perturbations on classification results, while paying limited attention to global, geometric, and structural transformations [1]. In this study, we specifically investigate the impact of brightness variation in input images on the performance of deep learning models. It is widely recognized that the performance of DNNs is closely related to the data distribution of the training dataset. When there is a shift in the distribution of the testing data compared to the training data, the accuracy of DNNs tends to decline, even if the semantic information of the input image remains unchanged. Changes in the color channels of input images can alter their data distribution, leading to incorrect output predictions by the networks. Surprisingly, very few studies have explored the influence of brightness variation in images, even in common image classification tasks.
Color images contain a wealth of visual information and are commonly used in high-level computer vision tasks. The role of color in conveying information
is crucial, but the underlying mechanisms of how deep neural networks perceive and process color information remain unclear. Despite the importance of color, there is a lack of research that comprehensively explains how deep networks perceive and utilize color information in their decision-making process. Recently, Kantipudi et al. [21] conducted a notable study focusing on the impact of color channel perturbations on popular deep network architectures such as VGG, ResNet, and DenseNet. They introduced a color channel perturbation attack, where the color channels of the input images were manipulated, and assessed the resulting effect on the network's accuracy. Their findings revealed that the accuracy of these architectures decreased on average by 41% when subjected to the color channel perturbation attack.
Current robustness benchmarking datasets, such as ImageNet-C, provide out-of-distribution data with noise, blur, weather, cartoon, and sketch distortions. Hendrycks et al. [17] and Lau et al. [23] proposed the challenging datasets ImageNet-A and NAO, respectively, which consist of real-world, unmodified natural adversarial examples on which most well-known deep neural networks fail. As far as we know, there are few datasets designed to study the influence of brightness variation on deep networks. To help understand the impact of brightness variation, we propose an image dataset with different brightness variations applied to the RGB channels of images, generated from a subset of the ImageNet challenge dataset.
The main contributions of this work include the creation of a dataset of brightness-variation images to understand their impact, and an analysis of the performance of well-known deep network architectures on an image classification task on the proposed dataset. We then use ResNet50 as an example to study the influence of robust learning techniques such as AugMix [16], Revisiting [2], and Normalizer-Free networks [4] on a network's capability to resist the impact of brightness variation. Finally, we examine the relation between the robustness of DNNs to brightness variation and their depth. The rest of the paper is organized as follows: Section 2 presents background information related to the existing literature; Section 3 describes how we constructed the dataset and provides details of the experiments, followed by the results and findings; finally, the conclusion and some discussion are given in Section 4.
## 2 Background
In this section we present a review of relevant literature as well as some background knowledge.
The quality of training data and the effect of perturbations on deep neural networks (DNNs) are crucial aspects that have been extensively studied in recent years. Researchers have discovered that DNNs learn both feature representations and semantic information from the data distribution in their training data. However, when the images are perturbed by geometric transformations, deletions, blurring, or other distortions, the data distribution changes, which can lead to misclassification by the neural network. The concept of adversarial attacks was introduced by Szegedy et al. [35], highlighting the phenomenon of
error amplification rather than nonlinearity or overfitting as the reason behind the success of attacks. Dodge and Karam [9] investigated the impact of adversarial samples on DNN performance and explored how different types of distortions and perturbations affect the classification paradigm of DNNs. Their experiments involved the use of the Imagenet dataset and models such as Caffe Reference [20], VGG16 [31], and GoogleNet [34]. To improve the robustness of DNNs against various types of distortions, Dodge and Karam [10, 11] proposed ensemble methods based on a mixture of experts. This approach combines multiple expert models in a weighted manner to enhance the network's resilience. Additionally, they compared the classification performance of humans and DNNs on distorted images. The findings showed that humans generally outperform DNNs in classifying distorted images, even when the DNNs are retrained with distorted data.
Zhou et al. [39] demonstrated in their work that by fine-tuning and re-training DNNs, their performance in classifying distorted images can be enhanced. This highlights the importance of adapting the network to the specific characteristics of the perturbations. Borkar and Karam [3] proposed a criterion to evaluate the impact of perturbations, such as Gaussian blur and additive noise, on the activations of pre-trained convolutional filters. By ranking the most noise vulnerable convolutional filters in commonly used convolutional neural networks (CNNs), they aimed to identify the filters that could benefit the most from correction to achieve the highest improvement in classification accuracy. Hossain et al. [18] conducted an analysis of the performance of VGG16 when influenced by various types of perturbations, including Gaussian white noise, scaling Gaussian noise, salt & pepper noise, speckle, motion blur, and Gaussian blur. To improve the network's robustness against these distortions, they employed discrete cosine transform during the training process.
### Impact of Colour
Dosovitskiy and Brox [12] were the first to show that manipulating the color of an object in a way that deviates from the training data has a negative impact on classification performance. Engilberge et al. [13] managed to identify the colour-sensitive units that processed hue characteristics in the VGG-19 and AlexNet.
Kantipudi et al. [21] proposed a colour channel perturbation attack to fool deep networks and defended against it by data augmentation. Le and Kayal [24] compared various models to show that the robustness of edge detection is an important factor contributing to the robustness of models against color noise. Kanjar et al. [7] analyzed the impact of colour on the robustness of widely used DNNs. They performed experiments on their proposed dataset with hue-space-based distortion. Hendrycks et al. [20] have also used the validation set of the ImageNet database as the base database and augmented different colour-distorted images from these images. Their works inspire us to further study the impact of brightness variation on DNNs.
### Architectures
AlexNet [22] is one of the origins of the most common neural network architectures. It pioneered the use of GPUs to accelerate the training of neural networks, reducing the training time of a neural network to an acceptable range. The success of AlexNet on ImageNet has motivated further work on the development of DNN architectures. VGG [31] built on AlexNet and used several consecutive \(3\times 3\) convolution kernels instead of the larger ones in AlexNet (\(11\times 11\), \(7\times 7\), \(5\times 5\)) to improve performance. Its authors pointed out that for a given receptive field, using consecutive smaller convolution kernels is better than a single larger convolution kernel, as it makes the network deeper and more efficient. Another key innovation in deep network architectures is the inception module, which consists of \(1\times 1\), \(3\times 3\), and \(5\times 5\) convolutions. Inception approximates and covers the best local sparse structure of the convolutional network using easily accessible dense components. With the rapid development of deep network architectures, models have become much deeper, which causes vanishing gradient and degradation problems. The ResNet [14] architecture was proposed to tackle these issues and is still one of the widely used backbones in computer vision tasks today. ResNet introduced a structure called the residual block, which skips connections between adjacent layers, enabling the network to learn identity mappings more easily. It ensures that a deeper network performs at least as well as shallower ones. Based on the ResNet architecture, DenseNet [19] proposes a more radical dense connectivity architecture, which connects all the layers to each other, with each layer taking all the layers before it as its input. DenseNet needs fewer parameters than other traditional convolutional DNNs because it reduces the need to learn redundant features. As networks become deeper, their drawbacks of being computationally intensive and requiring a lot of GPU memory become more significant, which makes them unsuitable for mobile devices. To deploy models on such portable devices, a group of lightweight networks was proposed, and MobileNet [29] is one of the most famous architectures among them. It introduced the concepts of depthwise separable convolutions and inverted residuals, which achieve performance similar to traditional networks at a lower computational cost.
### Robustness
Evaluating the robustness of deep neural networks is still a challenging and ongoing area of research [25]. Some works use data augmentation to improve the robustness of networks [30]. Papernot et al. [28] first pointed out some limitations of deep learning in adversarial settings. They proposed the forward derivative attack to fool DNNs by altering only a small fraction of the input. Hendrycks et al. [15] defined benchmark metrics for the robustness of DNNs to common perturbations such as additive noise, blur, and compression artifacts. They proposed a variant of ImageNet referred to as the ImageNet Challenge (ImageNet-C). ImageNet-C contains 15 types of automatically generated perturbations, on which many well-known DNNs perform poorly. Kanjar et al. [7] analyzed the impact of colour on the robustness
of widely used DNNs. Recent studies have indicated that convolutional DNNs pre-trained on the ImageNet dataset are vulnerable to texture bias [16], while the impact of scaling in images has not been deeply studied. Xiao et al. [37] formalized the scaling attack, illustrating its goal, generation algorithms, and optimization solution. Zheng et al. [38] provide a basis for robustness evaluation and conduct experiments in different situations to explore the relationship between image scaling and the robustness of adversarial examples. With the development of adversarial attack techniques, many studies focus on defending against these attacks and try to find feasible training strategies to improve the robustness of models [6]. AugMix [16] is a simple training strategy that uses several augmentation techniques together with a Jensen-Shannon divergence loss to enforce a common embedding for the classifier. Brock et al. [4] proposed a family of normalizer-free networks called NF-Nets to prevent gradient explosion without using batch normalization. Bello et al. [2] recently showed that some scaling strategies can effectively improve the classification performance of ResNet models and developed a set of models called ResNet-RS.
Figure 1: Examples of generating brightness-variation images.
## 3 Methodology and experiments
### Dataset generation
Typically, DNNs trained on the ImageNet dataset have 1000 different labels. Our proposed dataset is derived from a subset of the ImageNet Challenge dataset, and we call it ImageNet-BrightnessVariation (ImageNet-BV). First, 50 images were randomly selected from each category of the ImageNet Challenge to generate a clean sample dataset, which contains 50,000 raw images in total. Second, the whole dataset of 1,200,000 images is generated by applying 24 different brightness levels to each image. As shown in Figure 1, one original image (\(B=0\)) is augmented into 24 images with different brightness levels. Here, our method of generating noise samples is expressed as follows:
\[X_{B}=Clip(X\otimes(1+B/10)) \tag{1}\]
where \(X\) represents the clean sample, \(X_{B}\) represents the noise sample, \(X\otimes n\) denotes multiplying the values of the R, G, and B channels of \(X\) by \(n\), and \(Clip\) clamps values greater than 255 to 255 and values less than 0 to 0.
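As an illustration, Eq. (1) can be implemented in a few lines of NumPy. The image size below is an arbitrary placeholder, and the brightness range \(B\in\{-5,\ldots,19\}\setminus\{0\}\) is inferred from Table 1.

```python
import numpy as np

def brightness_variation(x, b):
    """Eq. (1): scale the R, G, B channels by (1 + B/10), then clip to [0, 255]."""
    return np.clip(x.astype(np.float64) * (1 + b / 10), 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)   # toy image
variants = [brightness_variation(img, b) for b in range(-5, 20) if b != 0]
assert len(variants) == 24                                        # 24 brightness levels
```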
### Impact on widely used network architectures
In this section, we conduct experiments to evaluate the accuracy of six widely utilized deep network architectures [19, 14, 31, 34, 29, 22] using the generated ImageNet-BV dataset. The classification accuracy of the images without any brightness perturbation is represented by the sixth row (\(B=0\)), serving as a reference for comparison.
The experimental results presented in Table 1 provide several key findings:
**1)** Various levels of brightness variation consistently lead to a decrease in the top-1 classification accuracy of advanced deep neural networks (DNNs). This observation holds true across all evaluated network architectures; **2)** Upon closer examination of the influence of different architectures, ResNet50 demonstrates the highest classification accuracy among the models. DenseNet and VGG19 exhibit similar performance in terms of classification accuracy, surpassing MobileNet and GoogleNet. Notably, AlexNet consistently performs the worst in terms of classification accuracy across all brightness variations; **3)** With the exception of GoogleNet, a clear pattern emerges in the classification accuracy of DNNs with respect to brightness variation: When \(B\) is negative (\(B<0\)), the classification accuracy increases as \(B\) increases; when \(B\) is zero (\(B=0\)), the accuracy reaches its maximum; when \(B\) is positive (\(B>0\)), the accuracy decreases as \(B\) increases.
These observations provide valuable insights into the impact of brightness variation on DNNs, highlighting the varying performance of different architectures and the influence of brightness levels on classification accuracy.
It indicates that although the visual information of the images is barely changed, brightness variation still has a significant impact on the performance of deep neural networks. Although the tested DNNs all contain convolutional layers, which endow them with a degree of geometric invariance, they remain vulnerable to brightness variation.
### Recent advances in efficient and robust models
**Augmix and ResNet-RS-50.** Hendrycks et al. [16] introduced AugMix as a data processing technique aimed at enhancing the robust performance of deep neural networks. AugMix employs various augmentation methods, such as rotation, translation, and channel mixing, to augment the input data. In our experiments, we evaluate the classification performance of ResNet50 with AugMix data processing technology using our dataset. We compare the classification performance of a pretrained ResNet50 model with that of a pretrained ResNet50 model with AugMix technology, and the results are illustrated in Figure 2. The findings reveal that when input images with brightness variation are utilized, the ResNet50 model with AugMix technology achieves better classification performance compared to the ordinary pretrained ResNet50 model. Additionally,
Table 1: Classification Top-1 accuracy (%) of well-known DNNs on ImageNet-BV.

| \(B\) | DenseNet | ResNet50 | VGG19 | GoogleNet | MobileNet | AlexNet |
| --- | --- | --- | --- | --- | --- | --- |
| -5 | 78.99 | 79.99 | 76.14 | 72.24 | 73.85 | 58.34 |
| -4 | 80.99 | 82.79 | 79.06 | 74.21 | 76.71 | 63.67 |
| -3 | 81.81 | 84.17 | 80.51 | 75.11 | 78.58 | 67.32 |
| -2 | 82.08 | 84.98 | 81.39 | 75.71 | 79.60 | 69.56 |
| -1 | 82.06 | 85.49 | 82.00 | **76.01** | 80.35 | 70.86 |
| 0 | **82.15** | **85.75** | **82.52** | 75.85 | **80.70** | **71.61** |
| 1 | 81.56 | 85.28 | 82.03 | 75.60 | 80.13 | 70.97 |
| 2 | 80.89 | 84.39 | 81.01 | 75.02 | 79.13 | 69.97 |
| 3 | 79.90 | 83.28 | 79.60 | 74.34 | 77.83 | 68.27 |
| 4 | 78.61 | 82.16 | 78.09 | 73.53 | 76.40 | 65.99 |
| 5 | 77.32 | 80.57 | 76.19 | 72.78 | 74.53 | 63.49 |
| 6 | 76.03 | 78.95 | 74.25 | 71.64 | 72.77 | 60.56 |
| 7 | 74.38 | 77.24 | 72.23 | 70.35 | 70.79 | 57.69 |
| 8 | 72.81 | 75.50 | 70.03 | 69.02 | 68.76 | 54.64 |
| 9 | 71.10 | 73.51 | 67.76 | 67.69 | 66.61 | 51.51 |
| 10 | 69.31 | 71.64 | 65.62 | 66.45 | 64.44 | 48.60 |
| 11 | 67.51 | 69.74 | 63.30 | 64.84 | 62.38 | 45.88 |
| 12 | 65.55 | 67.65 | 60.99 | 63.33 | 60.27 | 43.21 |
| 13 | 63.64 | 65.68 | 58.57 | 61.94 | 58.07 | 40.79 |
| 14 | 62.02 | 63.46 | 56.29 | 60.41 | 56.00 | 38.33 |
| 15 | 60.11 | 61.59 | 53.90 | 58.85 | 53.95 | 35.84 |
| 16 | 58.41 | 59.60 | 51.49 | 57.26 | 51.94 | 33.77 |
| 17 | 56.63 | 57.64 | 49.33 | 55.83 | 49.99 | 31.95 |
| 18 | 54.89 | 55.83 | 47.34 | 54.44 | 48.06 | 30.17 |
| 19 | 53.21 | 53.92 | 45.39 | 53.00 | 46.31 | 28.52 |
Bello et al. [2] recently demonstrated the effectiveness of scaling network models in improving classification performance. They developed a set of models known as ResNet-RS. Here, we present the classification performance of ResNet-RS-50 on ImageNet-BV, with its Top-1 accuracy depicted in Figure 2. The results clearly indicate that ResNet-RS-50 exhibits superior robustness compared to the pretrained ResNet50 model.
The findings demonstrate the effectiveness of AugMix in enhancing the classification performance of ResNet50 and the improved robustness of ResNet-RS-50, particularly when faced with brightness variation in input images. In general, both ResNet50 with AugMix and ResNet-RS-50 exhibit greater robustness compared to the original pretrained ResNet50 model. Furthermore, Figure 2 provides additional insights: **1)** Similar to ResNet50, the Top-1 classification accuracy of ResNet-RS-50 and ResNet+AugMix reaches its maximum when \(B=0\); **2)** As the level of distortion added to the color channels of images increases, the classification accuracy of ResNet50 with AugMix and ResNet-RS-50 also decreases accordingly. This observation indicates that brightness still possesses a certain level of antagonism towards the improved ResNet50 models; **3)** When \(B\leq 4\), ResNet-RS-50 exhibits less robustness compared to ResNet+AugMix. However, when \(B>4\), ResNet-RS-50 showcases greater robustness compared to ResNet+AugMix.
**Normalizer-Free ResNet50 (NF-ResNet50).** Brock et al. [4] introduced a family of normalizer-free networks called NF-Nets, which differ from traditional networks by not employing batch normalization. During the training of NF-Nets, measures are taken to limit the size of gradients, effectively preventing gradient explosion and promoting training stability. Figure 3 illustrates the Top-1 accuracy of NF-ResNet50 on our dataset, revealing the following observations: **1)** NF-ResNet50 exhibits improved classification performance compared to the pretrained ResNet50 model. This enhancement highlights the effectiveness of the NF-Net architecture; **2)** Similar to ResNet50, NF-ResNet50 achieves its highest Top-1 classification accuracy when \(B=0\). As additional distortions are introduced to the color channels of images, the classification accuracy of NF-ResNet50 decreases correspondingly. This finding suggests that brightness still exerts a certain antagonistic effect on the performance of NF-ResNet50. These results provide insights into the performance characteristics of NF-ResNet50, showcasing its improved classification performance compared to the pretrained ResNet50 model, as well as its sensitivity to brightness variations in input images.

Figure 2: Performance of ResNet50 vs. ResNet-RS-50 vs. ResNet50+Augmix.

Figure 3: Performance of ResNet50 vs. NF-ResNet50.
## 4 Conclusion and future work
The widespread usage of deep neural networks in various applications necessitates a thorough examination of their robustness and the development of techniques to enhance their resistance to perturbations. This work presents experimental studies that shed light on the impact of brightness variation on the performance of deep neural network architectures, particularly in relation to shifts in data distribution. The findings reveal a significant reduction in the performance of these networks as distortions are introduced to the color channels of images.
The study also investigates the effectiveness of data processing and augmentation techniques, such as AugMix and Revisiting, in improving the robustness and optimization of deep network training. Notably, when considering brightness variation, ResNet50+AugMix and ResNet-RS-50 demonstrate greater robustness compared to ResNet-50. Furthermore, the study explores the performance of normalizer-free models, which have been found to exhibit enhanced robustness to brightness variation compared to pretrained ResNet50 models.
These observations underscore the substantial impact of brightness variation on the inference of advanced deep neural networks. The positive influence of data processing and augmentation techniques highlights the potential for further improving the robustness and accuracy of deep neural network models. These findings will serve as a driving force for future research on the brightness perception mechanisms of other architectures. The insights gained from this analysis will motivate researchers to consider the influence of brightness and other perturbations in the development of more accurate and robust deep neural network models.
|
2306.02525 | Experimentally Realizable Continuous-variable Quantum Neural Networks | Continuous-variable (CV) quantum computing has shown great potential for
building neural network models. These neural networks can have different levels
of quantum-classical hybridization depending on the complexity of the problem.
Previous work on CV neural network protocols required the implementation of
non-Gaussian operators in the network. These operators were used to introduce
non-linearity, an essential feature of neural networks. However, these
protocols are hard to execute experimentally. We built a CV hybrid
quantum-classical neural network protocol that can be realized experimentally
with current photonic quantum hardware. Our protocol uses Gaussian gates only
with the addition of ancillary qumodes. We implemented non-linearity through
repeat-until-success measurements on ancillary qumodes. To test our neural
network, we studied canonical machine learning and quantum computer problems in
a supervised learning setting -- state preparation, curve fitting, and
classification problems. We achieved high fidelity in state preparation of
single-photon (99.9%), cat (99.8%), and Gottesman-Kitaev-Preskill (93.9%)
states, a well-fitted curve in the presence of noise at a cost of less than 1%,
and more than 95% accuracy in classification problems. These results bode well
for real-world applications of CV quantum neural networks. | Shikha Bangar, Leanto Sunny, Kubra Yeter-Aydeniz, George Siopsis | 2023-06-05T01:18:41Z | http://arxiv.org/abs/2306.02525v2 | # Experimentally Realizable Continuous-variable Quantum Neural Networks
###### Abstract
Continuous-variable (CV) quantum computing has shown great potential for building neural network models. These neural networks can have different levels of quantum-classical hybridization depending on the complexity of the problem. Previous work on CV neural network protocols required the implementation of non-Gaussian operators in the network. These operators were used to introduce non-linearity, an essential feature of neural networks. However, these protocols are hard to execute experimentally. We built a CV hybrid quantum-classical neural network protocol that can be realized experimentally with current photonic quantum hardware. Our protocol uses Gaussian gates only with the addition of ancillary qumodes. We implemented non-linearity through repeat-until-success measurements on ancillary qumodes. To test our neural network, we studied canonical machine learning and quantum computer problems in a supervised learning setting - state preparation, curve fitting, and classification problems. We achieved high fidelity in state preparation of single-photon (99.9%), cat (99.8%), and Gottesman-Kitaev-Preskill (93.9%) states, a well-fitted curve in the presence of noise at a cost of less than 1%, and more than 95% accuracy in classification problems. These results bode well for real-world applications of CV quantum neural networks.
## I Introduction
Continuous-variable (CV) quantum computing (QC) takes advantage of the wave-like properties of particles. For example, it can be realized by photonic quantum hardware manipulating electromagnetic fields. Thus, CV QC is achievable in quantum optics by utilizing continuous quadratures of the quantized electromagnetic field [1] enabling the essential steps in quantum algorithms (preparation, unitary manipulation, and measurement of (entangled) quantum states). CV quantum algorithms have been developed for various applications, ranging from quantum field theory [2; 3; 4] to machine learning [5]. It has recently been shown that CV is also a good architecture for building quantum neural network (QNN) models on quantum computers [6]. The developed CV QNN architecture has been applied to various practical real-world problems, e.g., function fitting, fraud detection of credit card transactions, image classification of hand-written digits, data encryption and decryption in a secure cryptography algorithm [7], and entangled state detection [8]. In addition to these applications, CV QNN has been utilized as generator and discriminator in a CV quantum adversarial network (CV QGAN) [9] to reproduce the data outputs of the calorimeters for data collected in high energy physics experiments at CERN.
In the CV QNN model discussed in [6], a non-Gaussian Kerr gate provides the non-linearity required for neural networks. However, experimentally realizing these non-Gaussian operations is challenging due to their weakly interacting nature. Therefore, we developed an alternative CV QNN model in which non-linearities are introduced through measurements on ancillary qumodes following the proposal in [10] based on repeat-until-success measurements. In this paper, we propose a variational hybrid quantum-classical circuit implementing a neural network that solves well-known problems such as function fitting, state preparation, binary classification, and image recognition. The quantum circuit uses only Gaussian gates that can be implemented with optical elements, such as beam splitters and squeezers. Thus, our CV hybrid neural network can be realized experimentally using current photonic quantum hardware.
Similar measurements on ancillary qumodes as a means towards creating desired quantum states have been considered before. Single- and two-mode quantum gates acting on photonic qubits were generated with single photon sources, linear optical elements and measurements of ancillary modes [11]. To generate the cubic phase gate, nonlinear quadrature measurements were implemented using ancillary states, homodyne measurements, and nonlinear feedforward operations based on the measurement results [12]. Photon-number-resolving (PNR) measurements on multimode Gaussian states were used for the probabilistic production of non-Gaussian states, including cat (superpositions of coherent states), ON, Gottesman-Kitaev-Preskill (GKP) [13], NOON [14], and bosonic code states [15]. Cat and GKP states were also created experimentally within a CV cluster state by performing PNR measurements [16]. To the best of our knowledge, ours is the first attempt at applying non-linearities induced by measurements to CV QNNs.
The structure of this paper is as follows. We begin with a description of the CV neural network in Section II, followed by a detailed account of our proposal to introduce non-linearities that avoids non-Gaussian gates. In
Section III, we study several problems involving varying degrees of hybridization between quantum and classical neural networks. We conclude with a discussion of the potential of this work in Section IV.
## II The method
A general CV QNN was discussed in Ref. [6]. It included \(N\)-port linear optical interferometers consisting of beam splitter (\(\mathcal{BS}\)), rotation (\(\mathcal{R}\)), displacement (\(\mathcal{D}\)), and squeezing (\(\mathcal{S}\)) gates. It also featured a non-Gaussian Kerr gate (\(\Phi\)) introducing non-linearity to the neural network. In our setup, we have replaced this single-mode gate with a two-mode quantum circuit element consisting of Gaussian gates and a photon detector which is experimentally feasible with current technology (shown in Fig. 1).
In more detail, our nonlinear circuit element is implemented by adding an ancilla qumode in a coherent state \(\ket{\alpha}=\mathcal{D}(\alpha)\ket{0}_{\mathrm{anc}}\) to the primary qumode, where \(\alpha\in\mathbb{R}\). The two modes are then entangled by a controlled displacement (\(CX(s)\)) gate that uses the \(q\)-quadrature of the primary mode to shift the \(q_{\mathrm{anc}}\) quadrature by \(sq\), where \(s\in\mathbb{R}\). It is implemented with beam splitters and a two-mode squeezer of parameters
\[\cot 2\theta=\sinh r=-s. \tag{1}\]
If the incoming state of the primary mode is \(\ket{\psi}=\int dq\,\psi(q)\ket{q}\), then the final two-mode state is the entangled state
\[\int dq\,\psi(q)\ket{\alpha+sq}_{\mathrm{anc}}\otimes\ket{q}. \tag{2}\]
Then we use a photon detector to measure the photon number of the ancilla qumode. If the detector clicks, the process is considered successful, and the primary qumode proceeds to the next layer. For small \(\alpha\) and \(s\) parameters, the ancilla qumode decouples and the outgoing primary qumode is in the (unnormalized) state
\[\int dq\,(\alpha+sq)\psi(q)\ket{q}\, \tag{3}\]
showing that we have effectively applied the non-Gaussian gate \(\alpha+sq\). For larger values of the parameters, the expression for this effective gate is more complicated, but it is still a non-Gaussian gate.
If there is no click in the detector, the output state of the primary mode is approximately the same as the input state, \(\ket{\psi}\) for small \(\alpha\) and \(s\), changing slightly for larger values of the parameters (with insertions of factors of \(e^{-|\alpha+sq|^{2}/2}\)). We feed the state back into the input port and repeat the process shown in Fig. 1. This loop continues until the detector clicks and the primary qumode can advance (repeat-until-success process [10]). We fix the coherence parameter \(\alpha\) of the ancillary qumode, treating it as a hyperparameter and not a trainable parameter. In our simulations, we set \(\alpha=1\).
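A minimal Strawberry Fields sketch of this circuit element is given below. The squeezing on the primary mode merely stands in for an arbitrary input state, the parameter values are illustrative, and post-selection on one ancilla photon plays the role of the detector click (the repeat-until-success loop is not shown).

```python
import strawberryfields as sf
from strawberryfields import ops

alpha, s = 1.0, 0.3                       # ancilla amplitude and CX strength (illustrative)
prog = sf.Program(2)
with prog.context as q:
    ops.Sgate(0.4) | q[0]                 # stand-in for the incoming primary-mode state
    ops.Dgate(alpha) | q[1]               # ancilla prepared in |alpha>
    ops.CXgate(s) | (q[0], q[1])          # controlled displacement: q_anc -> q_anc + s*q
    ops.MeasureFock(select=1) | q[1]      # post-select a click (one photon) in the ancilla

eng = sf.Engine("fock", backend_options={"cutoff_dim": 6})
state = eng.run(prog).state               # primary mode now carries the non-Gaussian state
```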
A detailed neural network architecture for single-mode and two-mode layers is shown in Fig. 2. We create a multiple-layer structure by arranging each layer as a building block of the neural network, with the gate variables \((\theta,\phi,\chi,x,\alpha,\beta)\) being free parameters, collectively denoted by \(\vec{\zeta}\). We want to find \(\vec{\zeta}\) such that the value of the cost function \(C(\vec{\zeta})\) is minimized. This can be done using various optimization techniques available in deep learning (for example, gradient descent, stochastic gradient descent, and, most commonly, the Adam optimizer). The parameters are updated by the rule:
\[\vec{\zeta}\longrightarrow\vec{\zeta}-\eta\nabla_{\vec{\zeta}}C(\vec{\zeta})\, \tag{4}\]
where \(\eta\) is the step size, also known as the learning rate. This procedure continues until the model cannot reduce the cost function further. Then, the parameters associated with the lowest observed cost function offer the best solution to the given task. Note that the free parameters correspond to a circuit; hence, the circuit is the solution.

Figure 1: Two-qumode Gaussian quantum circuit element introducing non-linearity in a CV QNN. (a) The ancillary qumode, initially in the coherent state \(\ket{\alpha}_{\mathrm{anc}}=\mathcal{D}(\alpha)\ket{0}_{\mathrm{anc}}\), is entangled with the primary qumode with a controlled displacement (\(CX(s)\)) gate. The primary qumode goes through a feedback loop until the detector on the ancillary qumode clicks [10]. (b) The \(CX(s)\) gate with parameters given in Eq. (1).

Figure 2: Detailed CV neural network architecture for (a) single-mode and (b) two-mode layer.
In principle, only one layer (\(\mathcal{L}\)) is sufficient to parameterize every possible unitary affine transformation on \(N\) modes. However, deeper architectures provide increased expressive power, better learning capabilities, and a more efficient representation of complex transformations.
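As a rough illustration of how such a parameter vector is trained, the following TensorFlow sketch applies the descent update of Eq. (4) via the Adam optimizer. The cost here is a toy stand-in for the circuit's actual cost, and the parameter count is arbitrary.

```python
import tensorflow as tf

zeta = tf.Variable(tf.random.normal([24]))    # free gate parameters (illustrative size)
opt = tf.keras.optimizers.Adam(learning_rate=0.01)

def cost(z):
    # Placeholder: in practice this runs the layered circuit and evaluates C(zeta).
    return tf.reduce_sum(tf.square(z))         # toy stand-in

for step in range(1000):
    with tf.GradientTape() as tape:
        c = cost(zeta)
    grads = tape.gradient(c, [zeta])
    opt.apply_gradients(zip(grads, [zeta]))    # zeta <- zeta - eta * grad, as in Eq. (4)
```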
To test our neural network, we studied various machine learning and quantum computation problems, namely, state preparation, curve fitting, binary classification, and image recognition with varying degrees of hybridization in quantum and classical neural networks, as discussed in Section III.
We conducted our simulations using Strawberry Fields quantum computing software. The realization of the quantum circuit element implementing non-linearity depicted in Figure 1 in a QNN setting turned out to be challenging. To obtain a good approximation to our setup that could be implemented with software available to us, we adjusted the coherent parameter of the ancillary qumode so that a single photon would be detected in that mode with a high success probability. To estimate the number of ancillary qumode measurements (or feedback loops) necessary for the successful application of the quantum circuit element, we utilized Bosonic Qiskit software [17] for simulating an elementary circuit and tracking the number of measurements required for success in various applications.
Sample results are illustrated in Figure 3. Using the architecture of the binary classification circuit discussed in Section III.3 as a concrete example, we counted the number of repeated ancillary qumode measurements required for a successful pass of the primary qumodes during forward propagation through the network. We plotted the success rates per layer for setups utilizing various numbers of layers. Evidently, most successful measurements occur at the first photon detection of the ancillary qumode.
Due to the constraints in the Strawberry Fields library, we were compelled to perform post-selection of a single Fock state during the ancilla measurement step to collapse the wavefunction, thus effectively implementing measurements by photon-number-resolving detectors. It would also be interesting to simulate photon detectors that cannot resolve photon number and are widely available. This would alter the effective nonlinear operation slightly (depending on the choice of the laser intensity for the ancillary qumodes) but would simplify the experimental setup. To demonstrate that a high success rate can be achieved even without a photon-number-resolving detector, we trained two distinct models that performed multi-label classification on the MNIST handwritten digit data set discussed in Section III.4. The training and testing loss values for these classical-quantum hybrid models are presented in Fig. 4. Model 1 was designed to successfully select the required state in the ancilla measurement on the initial attempt whereas Model 2 was set up to fail the first measurement attempt and succeed on the second measurement. Even though the loss function and classification accuracy experienced a slight decline at the end of training from 97.25% to 96.49%, we were still able to train Model 2 successfully and achieve a reasonably high classification accuracy for the 4-class MNIST handwritten digit dataset classification.
## III Case studies
In this section, we develop CV QNN models for various applications, including quantum state preparation (Section III.1), curve fitting (Section III.2), binary classification of fraud and genuine credit card transactions (Section III.3), and multi-label classification of MNIST handwritten digits (Section III.4). In order to observe the effect of the classical and quantum neural network layers, we analyze both hybrid quantum-classical and fully quantum layers.
We simulated our CV QNN models using the Strawberry Fields software platform [18]. The quantum machine learning toolbox application is built on top of it with TensorFlow features [19]. We used the quantum circuit simulator, optimized the algorithm, and trained the neural network to obtain the desired results.

Figure 3: Number of measurements necessary on the ancillary qumode to achieve successful measurement for each quantum layer of sample hybrid networks for binary classification, consisting of two classical and (a) four or (b) eight quantum layers.
### State Preparation
The CV QNN model for quantum state preparation trains a quantum circuit to generate a target quantum state. To this end, we provide a canonical input state \(\ket{\Psi_{0}}\) and target output state \(\ket{\Psi_{t}}\) and aim to find out the circuit \(U\) (a unitary transformation) such that
\[\ket{\Psi_{t}}=U\ket{\Psi_{0}}. \tag{5}\]
For simplicity, we fixed the input state to be the vacuum, \(\ket{\Psi_{0}}=\ket{0}\). We considered a basic single-mode architecture of a quantum neural network with a fixed number of layers, as shown in Fig. 5. As described earlier, our goal is to find the parameters \(\vec{\zeta}\) such that \(U(\vec{\zeta})\ket{0}=\ket{\Psi_{t}}\), or \(\left|\bra{\Psi_{t}}U(\vec{\zeta})\ket{0}\right|=1\). For this case, we performed optimization by minimizing the cost function:
\[C(\vec{\zeta})=\left|\left|\bra{\Psi_{t}}U(\vec{\zeta})\ket{0}\right|^{2}-1 \right|. \tag{6}\]
To obtain \(C\), we perform homodyne tomography on the final state \(U(\vec{\zeta})\ket{0}\). We measure the quadrature \(X_{\phi}=\frac{1}{2}(e^{i\phi}a^{\dagger}+e^{-i\phi}a)\), and obtain a series of output pairs \((\phi_{k},x_{k})\) (\(k=1,\ldots,N\)). They allow us to obtain an estimate of the Wigner function of the final state, \(W_{U(\zeta)\ket{0}}(x,p)\). By comparing with the Wigner function of the desired state, \(W_{\ket{\Psi_{t}}}(x,p)\), we deduce the cost function \(C\) from the overlap
\[\left|\bra{\Psi_{t}}U(\vec{\zeta})\ket{0}\right|^{2}=2\pi\int dxdp\,W_{U(\zeta )\ket{0}}(x,p)W_{\ket{\Psi_{t}}}(x,p). \tag{7}\]
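Numerically, the overlap in Eq. (7) is a discretized phase-space integral. A minimal sketch, using the vacuum state's Wigner function as a self-check, is given below; the grid ranges and resolution are illustrative.

```python
import numpy as np

def overlap_from_wigner(w_out, w_target, xs, ps):
    """Eq. (7): 2*pi times the double integral of W_out(x,p) * W_target(x,p)."""
    dx, dp = xs[1] - xs[0], ps[1] - ps[0]
    return 2 * np.pi * np.sum(w_out * w_target) * dx * dp

xs = ps = np.linspace(-5, 5, 400)
X, P = np.meshgrid(xs, ps, indexing="ij")
w_vac = np.exp(-(X**2 + P**2)) / np.pi               # vacuum Wigner function (hbar = 1)
print(overlap_from_wigner(w_vac, w_vac, xs, ps))     # ~1.0 for identical pure states
```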
Although we focused on minimizing cost in the training process, we also calculated the fidelity between the target and optimized state. The fidelity measures how closely the optimized state matches the target state. It serves as another performance metric by providing another measure of accuracy with cost.
To test the performance of the quantum neural network, we prepared two different states, the single-photon state \(\ket{\Psi_{t}}=\ket{1}=a^{\dagger}\ket{0}\), and the cat state
\[\ket{\text{cat},\theta}=\frac{1}{\sqrt{2}}\left(\mathcal{D}(\alpha_{0})+e^{i \theta}\mathcal{D}(-\alpha_{0})\right)\ket{0} \tag{8}\]
where \(\theta,\alpha_{0}\in\mathbb{R}\).
Since we were using Strawberry Fields software, we could quickly obtain the Wigner function after performing the homodyne measurement. However, these Wigner functions are incompatible with the commonly used, efficient TensorFlow framework for performing the optimization. Hence, we used the Python library Scipy [20] to optimize the cost, as Scipy provides the Nelder-Mead optimization technique [21]. This technique is gradient-free and designed for high-dimensional minimization, which worked best for our purposes, as our goal was to minimize a cost built from 2D Wigner functions obtained via homodyne detection.
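A skeletal version of this optimization loop might look as follows. The cost function is a placeholder for the circuit evaluation described above, and the parameter count and tolerance are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def cost(zeta):
    # Placeholder: run the circuit with parameters zeta, estimate the output
    # Wigner function from homodyne data, and return | |<target|U(zeta)|0>|^2 - 1 |.
    return float(np.sum(zeta**2))               # toy stand-in

zeta0 = np.random.uniform(-0.1, 0.1, size=30)   # initial gate parameters (assumed size)
res = minimize(cost, zeta0, method="Nelder-Mead",
               options={"maxiter": 5000, "fatol": 1e-6})
print(res.fun, res.nit)
```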
We noticed that in just a few steps, the model started to learn the state. For best results, we ran the model with different numbers of layers, as shown in Fig. 6. In these simulations, we did not include any possible quantum hardware errors. A small number of layers yields a higher cost, but fewer layers require fewer gates and lead to fewer errors due to quantum hardware imperfections. Ignoring such errors, as the number of layers increases, the cost is lowered. Notice that the cost starts to increase again beyond a certain number of layers due to overfitting, e.g., for more than 12 layers in the case of the single-photon state. There is an optimum number of layers, which would be important to determine by including a realistic model of quantum hardware.
Figure 4: Training and testing loss of MNIST classification models with successful measurement of the ancilla within (a) one loop (Model 1) and (b) two loops (Model 2).
Figure 5: CV quantum NN architecture for quantum state preparation. The input to the network is the vacuum state, and at the end, a homodyne measurement is performed.
For the single photon state, 6 quantum layers gave the best results with a fidelity of 99.9%. The cost achieved after optimization was 0.008 after 5000 steps. We used a cutoff dimension of 6, and the maximum number of steps was fixed at 5000. It should be noted that in the Nelder-Mead optimization, the number of steps is determined by the difference between two consecutive cost values. This difference is treated as another hyperparameter during the training process. The result of comparing other numbers of layers is shown in Fig. 7. We plotted the 2D Wigner function for 2 and 6 layers to demonstrate the importance of the optimal number of layers. We also showed the 1D Wigner function for different quantum layers. We integrated the 2D Wigner function over the momentum using the Scipy library to obtain the 1D Wigner function. The plot shows that the results improve up to 6 quantum layers, and beyond that, they worsen due to overfitting.
Turning to state preparation of the cat state (8), which is a superposition of two coherent states, we concentrated on the even cat state with \(\alpha_{0}=1.5\) and \(\theta=0\). As the cat state is more complicated than the single photon state, we had to increase the cutoff dimension to 10. We achieved high fidelity with 8 quantum layers. The comparison of the 2D Wigner functions of 2 and 8 layers is shown in Fig. 8. We obtained excellent fidelity of 99.8% with 8 layers, and after training, the cost was minimized to the value of 0.03 after 9800 steps. For comparison, with 2 layers, we obtained fidelity of 79% with cost at 2.64 after 2000 steps.
We also prepared a realistic GKP state [13]. Ideal GKP states are linear combinations of an infinite number of eigenstates of the \(q\)-quadrature. We concentrated on the state
\[\left|\mathrm{GKP}\right\rangle_{\mathrm{ideal}}=\sum_{n=-\infty}^{\infty} \left|q=2n\sqrt{\pi\hbar}\right\rangle \tag{9}\]
However, such states are not normalizable and impossible to create experimentally because they have infinite energy and each component would require an infinite amount of squeezing. For a realistic case, we applied an energy cutoff and defined the realistic GKP state [22]
\[\left|\mathrm{GKP}\right\rangle_{\mathrm{real}}=e^{-\epsilon a^{\dagger}a} \left|\mathrm{GKP}\right\rangle_{\mathrm{ideal}} \tag{10}\]
We chose \(\epsilon=0.1\). Since this state is more complex than a cat state, we had to increase the cutoff dimension even further, to 15, and employ 15 layers. After 15000 steps, we achieved a fidelity of 93.9% at a cost of 1.1. The comparison between 10 and 15 layers is shown in Fig. 9.

Figure 9: Preparation of a realistic GKP state (Eq. (10) with \(\epsilon=0.1\)). 2D Wigner function comparison for 10 and 15 quantum layers with fidelity 70.6% and 93.9%, respectively. A cutoff dimension of 15 was used in both cases. 10 (15) layers were optimized in 18000 (15000) steps resulting in a cost of 4.87 (1.1).

Figure 10: CV quantum NN architecture for curve fitting.

Figure 11: The effect of changing the number of layers on curve fitting of a noisy sine function. 100 data points were used for training and testing in 1000 steps with error parameter \(\epsilon=0.1\) and cutoff dimension 6.
### Curve Fitting
Next, we build a CV QNN in a supervised learning setting to learn the relationship between input (\(x\)) and output (\(f(x)\)), also known as curve fitting. It is an essential part of data analysis and a classic machine-learning problem.
The architecture for our CV QNN is shown in Fig. 10. We encoded the classical input, \(x\), sampled from a noisy function, \(f(x)\), as the coherence parameter of the input qumode, \(\ket{x}=\mathcal{D}(x)\ket{0}\). The objective was to train the CV QNN to generate output states \(\ket{\psi_{x}}\) that have an expectation value of the quadrature, \(q\), close to \(f(x)\) (i.e., \(\bra{\psi_{x}}q\ket{\psi_{x}}\approx f(x)\)) for a given input \(x\). We studied the noisy sine function. The data were prepared as \(f(x)=f_{0}(x)+\Delta f\) where \(\Delta f\) is a normal distribution with zero mean and standard deviation \(\epsilon\). The parameter \(\epsilon\) determines the amount of error present in the training data. We chose the noisy sine function with \(f_{0}(x)=\sin x\) in the range of \(x\in(-2,2)\). We used 6 quantum layers in this process. The training was done on 1000 steps with a Hilbert-space cutoff dimension of 6. The training and test data were prepared as tuples \((x_{i},f(x_{i}))\), and \(x_{i}\) was chosen uniformly at random in the chosen interval. For training, we chose the cost function to be the mean square error (MSE) value between the circuit outputs and the desired function values,
\[C=\frac{1}{N}\sum_{i=1}^{N}\left[f(x_{i})-\bra{\psi_{x_{i}}}q\ket{\psi_{x_{i}}} \right]^{2}. \tag{11}\]
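For reference, the training data and the cost of Eq. (11) can be generated as in the following sketch. The random seed is arbitrary, and in practice the circuit's predictions \(\bra{\psi_{x_i}}q\ket{\psi_{x_i}}\) would be passed in as `pred`.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, eps = 100, 0.1
x = rng.uniform(-2, 2, size=n_points)
f = np.sin(x) + rng.normal(0.0, eps, size=n_points)   # noisy sine targets

def mse(pred, target):
    """Eq. (11): mean squared error between circuit outputs and targets."""
    return np.mean((target - pred) ** 2)
```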
To learn about the performance of the CV QNN, we studied how the cost changes as we increase the number of layers. This helped us determine the optimum number of layers required for the desired results. We started with a single quantum layer and increased the number of quantum layers up to 10. We kept the number of steps fixed at 1000 with 100 data points and a cutoff dimension of 6. We used the Adam optimizer to minimize the cost function value. Some interesting results of how the testing data behaved with a changing number of layers are shown in Fig. 11. The final result of the study is summarized in the cost vs. number of layers plot in Fig. 12. We found that the cost function value decreases as we increase the number of layers, making the curve fit better. However, this improvement saturates around 6 quantum layers. One also has to keep in mind that more layers correspond to more training parameters. Hence, finding a number that yields good results and keeps the training parameters manageable is important. In the case considered here, optimal results were obtained with 6 layers.
We also studied how the noise present in data affected our results by varying the error parameter \(\epsilon\) discussed above. We kept the number of steps fixed at 1000 with 100 data points, a cutoff dimension of 6, and 6 quantum layers. The results of \(\epsilon=0.2,0.5\) are shown in Fig. 13. The value of cost increases from 0.037 to 0.232 as we increase the error from \(\epsilon=0.2\) to \(\epsilon=0.5\), and the fitting worsens as we increase the noise; this is expected as the model is training on noisy data. Although the fitting is getting worse with increased noise, the CV QNN still performs well in learning the shape of the sine function. The complete study of the dependence of the cost function on data noise (\(\epsilon\)) is shown in Fig. 14.
Figure 11: The effect of changing the number of layers on curve fitting of a noisy sine function. 100 data points were used for training and testing in 1000 steps with error parameter \(\epsilon=0.1\) and cutoff dimension 6.
Figure 10: CV quantum NN architecture for curve fitting.
Figure 9: Preparation of a realistic GKP state (Eq. (10) with \(\epsilon=0.1\)). 2D Wigner function comparison for 10 and 15 quantum layers with fidelity 70.6% and 93.9%, respectively. A cutoff dimension of 15 was used in both cases. 10 (15) layers were optimized in 18000 (15000) steps resulting in a cost of 4.87 (1.1).
### Binary classification
For the third problem, we constructed a CV quantum-classical hybrid neural network to demonstrate its effectiveness in detecting fraudulent transactions on credit card purchase data. This is a binary classification problem, a canonical problem in machine learning. The main reason for including the classical layers in this problem is that we want to encode the data with the help of classical layers.
The credit card transaction data is taken from Kaggle [23], a publicly available database. Each transaction is described by 28 features and flagged as either genuine or fraudulent. Only 0.172% of the 284,807 transactions are fraudulent.
First, we split the data into training and testing parts. In the training data set, we under-sampled the genuine transactions by selecting them randomly and ensuring that the genuine-to-fraudulent transaction ratio was 3:1. All the remaining genuine transactions were added to the test data set. This data preparation is explained in detail in Fig. 15.
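A sketch of this split using pandas is shown below. The file and column names follow the public Kaggle release, and the 50/50 split of the fraudulent transactions into F1 and F2 is our assumption for illustration.

```python
import pandas as pd

df = pd.read_csv("creditcard.csv")                 # Kaggle file name (assumed)
fraud = df[df["Class"] == 1]                       # "Class" column: 1 = fraudulent
genuine = df[df["Class"] == 0]

f1 = fraud.sample(frac=0.5, random_state=0)        # F1 (training fraud); split assumed 50/50
f2 = fraud.drop(f1.index)                          # F2 (testing fraud)
g1 = genuine.sample(n=3 * len(f1), random_state=0) # G1: 3:1 genuine-to-fraud ratio
g_rest = genuine.drop(g1.index)                    # G2 and G3 (testing genuine)

train = pd.concat([f1, g1]).sample(frac=1, random_state=0)   # shuffled training set
test = pd.concat([f2, g_rest])
```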
The network architecture is shown in Fig. 16. Four fully connected feed-forward classical layers are followed by five quantum layers with four modes (two are ancillary qumodes). The credit card data is fed into the first classical layer of size 10, followed by two hidden layers of the same size. The last classical layer of size 12 controls the gate parameters in the first quantum input layer. This layer marks the beginning of the quantum part of the neural network. And because we are letting the last classical layer control the gate parameters, the quantum layer is the encoding layer. We start with four vacuum qumodes. These layers contain two single-mode squeezing gates, \(S\), one interferometer gate, \(U\), two displacement gates, \(D\)
Figure 14: Plot of cost function value vs. noise in data determined by the parameter \(\epsilon\), obtained after learning on a QNN to fit the noisy sine function curve. Each data point in the scatter corresponds to the average of 5 independent runs. The training was done for 1000 steps with 100 data points, a cutoff dimension of 6, and using 6 quantum layers.
Figure 12: Cost function values vs. the number of layers, obtained after performing learning on our CV quantum neural network model for fitting the noisy sine function curve. Each data point in the scatter plot corresponds to the average of 5 independent runs. The training was done for 1000 steps with 100 data points (with \(\epsilon=0.1\)) and a cutoff dimension of 6.
Figure 13: The effect of increasing the error in the data on curve fitting of a noisy sine function. 100 data points were used for training and testing in 1000 steps with cutoff dimension 6 using 6 quantum layers.
Figure 15: Description of the data preparation for the binary classification problem. The fraudulent transactions are split into parts F1 and F2, and the genuine transactions into parts G1, G2, and G3. The sizes of G1 and G2 are equal, each three times the size of F1 and F2, reflecting the 3:1 undersampling. In the end, the training dataset consists of F1 and G1, and the testing dataset consists of F2, G2, and G3.
and two CX gates, which provide non-linearity through measurement on the ancillary qumodes. At the end of the encoding quantum layer, a photon number measurement is performed on the two ancillary qumodes. The two primary qumodes are allowed to advance when the detectors on the ancillary qumodes click. If they do not click, the main qumodes are fed back into the quantum circuit, as shown in Fig. 16. The feedback loops are repeated until the detectors on the ancillary qumodes click, thus implementing the desired non-linearity in the CV quantum neural network [10]. We repeat this process for four more hidden layers. Finally, we measure the photon number on the two output primary qumodes that emerge after the last quantum output layer. If the photon is found in the first qumode, the transaction is classified as genuine; if it is found in the second qumode, the transaction is classified as fraudulent.
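The repeat-until-success mechanism can be caricatured with a tiny Monte-Carlo loop, as below; the click probability is a placeholder, not a value from the paper.

```python
# A toy Monte-Carlo sketch of the repeat-until-success loop (p_click is a
# placeholder probability, not a value reported in the paper).
import numpy as np

rng = np.random.default_rng(0)

def repeat_until_success(p_click=0.3, max_rounds=1000):
    """Return how many passes through the layer occur before the ancilla
    detectors click and the primary qumodes are released."""
    for rounds in range(1, max_rounds + 1):
        if rng.random() < p_click:      # ancilla photon detected
            return rounds               # primary qumodes advance
    return max_rounds                   # give up (should be rare)

print(np.mean([repeat_until_success() for _ in range(10_000)]))  # ~ 1/p_click
```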
The training was performed using the Adam optimizer with a batch size of 24. We minimized the cost function defined by:
\[C=\Sigma_{i\in\mathrm{data}}\left(1-p_{i}\right)^{2}\, \tag{12}\]
where \(p_{i}\) is the probability of detecting a photon for input \(i\) in the correct mode. We used a cutoff dimension of 8 in each mode for 10,000 batches. Once the model was trained, we tested it by setting the classification threshold to the probability whose operating point is closest to the optimal point on the ROC curve; a transaction is classified as genuine when its probability exceeds this threshold.
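The following sketch illustrates the cost of Eq. (12) and one way to pick the threshold closest to the ideal ROC corner (0, 1); the use of scikit-learn and the distance-to-corner criterion are our assumptions.

```python
# A sketch of the cost in Eq. (12) and of selecting the threshold closest to
# the ideal ROC corner (0, 1); p_genuine is the per-transaction probability
# of a photon in the "genuine" qumode (an assumed interface).
import numpy as np
from sklearn.metrics import roc_curve

def cost(p_correct):
    return np.sum((1.0 - p_correct) ** 2)   # Eq. (12)

def best_threshold(y_true, p_genuine):
    fpr, tpr, thresholds = roc_curve(y_true, p_genuine)
    idx = np.argmin(fpr ** 2 + (1.0 - tpr) ** 2)  # distance to (0, 1)
    return thresholds[idx]
```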
The confusion matrix and Receiver Operating Characteristic (ROC) curve are shown in Fig. 17. The accuracy of the model calculated from the confusion matrix is 95.48%. The confusion matrix also shows that the model predicts genuine transactions correctly more than 95% of the time. The number in the second quadrant, representing False Negatives (FN), appears high; however, credit card companies can alert their users about such transactions, and by verifying them the FN count can be brought down. The essential quadrant to consider is the third one, representing False Positives (FP), i.e., fraudulent transactions wrongly predicted as genuine. Fortunately, this number is very low for the trained model. Moreover, the testing data set (used to plot the confusion matrix) contains only a tiny number of fraudulent transactions, and nearly all of them are identified correctly, as the percentage of fraudulent transactions in the testing data set matches the fourth quadrant. The circular dot on the ROC curve marks the ideal point, and the triangle is the point closest to the ideal among the chosen thresholds. The area under the curve (AUC) is 0.90, close to the ideal value of 1; the AUC is a good measure of the separability of the classified data.
It should be pointed out that the number of features of the credit card transaction data equals the number of parameters in the quantum circuit used for data encoding. We therefore investigated the role of the classical layers in this hybrid quantum-classical architecture, keeping the number of classical layers fixed at 2 and increasing the number of quantum layers in increments of 2. The results, reported as accuracies computed from the confusion matrices, are plotted against the layer configurations in Fig. 18. There is an optimal hybrid-layer configuration that gives the best results: with 2 classical and 2 quantum layers the accuracy is around 74%, indicating too few layers for learning, while increasing to 8 quantum layers slightly lowers the accuracy again, indicating overtraining for this simple binary classification task. Hyperparameters of course play a major role in training; these observations hold for the set of hyperparameters we chose.
### Multi-label classification
Extending the results for binary classification, we developed a CV quantum-classical hybrid NN to classify MNIST handwritten digits [24] into their respective classes. The MNIST dataset comprises 60,000 training images and 10,000 testing images, each normalized to \(28\times 28\) pixels in size and grayscale in color. Each data point is labeled with the corresponding digit (0-9). Our current hardware limitations restricted our model to training on and classifying up to 4 classes (0-3).
The network architecture we used is illustrated in Fig. 19, with details of the encoding layer in Fig. 20(a) and the quantum layers in Fig. 20(b). The network consists of fully connected feed-forward classical layers that take the input data and feed into an encoding quantum layer, followed by regular quantum layers that can be repeated as needed. Each quantum layer comprises several primary qumodes, each representing a class of the MNIST dataset, as well as ancillary qumodes that implement the non-linearity.
During training, we calculated the probability, or accuracy, of classification from the overlap of the final state of the circuit, which comprises only the primary qumodes, with the one-hot encoded ground-truth value of the training sample. All probabilities corresponding to non-zero Fock numbers of the correct class counted towards the accuracy, indicating a "click" or "non-click" on a detector. The loss was then calculated using Eq. (12). For training, we used the Adam optimizer with a batch size of 16 and a decaying learning rate, starting at 0.001 and decreasing by a factor of 0.9 every 5,000 steps. During validation or testing runs, we interpreted the probabilities corresponding to each primary qumode as logits; the predicted class is the one with the highest logit.
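A minimal sketch of this optimizer setup and of the logit-based prediction is shown below; TensorFlow is an assumed framework, and the staircase decay mirrors the 0.9-per-5,000-steps schedule described above.

```python
# A sketch of the training schedule and test-time prediction (TensorFlow is
# an assumption; any framework with exponential decay would do).
import numpy as np
import tensorflow as tf

lr = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,
    decay_steps=5_000,
    decay_rate=0.9,
    staircase=True,   # multiply the rate by 0.9 every 5,000 steps
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr)

def predict(class_probs: np.ndarray) -> np.ndarray:
    """Treat per-qumode probabilities as logits and pick the largest."""
    return np.argmax(class_probs, axis=-1)
```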
To assess the versatility of our model, we performed multiple experiments, exploring different configurations of classical and quantum layers in the hybrid model. We considered a range of ratios, starting from 1 classical layer and 5 quantum layers (1:5) up to 5 classical layers and 1 quantum layer (5:1). In total, we examined 8 different layer ratios, including ratios such as 2:2, 2:6, and 2:8. The classical layers comprised 128 nodes each, except for the final layer (or only layer in the case of a single classical
Figure 16: CV hybrid NN for fraudulent transaction detection in credit card data. The parameters of each gate in the encoding layer are obtained from the values of the last classical layer.
Figure 17: Results of fraud detection in credit card transactions. The CV hybrid NN had 2 classical and 4 quantum hidden layers. The accuracy from the confusion matrix is 95.48%, and the area under the ROC curve is 0.9.
Figure 18: Plot showing the effect of changing the number of classical and quantum layers in a hybrid CV NN on the accuracy of the results for binary classification.
Figure 19: CV hybrid NN architecture used for MNIST image classification.
layer model), which had the same number of nodes as required by the quantum encoding layer. The latter is determined by the formula \(7p-2\), where \(p\) denotes the number of classes or primary qumodes.
The loss function values over the epochs for select ratios of classical and quantum layers are displayed in Fig. 21. All of our models converged within 100 epochs and trained and tested successfully, achieving testing accuracies of \(96.47\%\pm 0.86\%\). Interestingly, we did not observe a significant impact from changing the number of classical and quantum layers. Nevertheless, our hybrid networks demonstrated high accuracy on the 4-class MNIST classification problem.
## IV Conclusion
We proposed CV QNN models that can be realized experimentally. We introduced a quantum circuit element that involves an ancillary qumode with a controlled-X (CX) gate acting on the primary qumode. For a good success rate, it relies on repeat-until-success measurements of the photon number of the ancillary qumodes [10]. It offers a simple and feasible way to introduce non-linearity on current photonic quantum hardware, given the high complexity of implementing non-Gaussian operators experimentally. Our study demonstrated that this experimentally realizable circuit element can efficiently solve many machine-learning and quantum-computation problems.
For instance, we created a CV quantum circuit that can prepare a single-photon state with 99.9% fidelity, a cat state with 99.8% fidelity, and a GKP state with 93.9% fidelity. In [25], state preparation was performed using CV QNNs with the Kerr gate for non-linearity, making the entire process experimentally hard to achieve. In contrast, our CV QNN can prepare the states in a way that can be realized experimentally. We could not achieve higher fidelity for the GKP state because of
Figure 21: Training and testing loss plots for select ratios of classical and quantum layers, the first (second) number indicating the number of classical (quantum) layers.
Figure 20: MNIST hybrid NN. (a) Encoding layer of MNIST hybrid neural network that embeds the output of the classical layers into the quantum circuit. (b) Detailed architecture of quantum layer.
the high computational requirements: preparing such a complex state would need a higher cutoff dimension with more layers and optimization steps, leading to a very large number of training parameters. In principle, however, it can be done with sufficient computational resources. GKP states could be a key ingredient in creating a scalable photonic fault-tolerant quantum computer [26].
We also developed CV QNN models capable of fitting functions to noisy data sets, and thoroughly analyzed the effects of the number of layers and of noise on the curve fitting. When we increased the noise fivefold, the accuracy only went down by 24%, and our model could still learn and accurately reproduce the curve's shape. This insight is valuable, as it shows the model performs well even with noisy data sets. We encountered some challenges when approximating more complex functions with our hybrid curve-fitting model: such functions require a higher cutoff dimension and more layers to achieve a low cost, which in turn demands additional computational resources. Despite these issues, we successfully performed curve fitting on various functions, such as a decaying sine. It would be interesting to further study how quickly the QNN learns compared to its classical counterpart, as a function of the target complexity and the number of available training points. For the sine function we studied, the quantum and classical circuits reach the same accuracy after training, although some work has shown that with few training data quantum circuits can learn faster [27]. While they do not automatically provide an advantage over classical machine learning, it has been argued that quantum circuits can outperform their classical counterparts under certain assumptions. The importance of data in quantum machine learning has also been studied; see, e.g., Ref. [28], which discusses how quantum advantage could be achieved depending on the chosen data and other machine learning techniques.
The binary classification model achieved high classification accuracy, with an AUC score of 0.9 and an accuracy above 95% on a highly unbalanced data set. Similar work has studied fraud detection in credit card transactions using a Quantum Support Vector Machine [29], employing a quantum-classical method to select the best features for training and emphasizing how quantum machine learning can improve feature selection and model accuracy, complementing the classical approach in finance.
Our MNIST classification model can likewise recognize handwritten digits with up to 97% accuracy. Since we did not observe a significant change in the results as we varied the mix of classical and quantum layers, future research could investigate the efficacy of quantum layers in hybrid neural networks, and in quantum neural networks in general. Quantum computation has the potential to provide exponential speedup over classical computation for certain problems, and quantum layers can exploit this speedup to perform computations more efficiently than classical layers on tasks that benefit from quantum algorithms. Quantum layers can also leverage superposition and entanglement to process and represent information in ways that are not possible with classical layers, and can serve as non-linear feature mappings that are challenging for classical layers: quantum feature maps can transform input data into higher dimensions for richer representations, later utilized by classical neural networks, potentially leading to more accurate classification. Moreover, even though CV QGANs (quantum generative adversarial networks) have been studied previously (see, e.g., [9]), it would be interesting to explore the performance of CV QGANs built on our experimentally feasible setup. The QGAN prototype proposed in [9] requires non-Gaussian gates within the quantum layers of both the quantum generator and the quantum discriminator; since our quantum layer requires only Gaussian gates, we can simulate the effectiveness of an experimentally viable QGAN.
Another possible future direction is comparing continuous- and discrete-variable (DV) quantum computing. A similar study was done in [30], where the authors compared the expressibility of classical and quantum neural networks by calculating effective dimensions for different cases. They showed that quantum neural networks have higher effective dimensions and train faster than their classical counterparts, and they used the Fisher information spectrum to demonstrate the resilience of quantum neural networks against barren plateaus and the problem of vanishing gradients. It would be interesting to perform a similar study with CV QNNs. One such study examined barren plateaus in bosonic variational circuits [31], using an energy-dependent circuit to prepare Gaussian and number states; extending this to other problems in quantum machine learning by calculating effective dimensions for different models would be interesting. CV QNNs have shown some advantages over their DV counterparts in terms of required resources, so a study of the performance of CV vs. DV QNNs would be of interest.
In conclusion, our proposed CV quantum algorithm obtained promising results on a wide range of machine-learning problems. The nonlinear quantum circuit element we introduced, based on an earlier proposal for universal CV quantum computing [10], offers an experimentally feasible way to introduce non-linearity on current photonic quantum hardware, avoiding the high complexity of experimentally realizing non-Gaussian operators.
###### Acknowledgements.
Research funded by the National Science Foundation under award DGE-2152168. A portion of the computation for this work was performed on the University of
Tennessee Infrastructure for Scientific Applications and Advanced Computing (ISAAC) computational resources. KYA was supported by MITRE's Quantum Horizon Program.1
Footnote 1: ©2023 The MITRE Corporation. ALL RIGHTS RESERVED. Approved for public release. Distribution unlimited PR_22-04067-3.
|
2304.09221 | Convergence of stochastic gradient descent under a local Lojasiewicz
condition for deep neural networks | We study the convergence of stochastic gradient descent (SGD) for non-convex
objective functions. We establish the local convergence with positive
probability under the local Łojasiewicz condition introduced by Chatterjee
in \cite{chatterjee2022convergence} and an additional local structural
assumption of the loss function landscape. A key component of our proof is to
ensure that the whole trajectories of SGD stay inside the local region with a
positive probability. We also provide examples of neural networks with finite
widths such that our assumptions hold. | Jing An, Jianfeng Lu | 2023-04-18T18:20:52Z | http://arxiv.org/abs/2304.09221v2 | Convergence of stochastic gradient descent under a local Łojasiewicz condition for deep neural networks
###### Abstract
We extend the global convergence result of Chatterjee [7] by considering stochastic gradient descent (SGD) for non-convex objective functions. With minimal additional assumptions that can be realized by finitely wide neural networks, we prove that if we initialize inside a local region where the Łojasiewicz condition holds, then, with a positive probability, the stochastic gradient iterates converge to a global minimum inside this region. A key component of our proof is to ensure that the whole trajectories of SGD stay inside the local region with a positive probability. To that end, we assume the SGD noise scales with the objective function, which is called machine learning noise and is achievable in many real examples. Furthermore, we provide a negative argument to show why using the boundedness of the noise with Robbins-Monro type step sizes is not enough to keep the key component valid.
## 1 Introduction
The stochastic gradient descent (SGD) and its variants are widely applied in machine learning problems due to its computational efficiency and generalization performance. A typical empirical loss function for the training writes as
\[F(\theta)=\mathbb{E}_{\xi\sim\mathcal{D}}[f(\theta,\xi)], \tag{1.1}\]
where \(\xi\) denotes the random sampling from the training data set following the distribution \(\mathcal{D}\). A standard SGD iteration to train weight variables \(\theta\in\mathbb{R}^{n}\) is of the form
\[\theta_{k+1}=\theta_{k}-\eta_{k}\nabla f(\theta_{k},\xi_{k}). \tag{1.2}\]
Here, the step size \(\eta_{k}\) can be either a fixed constant or iteration-adapted, and \(\nabla f(\theta_{k},\xi_{k})\) is an unbiased stochastic estimate of the gradient \(\nabla F(\theta_{k})\), induced by the random sampling.
The convergence of SGD for convex objective functions has been well established; we give an incomplete list of works [6, 26, 5, 25, 21, 16] for reference. In practice, however, it is more relevant to study SGD for non-convex optimization problems, because training tasks usually involve complex neural networks and stochastic gradient algorithms perform particularly well in non-convex optimization [10, 17, 28]. Compared with convex optimization, the behavior of stochastic gradient algorithms over non-convex landscapes is unfortunately much less understood. Due to the noise, the trajectory of stochastic iterates is more difficult to track. It is thus natural to investigate whether stochastic gradient algorithms converge through training, and to which minimum they converge in the non-convex setting. These questions are, however, notably challenging in both qualitative and quantitative senses. Most available results are limited. For example, works
such as [1, 2, 14, 34] provide convergence guarantees to a local minimum by quantifying the vanishing of \(\nabla F\), but give little information on which critical points SGD converges to. Many convergence results rely on global assumptions on the objective function, including the global Polyak-Łojasiewicz condition [22, 23], global quasar-convexity [15], or assumptions of weak convexity and global boundedness of the iterates [12, 33]. Such global assumptions are often unrealistic; at the very least, they cannot cover general multi-modal landscapes.
In connection with deep neural network structures, most convergence results are established in the over-parametrized regime, meaning that the number of neurons grows at least polynomially with respect to the sample size. For example, works including [8, 18, 32, 36] consider wide neural networks, which essentially linearize the problem through their extremely large widths. In such settings, Polyak-Łojasiewicz type conditions are shown to hold, yielding convergence at linear rates [3, 23]. However, convergence results are very limited for neural networks with finite widths and depths; we refer to [7, 19] for recent progress on the convergence of gradient descent in this scenario.
In particular, [7] constructs feedforward neural networks with smooth and strictly increasing activation functions, with the input dimension greater than or equal to the number of data points. Such neural networks satisfy a local version of the Łojasiewicz inequality, and the convergence of gradient descent to a global minimum under appropriate initialization is fully analyzed. In this work, our goal is to extend the global convergence result in [7] to a stochastic version, with minimal additional assumptions on the loss function \(F(\theta)\).
#### Notation
Throughout the note, \(|\cdot|\) denotes the Euclidean norm, \(B(\theta,r)\) denotes an Euclidean ball with radius \(r\) centered at \(\theta\). Unless otherwise specified, the expectation \(\mathbb{E}=\mathbb{E}_{\xi\sim\mathcal{D}}\), and the gradients \(\nabla=\nabla_{\theta}\).
## 2 Preliminaries
We name the major assumption used in [7] the local Łojasiewicz condition. Let us first recall the definition as in [7]: given the dimension \(d\), let \(F:\mathbb{R}^{d}\to[0,\infty)\) be a non-negative loss function. For any \(\theta_{0}\in\mathbb{R}^{d}\) and \(r>0\), we define
\[\alpha(\theta_{0},r):=\inf_{B(\theta_{0},r),F(\theta)\neq 0}\frac{|\nabla F( \theta)|^{2}}{F(\theta)}. \tag{2.1}\]
If \(F(\theta)=0\) for all \(\theta\in B(\theta_{0},r)\), then we set \(\alpha(\theta_{0},r)=\infty\). With that, the main assumption is
**Assumption 1** (local Łojasiewicz condition).: _We assume that for some \(r>0\),_
\[4F(\theta_{0})<r^{2}\alpha(\theta_{0},r). \tag{2.2}\]
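For intuition, \(\alpha(\theta_{0},r)\) in (2.1) can be estimated numerically by sampling the ball, as in the following sketch; the functions F and grad_F are assumed to be supplied by the user.

```python
# A Monte-Carlo sketch that estimates alpha(theta0, r) in (2.1) by sampling
# points uniformly in the ball B(theta0, r). F and grad_F are assumed given.
import numpy as np

def estimate_alpha(F, grad_F, theta0, r, n_samples=10_000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    d = theta0.shape[0]
    best = np.inf                       # inf matches the F == 0 convention
    for _ in range(n_samples):
        u = rng.normal(size=d)
        u *= r * rng.uniform() ** (1.0 / d) / np.linalg.norm(u)  # uniform in ball
        theta = theta0 + u
        f = F(theta)
        if f > 0:
            best = min(best, np.linalg.norm(grad_F(theta)) ** 2 / f)
    return best
```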
**Assumption 2** (\(C_{L}\)-smoothness).: _Furthermore, we assume that for a compact set \(\mathcal{K}\), there exists a constant \(C_{L}>0\) such that_
\[|\nabla F(\theta_{1})-\nabla F(\theta_{2})|\leq C_{L}|\theta_{1}-\theta_{2}| \tag{2.3}\]
_for any \(\theta_{1},\theta_{2}\in\mathcal{K}\)._
Based on that, we can obtain local growth control of \(|\nabla F|\). The following result is a local version of [35, Lemma B.1]. The proof is the same and thus omitted, because the only modification needed for the localization is assuming the local boundedness of \(|\nabla F|\).
**Lemma 2.1**.: _Suppose that \(F:\mathbb{R}^{d}\to\mathbb{R}\) is a non-negative function. Assume \(\nabla F\) is Lipschitz continuous in a compact set \(\mathcal{K}\) with the constant \(C_{L}\), and there exists \(\bar{C}>0\) such that \(\max_{\theta\in\mathcal{K}}|\nabla F(\theta)|=\bar{C}\). Then there exists a compact set \(\tilde{\mathcal{K}}\subset\mathcal{K}\), where \(\text{dist}(\partial\mathcal{K},\partial\tilde{\mathcal{K}})\) depends on \(C_{L}\) and \(\bar{C}\), such that for any \(\theta\in\tilde{\mathcal{K}}\),_
\[|\nabla F(\theta)|^{2}\leq 2C_{L}F(\theta). \tag{2.4}\]
Throughout the note, we assume that \(B(\theta_{0},r)\subset\tilde{\mathcal{K}}\) for a given radius \(r>0\).
### Additional assumptions
Besides the local Łojasiewicz condition, we need an additional structural assumption on \(F\). Given a large radius \(R\) such that (2.2) is satisfied for all \(r\leq R\), it is natural to assume that \(F(\theta)\) is bounded away from zero near the domain boundary.
**Assumption 3**.: _For some constant \(M_{0}>0\),_
\[F(\theta)\geq M_{0},\quad\text{for all }\theta\in B(\theta_{0},R)\setminus B( \theta_{0},R-1). \tag{2.5}\]
Here we set the annulus width to \(1\) for simplicity. Assumption (2.5) can cover more general \(F\) by applying geometric transformations to \(F\), or linear transformations to the dataset, to change the annulus width.
Assumption (2.5) ensures that the set of global minimizers with \(F(\theta^{*})=0\) stays away from the boundary \(\partial B(\theta_{0},R)\). In [7, Theorem 2.1], Chatterjee constructed a feedforward neural network with finite width and depth that satisfies the local Łojasiewicz condition (2.2). We verify in the Appendix that such a neural network satisfies (2.5) as well.
### Stochastic gradients
To analyze the convergence of stochastic gradient algorithms, it is unavoidable to assume some structure on the gradient noise. We write the stochastic gradient in the decomposed form
\[\nabla f(\theta,\xi)=\nabla F(\theta)+Z(\theta,\xi), \tag{2.6}\]
where the noise term \(Z(\theta,\xi)\) is unbiased
\[\mathbb{E}[Z(\theta,\xi)]=0. \tag{2.7}\]
In most papers, two kinds of structural assumptions are usually imposed on the noise; we review them briefly.
* If one plans to analyze SGD with adaptive step sizes \(\eta_{k}\), i.e., in the Robbins-Monro flavor [31] \[\sum_{k=1}^{\infty}\eta_{k}=\infty,\quad\sum_{k=1}^{\infty}\eta_{k}^{1+m/2}< \infty,\text{ with }m\geq 2,\] (2.8) then it is typical to assume the bounded moments of stochastic gradients, that is, for some constant \(c>0\) and \(m\geq 2\), \[\mathbb{E}[|Z(\theta,\xi)|^{m}]\leq c^{m}<\infty.\] (2.9) The bounded variance (\(m=2\)) assumption is standard in stochastic optimization, and we refer to classical books, lecture notes and papers [4, 27, 29, 20] on this setup. The noise boundedness and adaptive step sizes have been a popular combination to establish convergence, for example,
[24] provides an SGD convergence result based on a local convexity assumption, and [13] gives the convergence of SGD to a local manifold of minima by estimating \(\mathbb{P}(F(\theta_{k})-\inf_{\theta\in\mathbb{R}^{d}}F(\theta)\geq\epsilon)\). We also mention works such as [30] that introduce the stochastic variance reduction method, motivated by improving the convergence rate of \(\min_{0\leq k\leq N-1}\mathbb{E}[|\nabla F(\theta_{k})|^{2}]\).
* On the other hand, if one just wants to establish global convergence for a fixed step size \(\eta\), additional assumptions relating the stochastic gradient to the loss function can help. One option, according to [35], is that **Assumption 4**.: _For some constant \(\sigma>0\),_ \[\nabla f(\theta,\xi)=\nabla F(\theta)+\sqrt{\sigma F(\theta)}Z_{\theta,\xi},\] (2.10) _with_ \[\mathbb{E}[Z_{\theta,\xi}]=0,\quad\mathbb{E}[|Z_{\theta,\xi}|^{2}]\leq 1.\] (2.11) Here \(\sqrt{\sigma F(\theta)}Z_{\theta,\xi}\) is named the machine learning noise, which scales with the function value; see the toy sketch below. We refer to [35, Section 2.5] for many types of machine learning problems satisfying the assumption (2.10)-(2.11). A more relaxed and general version, considered in [11], is that there exists a monotonically increasing function \(\varrho:\mathbb{R}_{+}\to\mathbb{R}_{+}\) so that \[\mathbb{P}\big{(}|\nabla f(\theta,\xi)-\nabla F(\theta)|^{2}\geq t\varrho(F( \theta))\big{)}\leq e^{-t}.\] (2.12)
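As a toy illustration of this noise model, the sketch below runs SGD with noise scaled by \(\sqrt{\sigma F(\theta)}\) on the quadratic \(F(\theta)=|\theta|^{2}/2\), for which \(|\nabla F|^{2}/F=2\); all constants are illustrative choices, not values from the analysis.

```python
# A toy simulation of SGD under the machine-learning-noise model (2.10)-(2.11)
# on F(theta) = |theta|^2 / 2, where the local Lojasiewicz condition holds
# with alpha = 2. Constants (sigma, eta, d) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
sigma, eta, d = 1.0, 0.1, 5
theta = rng.normal(size=d)

F = lambda t: 0.5 * np.dot(t, t)
grad_F = lambda t: t

for k in range(200):
    Z = rng.normal(size=d) / np.sqrt(d)          # E[|Z|^2] <= 1
    g = grad_F(theta) + np.sqrt(sigma * F(theta)) * Z
    theta = theta - eta * g

print(F(theta))  # decays geometrically, consistent with (3.9)
```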
In this note, we present the global convergence of SGD under the second type of assumption (2.10)-(2.11). Moreover, we discuss why convergence under the assumptions (2.8)-(2.9) can fail under the local Łojasiewicz condition.
### Comparison to previous convergence results
Our motivation is in the same vein as [7]: we assume the local Lajasiewicz condition for the initialization so that the convergence result may apply to feedforward neural networks of bounded width and depth.
Several convergence results for SGD in non-convex optimization have been developed in recent years, and we list a few here and highlight their differences:
* [24]: Convergence to a local minimum \(\theta^{*}\) that is Hurwicz regular, i.e., \(\nabla^{2}F(\theta^{*})\succ 0\). The key difference is that they assume there exists \(a>0\) such that \(\langle\nabla F(\theta),\theta-\theta^{*}\rangle\geq a|\theta-\theta^{*}|^{2}\) for all \(\theta\) in a convex compact neighborhood of \(\theta^{*}\).
* [35]: It assumes that the Łojasiewicz condition holds on an \(\epsilon\)-sublevel set of \(F\). As a consequence, a convergence rate for \(F(\theta)\to 0\) is obtained, but the proof cannot track where the global minimizer \(\theta^{*}\) is located.
Similar to the result in [7] for gradient descent, we will show that, under Assumptions 1-4 as in the setup above, with some computable probability the stochastic iterates converge to a global minimum almost surely. We choose the initialization in a ball where (2.2) holds, and, as a consequence, the global minimizer also lies in the same ball without assuming its existence a priori, similar to [7]. Compared to [35], our result is stated in Euclidean space, as we can track the stochastic trajectories during training.
## 3 Main result
Given the assumptions described in the previous section, we present the global convergence of SGD with a quantitative rate and a step-size bound. We point out that the assumptions (2.10)-(2.11), under which the SGD noise is proportional to the objective function, play an important role here.
**Theorem 3.1**.: _We choose an initialization \(\theta_{0}\) with a radius \(R>1\) such that (2.2) holds. Let \(F\) be a loss function whose gradient is Lipschitz continuous (2.3) and such that (2.5) holds for \(B(\theta_{0},R)\). Then for every \(\delta>0\), there exists \(\varepsilon>0\) such that if \(F(\theta_{0})\leq\varepsilon\), with probability at least \(1-\delta\), we have \(\theta_{k}\in B(\theta_{0},R-1)\) for all \(k\in\mathbb{N}\)._
_Conditioned on the event that \(\theta_{k}\in B(\theta_{0},R-1)\) for all \(k\in\mathbb{N}\), we have_
\[\lim_{k\to\infty}\beta^{k}F(\theta_{k})=0 \tag{3.1}\]
_almost surely for every \(\beta\in[1,\rho^{-1})\), where_
\[\rho=1-\eta\alpha+\frac{\eta^{2}}{2}C_{L}(2C_{L}+\sigma)\in(0,1), \tag{3.2}\]
_if the step size \(\eta\) satisfies \(0<\eta\leq\min\{\frac{1}{\alpha},\frac{\alpha}{4C_{L}(2C_{L}+\sigma)}\}\). Moreover, conditioned on the same event, \(\theta_{k}\) converges almost surely to a point \(\theta_{*}\) where \(F(\theta_{*})=0\)._
Proof.: We define the event where all the iterates up to \(k\)-th step stay in a ball \(B(\theta_{0},r)\):
\[E_{k}(r):=\bigcap_{j=0}^{k}\{|\theta_{j}-\theta_{0}|\leq r\}, \tag{3.3}\]
for some \(r\in(0,R]\) to be determined later. We denote by \(\mathcal{F}_{k}\) the filtration generated by \(\xi_{1},\cdots,\xi_{k-1}\). Taking the interpolation \(\theta_{k+s}=\theta_{k}-s\eta\nabla f(\theta_{k},\xi_{k})\) for \(s\in[0,1]\), we have that
\[\begin{split} F(\theta_{k+1})&-F(\theta_{k})=\int_ {0}^{1}\frac{d}{ds}F\big{(}\theta_{k}-s\eta\nabla f(\theta_{k},\xi_{k})\big{)} ds\\ &=-\eta\int_{0}^{1}\nabla F(\theta_{k+s})\cdot\nabla f(\theta_{k},\xi_{k})ds\\ &=-\eta\int_{0}^{1}\nabla F(\theta_{k})\cdot\nabla f(\theta_{k},\xi_{k})ds-\eta\int_{0}^{1}(\nabla F(\theta_{k+s})-\nabla F(\theta_{k})) \cdot\nabla f(\theta_{k},\xi_{k})ds\\ &\leq-\eta\nabla F(\theta_{k})\cdot\nabla f(\theta_{k},\xi_{k})+ \eta C_{L}\int_{0}^{1}|\theta_{k+s}-\theta_{k}||\nabla f(\theta_{k},\xi_{k})| ds\\ &=-\eta\nabla F(\theta_{k})\cdot\nabla f(\theta_{k},\xi_{k})+ \eta^{2}C_{L}\int_{0}^{1}s|\nabla f(\theta_{k},\xi_{k})|^{2}ds\\ &=-\eta\nabla F(\theta_{k})\cdot\nabla f(\theta_{k},\xi_{k})+ \frac{\eta^{2}}{2}C_{L}|\nabla f(\theta_{k},\xi_{k})|^{2},\end{split} \tag{3.4}\]
where we applied the smoothness assumption (2.3) in the middle inequality. Rearranging terms and multiplying both sides by the indicator function, we have
\[F(\theta_{k+1})\mathbb{1}_{E_{k+1}(r)}\leq\Big{(}F(\theta_{k})-\eta\nabla F( \theta_{k})\cdot\nabla f(\theta_{k},\xi_{k})+\frac{\eta^{2}}{2}C_{L}|\nabla f (\theta_{k},\xi_{k})|^{2}\Big{)}\mathbb{1}_{E_{k+1}(r)}. \tag{3.5}\]
Now we want to replace \(\mathbbm{1}_{E_{k+1}(r)}\) by \(\mathbbm{1}_{E_{k}(r)}\) on the right side. Note that \(\mathbbm{1}_{E_{k}(r)}\geq\mathbbm{1}_{E_{k+1}(r)}\), and moreover,
\[F(\theta_{k})-\eta\nabla F(\theta_{k})\cdot\nabla f(\theta_{k},\xi_{k})+\frac{ \eta^{2}}{2}C_{L}|\nabla f(\theta_{k},\xi_{k})|^{2}\geq 0, \tag{3.6}\]
since the discriminant of this quadratic equation is \(|\nabla F(\theta_{k})|^{2}-2C_{L}F(\theta_{k})\leq 0\) by (2.4). We thus get
\[F(\theta_{k+1})\mathbbm{1}_{E_{k+1}(r)}\leq\Big{(}F(\theta_{k})-\eta\nabla F( \theta_{k})\cdot\nabla f(\theta_{k},\xi_{k})+\frac{\eta^{2}}{2}C_{L}|\nabla f( \theta_{k},\xi_{k})|^{2}\Big{)}\mathbbm{1}_{E_{k}(r)}. \tag{3.7}\]
We replace \(\nabla f\) above by the decomposition (2.10), that is, \(\nabla f(\theta_{k},\xi_{k})=\nabla F(\theta_{k})+\sqrt{\sigma F(\theta_{k})}Z_{\theta_{k},\xi_{k}}\), and use the bound (2.4) as well as the local Łojasiewicz condition (2.2). Taking the expectation, we have that
\[\mathbb{E}[F(\theta_{k+1})\mathbbm{1}_{E_{k+1}(r)}|\mathcal{F}_{k}]\leq\big{(} 1-\eta\alpha+\frac{\eta^{2}}{2}C_{L}(2C_{L}+\sigma)\big{)}F(\theta_{k}) \mathbbm{1}_{E_{k}(r)}. \tag{3.8}\]
We may choose a small step-size \(0<\eta^{*}\leq\min\{\frac{1}{\alpha},\frac{\alpha}{4C_{L}(2C_{L}+\sigma)}\}\), so that
\[\rho=1-\eta^{*}\alpha+\frac{\eta^{*2}}{2}C_{L}(2C_{L}+\sigma)\leq 1-\frac{ \alpha\eta^{*}}{2}\in(0,1).\]
With this, we obtain a contraction
\[\mathbb{E}[F(\theta_{k+1})\mathbbm{1}_{E_{k+1}(r)}|\theta_{0}]\leq\rho\mathbb{ E}[F(\theta_{k})\mathbbm{1}_{E_{k}(r)}|\theta_{0}]\leq\rho^{k+1}F(\theta_{0}). \tag{3.9}\]
Based on the contraction of \(F\), we can nicely control the Euclidean distance between \(\theta_{k}\) and \(\theta_{0}\). Note that
\[\mathbb{E}[|(\theta_{k+1}-\theta_{k})\mathbbm{1}_{E_{k}(r)}|] =\eta^{*}\mathbb{E}[|\nabla f(\theta_{k},\xi_{k})\mathbbm{1}_{E_{k}(r)}|]=\eta^{*}\mathbb{E}[|(\nabla F(\theta_{k})+\sqrt{\sigma F(\theta_{k})}Z_{\theta_{k},\xi_{k}})\mathbbm{1}_{E_{k}(r)}|]\] \[\leq\eta^{*}\sqrt{2C_{L}}\sqrt{\mathbb{E}[F(\theta_{k})\mathbbm{1}_{E_{k}(r)}]}+\eta^{*}\sqrt{\mathbb{E}[\sigma F(\theta_{k})\mathbbm{1}_{E_{k}(r)}]}\sqrt{\mathbb{E}[|Z_{\theta_{k},\xi_{k}}|^{2}]} \tag{3.10}\] \[\leq(\sqrt{2C_{L}}+\sqrt{\sigma})\eta^{*}\rho^{k/2}\sqrt{F(\theta_{0})},\]
where we apply the Cauchy-Schwarz inequality, the growth bound (2.4), and (2.10)-(2.11). The total distance is thus finite,
\[\mathbb{E}\Big{[}\sum_{k=0}^{\infty}|(\theta_{k+1}-\theta_{k})\mathbbm{1}_{E_{k}(r)}|\Big{]}\leq\frac{(\sqrt{2C_{L}}+\sqrt{\sigma})\eta^{*}}{1-\sqrt{\rho}}\sqrt{F(\theta_{0})}<\infty. \tag{3.11}\]
By the Markov's inequality, for any \(\tilde{\delta}\in(0,1)\), we have the length of trajectory bounded by
\[\sum_{k=0}^{\infty}|(\theta_{k+1}-\theta_{k})\mathbbm{1}_{E_{k}(r)}|\leq\frac{(\sqrt{2C_{L}}+\sqrt{\sigma})\eta^{*}}{(1-\sqrt{\rho})\tilde{\delta}}\sqrt{F(\theta_{0})}, \tag{3.12}\]
with probability at least \(1-\tilde{\delta}\). This means that as long as the local Łojasiewicz condition holds, the stochastic iterates converge to a point with high probability. What remains is to show that there is a positive probability for
\[E_{\infty}(r):=\bigcap_{j=0}^{\infty}\{|\theta_{j}-\theta_{0}|\leq r\} \tag{3.13}\]
to occur. In fact, we can set \(r=R-1\) and argue that there exists a positive probability that all stochastic iterates are trapped in the ball \(B(\theta_{0},R-1)\). Let us consider the following two cases:
* Utilizing (3.10), the chance for the next iterate to escape the ball \(B(\theta_{0},R)\) is \[\begin{split}\mathbb{P}\big{(}E_{k}(R-1)&\text{ occurs but }\theta_{k+1}\in B^{c}(\theta_{0},R)\big{)}\leq\mathbb{P}(|\theta_{k+1}-\theta_{k}| \mathbbm{1}_{E_{k}(R-1)}\geq 1)\\ &\leq\mathbb{E}[|\theta_{k+1}-\theta_{k}|\mathbbm{1}_{E_{k}(R-1)} ]\leq(\sqrt{2C_{L}}+\sqrt{\sigma})\eta^{*}\rho^{k/2}\sqrt{F(\theta_{0})}.\end{split}\] (3.14)
* Utilizing (2.5), the chance for the next iterate to enter the annulus \(B(\theta_{0},R)\setminus B(\theta_{0},R-1)\) is \[\begin{split}\mathbb{P}\big{(}E_{k}(R-1)&\text{ occurs but }\theta_{k+1}\in B(\theta_{0},R)\setminus B(\theta_{0},R-1)\big{)}\\ &\leq\mathbb{P}(F(\theta_{k+1})\mathbbm{1}_{E_{k+1}(R)}\geq M_{0 })\leq\frac{\mathbb{E}[F(\theta_{k+1})\mathbbm{1}_{E_{k+1}(R)}]}{M_{0}}\leq \frac{\rho^{k+1}F(\theta_{0})}{M_{0}}.\end{split}\] (3.15)
We are ready to conclude. Note that the events defined in (3.3) with radius \(r=R-1\) are monotonically decreasing,
\[E_{k+1}(R-1)\subseteq E_{k}(R-1). \tag{3.16}\]
Let us define
\[\tilde{E}_{k+1}(R-1):=E_{k}(R-1)\setminus E_{k+1}(R-1), \tag{3.17}\]
whose probability is indeed the summation of (3.14) and (3.15)
\[\mathbb{P}(\tilde{E}_{k+1}(R-1))\leq(\sqrt{2C_{L}}+\sqrt{\sigma})\eta^{*}\rho^{k/2}\sqrt{F(\theta_{0})}+\frac{\rho^{k+1}F(\theta_{0})}{M_{0}}. \tag{3.18}\]
The probability of the complementary event can be bounded,
\[\mathbb{P}(E_{k}^{c}(R-1))=\sum_{i=1}^{k}\mathbb{P}(\tilde{E}_{i}(R-1))<(\sqrt{2C_{L}}+\sqrt{\sigma})\frac{\eta^{*}\sqrt{\rho}}{1-\sqrt{\rho}}\sqrt{F(\theta_{0})}+\frac{\rho^{2}}{M_{0}(1-\rho)}F(\theta_{0}) \tag{3.19}\]
for any \(k\in\mathbb{N}\). Thus for every \(0<\delta<1\), there exists \(\varepsilon>0\) depending on \(C_{L},\sigma,\eta^{*},\rho\) and \(M_{0}\), such that if \(F(\theta_{0})\leq\varepsilon\), then
\[\mathbb{P}(E_{k}(R-1))\geq 1-\delta \tag{3.20}\]
for all \(k\in\mathbb{N}\). Taking \(k\to\infty\), we then have \(\mathbb{P}(E_{\infty}(R-1))\geq 1-\delta\).
Conditioned on the event \(E_{\infty}(R-1)\), we can conclude from (3.9) that for every \(\beta\in[1,\rho^{-1})\),
\[\lim_{k\to\infty}\beta^{k}F(\theta_{k})=0 \tag{3.21}\]
almost surely. Moreover, for all \(1\leq j<k\), by (3.10),
\[\begin{split}\mathbb{E}[|\theta_{k}-\theta_{j}|\mathbbm{1}_{E_{ \infty}(R-1)}]&\leq\sum_{i=j}^{k-1}\mathbb{E}[|\theta_{i+1}- \theta_{i}|\mathbbm{1}_{E_{\infty}(R-1)}]\leq\sum_{i=j}^{k-1}\mathbb{E}[| \theta_{i+1}-\theta_{i}|\mathbbm{1}_{E_{i}(R-1)}]\\ &\leq\sum_{i=j}^{k-1}(\sqrt{2C_{L}}+\sqrt{\sigma})\eta^{*}\rho^{ i/2}\sqrt{F(\theta_{0})}<(\sqrt{2C_{L}}+\sqrt{\sigma})\eta^{*}\sqrt{F(\theta_{0})}\frac{\rho^{j/2}}{1- \sqrt{\rho}}.\end{split} \tag{3.22}\]
As \(j\to\infty\), this bound goes to zero. Thus \(\{\theta_{k}\}_{k\geq 0}\) forms a Cauchy sequence conditioned on \(E_{\infty}(R-1)\), and
\[\theta_{k}\to\theta_{*}\in B(\theta_{0},R-1),\quad\text{as }k\to\infty.\]
By (3.9), we see that \(F(\theta_{*})=0\) conditioned on \(E_{\infty}(R-1)\). We also know that with probability at least \(1-\delta\), \(\theta_{k}\in B(\theta_{0},R-1)\) for all \(k\in\mathbb{N}\). Therefore, we can conclude that with probability at least \(1-\delta\), \(\theta_{k}\) converges to a global minimizer \(\theta_{*}\in B(\theta_{0},R-1)\).
## 4 Non-convergence with bounded noises
One may wonder whether, if the machine learning noise assumption is relaxed to mere boundedness (2.9), SGD iterates can still be shown to converge inside \(B(\theta_{0},r)\) for some radius \(r>0\). As a companion to bounded noises, one should consider Robbins-Monro type adaptive step sizes
\[\eta_{k}=\frac{\gamma}{(k+n_{0})^{q}},\ \ \ \ q\in(1/2,1] \tag{4.1}\]
for SGD (1.2), with constants \(\gamma,n_{0}>0\) to be chosen.
If all the stochastic iterates \(\theta_{k}\) stay inside the ball where the local Łojasiewicz condition (2.2) holds, we show in Lemma 4.2 that convergence happens at an algebraic rate. Lemma 4.2 relies on classical results on numerical sequences, which can be traced back to [9].
**Lemma 4.1** ([9], Lemma 1 and Lemma 4).: _Let \(\{b_{k}\}_{k\geq 1}\) be a non-negative sequence such that_
\[b_{k+1}\leq\Big{(}1-\frac{C_{1}}{(k+n_{0})^{q}}\Big{)}b_{k}+\frac{C_{2}}{(k+n_{ 0})^{q+p}}, \tag{4.2}\]
_where \(q\in(0,1]\), \(p>0\) and \(C_{1},C_{2}>0\), then_
1. _if_ \(q=1\) _and_ \(C_{1}>p\)_, we have_ \[b_{k}\leq\frac{C_{2}}{C_{1}-p}\frac{1}{k}+o\Big{(}\frac{1}{k}\Big{)};\] (4.3)
2. _if_ \(q<1\)_, we have_ \[b_{k}\leq\frac{C_{2}}{C_{1}}\frac{1}{k^{p}}+o\Big{(}\frac{1}{k^{p}}\Big{)}.\] (4.4)
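The recursion (4.2) and the predicted rate in case (1) can be checked numerically, as in the following sketch with illustrative constants.

```python
# A numeric sanity check of the recursion (4.2); the constants are
# illustrative choices, not values from the analysis.
import numpy as np

C1, C2, q, p, n0 = 3.0, 1.0, 1.0, 1.0, 10
b = 1.0
ks = np.arange(1, 100_001)
for k in ks:
    b = (1 - C1 / (k + n0) ** q) * b + C2 / (k + n0) ** (q + p)

# Lemma 4.1(1) predicts b_k ~ (C2 / (C1 - p)) / k for q = 1 and C1 > p:
print(b * ks[-1], C2 / (C1 - p))  # the two numbers should be close
```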
In the proof of Lemma 4.2, an iteration inequality of the form (4.2) appears after we apply the local Łojasiewicz condition (2.2), choose a suitable \(n_{0}\), and use the boundedness of the noise (2.9). The power \(q\) comes from (4.1), and the extra power \(p\) comes from \(\eta_{k}^{2}\).
**Lemma 4.2**.: _Given a radius \(r>0\) where (2.2) holds. Assume that the noise term in (2.6) satisfies (2.9), and the SGD iterates are updated with adaptive step sizes \(\eta_{k}=\frac{\gamma}{(k+n_{0})^{q}},1/2<q\leq 1\)._
_Conditioned on the event that \(\theta_{k}\in B(\theta_{0},r)\) for all \(k\in\mathbb{N}\), for \(n_{0}\geq\Big{(}\frac{2C_{L}^{2}\gamma}{\alpha}\Big{)}^{1/q}\) we have the convergence_
\[\mathbb{E}[F(\theta_{k})]\leq\begin{cases}\frac{\gamma^{2}c^{2}C_{L}}{\alpha \gamma-2}\frac{1}{k}+o\Big{(}\frac{1}{k}\Big{)},&\text{ if }q=1\text{ and }\gamma>2/\alpha\\ \frac{\gamma^{2}c^{2}C_{L}}{\alpha\gamma}\frac{1}{k^{q}}+o\Big{(}\frac{1}{k^ {q}}\Big{)},&\text{ if }1/2<q<1.\end{cases} \tag{4.5}\]
Proof.: We consider the same event as before
\[E_{k}(r):=\bigcap_{j=0}^{k}\{|\theta_{j}-\theta_{0}|\leq r\}, \tag{4.6}\]
for some \(0<r\leq R\). Following the same initial steps as in Theorem 3.1, we arrive at the same inequality as (3.7):
\[F(\theta_{k+1})\mathbbm{1}_{E_{k+1}(r)}\leq\Big{(}F(\theta_{k})-\eta_{k} \nabla F(\theta_{k})\cdot\nabla f(\theta_{k},\xi_{k})+\frac{\eta_{k}^{2}}{2} C_{L}|\nabla f(\theta_{k},\xi_{k})|^{2}\Big{)}\mathbbm{1}_{E_{k}(r)}. \tag{4.7}\]
By inserting \(\nabla f(\theta,\xi)=\nabla F(\theta)+Z(\theta,\xi)\), we expand the right side to be
\[F(\theta_{k+1})\mathbbm{1}_{E_{k+1}(r)}\] \[\leq\Big{(}F(\theta_{k})-\eta_{k}|\nabla F(\theta_{k})|^{2}+(\eta_ {k}^{2}C_{L}-\eta_{k})\nabla F(\theta_{k})\cdot Z(\theta_{k},\xi_{k})+\frac{ \eta_{k}^{2}}{2}C_{L}(|\nabla F(\theta_{k})|^{2}+|Z(\theta_{k},\xi_{ k})|^{2})\Big{)}\mathbbm{1}_{E_{k}(r)} \tag{4.8}\]
Because \(Z(\theta_{k},\xi_{k})\) has zero mean and its second moment is bounded by \(c^{2}\) by (2.9), combining this with (2.4) and taking the expectation, we have that
\[\mathbb{E}[F(\theta_{k+1})\mathbbm{1}_{E_{k+1}(r)}|\theta_{0}]\leq\Big{(}1- \alpha\eta_{k}+\eta_{k}^{2}C_{L}^{2}\Big{)}\mathbb{E}[F(\theta_{k})\mathbbm{1 }_{E_{k}(r)}|\theta_{0}]+\frac{\eta_{k}^{2}c^{2}C_{L}}{2}. \tag{4.9}\]
We may view \(\mathbb{E}[F(\theta_{k})\mathbbm{1}_{E_{k}(r)}|\theta_{0}]\) as \(b_{k}\) in (4.2), and set \(n_{0}\) to satisfy
\[n_{0}\geq\Big{(}\frac{2C_{L}^{2}\gamma}{\alpha}\Big{)}^{1/q}, \tag{4.10}\]
so that
\[1-\alpha\eta_{k}+\eta_{k}^{2}C_{L}^{2}\leq 1-\frac{\alpha\eta_{k}}{2}=1-\frac{ \alpha\gamma}{2(k+n_{0})^{q}}. \tag{4.11}\]
Note that the higher-order term in (4.9) has the expression
\[\frac{\eta_{k}^{2}c^{2}C_{L}}{2}=\frac{\gamma^{2}c^{2}C_{L}}{2(k+n_{0})^{2q}}. \tag{4.12}\]
Thus, we apply Lemma 4.1 and obtain the convergence
1. if \(q=1\), we set \(\gamma>\frac{2}{\alpha}\) so that \[\mathbb{E}[F(\theta_{k})\mathbbm{1}_{E_{k}(r)}|\theta_{0}]\leq\frac{\gamma^{2 }c^{2}C_{L}}{\alpha\gamma-2}\frac{1}{k}+o\Big{(}\frac{1}{k}\Big{)};\] (4.13)
2. if \(1/2<q<1\), we have \[\mathbb{E}[F(\theta_{k})\mathbbm{1}_{E_{k}(r)}|\theta_{0}]\leq\frac{\gamma^{2 }c^{2}C_{L}}{\alpha\gamma}\frac{1}{k^{q}}+o\Big{(}\frac{1}{k^{q}}\Big{)}.\] (4.14)
**Remark 4.3**.: _Compared with the proof of Theorem 3.1, the decay rate (4.5) of \(F(\theta_{k})\) is not enough to ensure that \(E_{\infty}(r)\) happens with a positive probability. In fact, we need both probabilities in (3.14) and (3.15) to be summable._
_In [24], the authors assume a local convexity in a convex neighborhood \(\mathcal{K}\) of a local minimizer \(x^{*}\), so that there exists \(0<\alpha<\beta<\infty\) and_
\[\alpha|x-x^{*}|^{2}\leq\nabla F(x)^{\top}(x-x^{*})\leq\beta|x-x^{*}|^{2},\quad \text{for all }x\in\mathcal{K}.\]
_This ensures a strong contraction of \(|x_{k}-x^{*}|^{2}\) through the iterations. With merely bounded noises, they can show that \(\{x_{k}\}\) stays inside the neighborhood \(\mathcal{K}\) with a positive probability under Robbins-Monro step sizes. Unfortunately, the local Łojasiewicz condition (2.2) we assume here is not sufficient to guarantee such a contraction._
In fact, we can show that boundedness of the noise is not enough to guarantee that the iterates stay inside a ball for all time under the local Łojasiewicz condition (2.2). Let us present a negative result in the following. By constructing independent and identically distributed random variables \(Z(\theta_{k},\xi)\equiv Z_{k}\) such that \(\mathbb{E}[Z_{k}]=0\) and \(\mathbb{E}[|Z_{k}|]=\bar{m}>0\) for \(k\in\mathbb{N}\), it turns out that under the adaptive step sizes (4.1), the iterates escape the ball \(B(\theta_{0},r)\) almost surely.
**Lemma 4.4**.: _Consider the following stochastic gradient iteration_
\[\theta_{k+1}=\theta_{k}-\eta_{k}(\nabla F(\theta_{k})+Z_{k}), \tag{4.15}\]
_where \(\eta_{k}=\frac{\gamma}{(k+n_{0})^{q}},1/2<q\leq 1\), \(n_{0}\geq\left(\frac{2C_{L}^{2}\gamma}{\alpha}\right)^{1/q}\) and \(\gamma>2/\alpha\) as in Lemma 4.2. We assume that \(Z_{k}\) are i.i.d. with \(\mathbb{E}[Z_{k}]=0,\mathbb{E}[|Z_{k}|]=\bar{m}\) for some \(\bar{m}>0\), and \(\mathbb{E}[|Z_{k}|^{2}]\leq c^{2}\) for some \(c>0\). Then, given a radius \(r>0\) such that (2.2) holds, \(\{\theta_{k}\}_{k\geq 0}\) will exit the ball \(B(\theta_{0},r)\) almost surely._
Proof.: We argue by contradiction. Suppose all iterates from (4.15) stay inside the ball \(B(\theta_{0},r)\); we aim to prove that \(\lim_{n\to\infty}\sum_{k=0}^{n}|\theta_{k+1}-\theta_{k}|=\infty\) almost surely. For any \(n\geq 1\), we have
\[\begin{split}\sum_{k=0}^{n}|\theta_{k+1}-\theta_{k}|& =\sum_{k=0}^{n}\eta_{k}|\nabla F(\theta_{k})+Z_{k}|\geq\sum_{k=0} ^{n}\frac{\gamma}{k+n_{0}}|\nabla F(\theta_{k})+Z_{k}|\\ &\geq\tilde{c}\sum_{k=1}^{n}\frac{1}{k}|\nabla F(\theta_{k})+Z_{ k}|\geq\tilde{c}\Big{(}\sum_{k=1}^{n}\frac{1}{k}|Z_{k}|-\sum_{k=1}^{n}\frac{1}{k}| \nabla F(\theta_{k})|\Big{)},\end{split} \tag{4.16}\]
where the second inequality can be satisfied if we choose \(\tilde{c}\in(0,\frac{\gamma}{n_{0}+1})\). We claim that
\[\sum_{k=1}^{n}\frac{1}{k}|Z_{k}|\to\infty,\quad\text{as $n\to\infty$ \ almost surely}. \tag{4.17}\]
If not, say there exists a finite number \(\mu>0\) such that \(\lim_{n\to\infty}\sum_{k=1}^{n}\frac{1}{k}|Z_{k}|=\mu\); then the Cesàro mean theorem implies that the Cesàro average also converges to the same limit,
\[\frac{1}{n}\sum_{j=1}^{n}\Big{(}\sum_{k=1}^{j}\frac{1}{k}|Z_{k}|\Big{)}\to\mu,\quad\text{as $n\to\infty$ \ almost surely}. \tag{4.18}\]
On the other hand,
\[\begin{split}\frac{1}{n}\sum_{j=1}^{n}\Big{(}\sum_{k=1}^{j}\frac {1}{k}|Z_{k}|\Big{)}&=\frac{1}{n}\sum_{k=1}^{n}\Big{(}\sum_{j=k} ^{n}\frac{1}{k}|Z_{k}|\Big{)}=\frac{1}{n}\sum_{k=1}^{n}\frac{n+1-k}{k}|Z_{k}| \\ &=\sum_{k=1}^{n}\frac{1}{k}|Z_{k}|+\frac{1}{n}\sum_{k=1}^{n}\frac {1}{k}|Z_{k}|-\frac{1}{n}\sum_{k=1}^{n}|Z_{k}|,\end{split} \tag{4.19}\]
which implies that as \(n\to\infty\), \(\frac{1}{n}\sum_{k=1}^{n}|Z_{k}|\to 0\) almost surely. But by the law of large numbers and the construction of \(Z_{k}\), \(\frac{1}{n}\sum_{k=1}^{n}|Z_{k}|\to\bar{m}\neq 0\), a contradiction. Thus (4.17) is proved.
Due to (4.5) in Lemma 4.2, we have that \(\lim_{n\to\infty}\sum_{k=1}^{n}\frac{1}{k}|\nabla F(\theta_{k})|\leq\lim_{n \to\infty}\sum_{k=1}^{n}\frac{\sqrt{2C_{L}}}{k}\sqrt{F(\theta_{k})}<\infty\) almost surely. With (4.17), we deduce from (4.16) that \(\lim_{n\to\infty}\sum_{k=0}^{n}|\theta_{k+1}-\theta_{k}|=\infty\) almost surely.
However, this divergence cannot happen if all iterates stay inside the ball: by Lemma 4.2, \(F(\theta_{k})\) converges, and a stopping-time argument then implies that \(\sum_{k=0}^{n}|\theta_{k+1}-\theta_{k}|<\infty\). This contradiction shows that the SGD iterates exit the ball almost surely.
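The contrast between the two noise models can be seen in a toy simulation such as the one below (illustrative, not from the paper): with bounded additive noise and Robbins-Monro steps the trajectory length keeps growing like \(\sum 1/k\), while under the machine learning noise (2.10) it stays finite.

```python
# A toy contrast of the two noise models on F(theta) = |theta|^2 / 2
# (constants illustrative): bounded additive noise with Robbins-Monro steps
# yields a divergent path length (Lemma 4.4), multiplicative noise does not.
import numpy as np

rng = np.random.default_rng(0)
grad_F = lambda t: t

def run(multiplicative, n=50_000, gamma=2.0, n0=10):
    theta = np.array([0.5, 0.5])
    path_len = 0.0
    for k in range(n):
        eta = gamma / (k + n0)                    # q = 1 schedule
        Z = rng.normal(size=2) / np.sqrt(2)       # E[|Z|^2] = 1
        scale = np.sqrt(0.5 * theta @ theta) if multiplicative else 1.0
        step = eta * (grad_F(theta) + scale * Z)
        theta -= step
        path_len += np.linalg.norm(step)
    return path_len

print(run(multiplicative=True), run(multiplicative=False))
# the additive-noise path length keeps growing like sum 1/k
```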
### Acknowledgement
The work of JL is supported in part by National Science Foundation via grant DMS-2012286.
## Appendix A Appendix
In [7], Chatterjee provides a feedforward neural network that satisfies the local Łojasiewicz condition (2.2). The purpose of this appendix is to show that this feedforward neural network construction satisfies Assumption 3 (2.5) as well.
The objective function that [7] considers is the squared error loss. For the dataset \(\{(x_{i},y_{i})\}_{i=1}^{n}\), we consider
\[F(\theta):=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\varphi(x_{i},\theta))^{2}.\] (A.1)
Here, \(\varphi(x,\theta)\) is the neural network writes as
\[\varphi(x,\theta):=\tilde{\sigma}_{L}(W_{L}\tilde{\sigma}_{L-1}(\cdots(W_{2} \tilde{\sigma}_{1}(W_{1}x+b_{1})+b_{2})\cdots)+b_{L}),\] (A.2)
with \(\tilde{\sigma}_{1},\cdots,\tilde{\sigma}_{L}\) being activation functions acting componentwise, and \(W_{l}\in\mathbb{R}^{d_{l}\times d_{l-1}},d_{0}=d,d_{L}=1,b_{l}\in\mathbb{R}^{d_{l}}\) for \(1\leq l\leq L\). The parameter to be trained is
\[\theta=(W_{1},b_{1},W_{2},b_{2},\cdots W_{L},b_{L}),\] (A.3)
and it can be viewed as a vector \(\mathbb{R}^{p}\) with \(p=\sum_{l=1}^{L}d_{l}(d_{l-1}+1)\).
Let us follow the setups in Theorem 2.1 in [7] and state our result.
**Theorem A.1**.: _We consider the squared error loss (A.1) with a feedforward neural network (A.2), with depth \(L\geq 2\), \(d_{L}=1\) and \(\tilde{\sigma}_{L}=\) identity. Suppose that for \(1\leq l\leq L-1\), the activation functions \(\tilde{\sigma}_{l}\in C^{2}\), \(\tilde{\sigma}_{l}(0)=0\), and \(c_{l}:=\min_{x}\tilde{\sigma}_{l}^{\prime}(x)>0\). Suppose the input data \(x_{1},\cdots,x_{n}\in\mathbb{R}^{d}\) are linearly independent, and let \(\lambda_{0}\) be the minimum eigenvalue of \(\frac{1}{n}X^{\top}X\), with the matrix \(X=(x_{1},\cdots x_{n})\in\mathbb{R}^{d\times n}\) formed by the column vectors \(x_{i}\). We initialize \(\theta_{0}=(W_{1},b_{1},W_{2},b_{2},\cdots W_{L},b_{L})\) such that \(b_{l}=0\) for all \(1\leq l\leq L\), \(W_{1}=0\), and the entries of \(W_{2},\cdots,W_{L}\) are all strictly positive. In addition, let \(R>0\) be the minimum value of the entries of \(W_{2},\cdots,W_{L-1}\), and let \(A>R>0\) be the minimum value of the entries of \(W_{L}\). Then we have the following lower bound:_
\[F(\theta)\geq\frac{\lambda_{0}}{2n}(A-R)^{2}R^{2L-4}(d_{L-1}d_{L-2}\cdots d_{ 2}c_{L-1}c_{L-2}\cdots c_{1})^{2}d_{1}|\theta-\theta_{0}|^{2}-\frac{1}{n}\sum _{i=1}^{n}y_{i}^{2}.\] (A.4)
Proof.: Let us define
\[\varphi_{1}(x,\theta)=\tilde{\sigma}_{1}(W_{1}x+b_{1})\] (A.5)
and for \(2\leq l\leq L\),
\[\varphi_{l}(x,\theta)=\tilde{\sigma}_{l}(W_{l}\tilde{\sigma}_{L-1}(\cdots(W_{ 2}\tilde{\sigma}_{1}(W_{1}x+b_{1})+b_{2})\cdots)+b_{l}).\] (A.6)
Following the computations in [7], we have that, the partial derivative of \(\varphi_{l}\) with respect to \((i,j)\) entry of \(W_{1}\) is
\[\partial_{ij}\varphi_{l}(x,\theta)=x_{j}q_{i}(x,\theta):=x_{j}W_{L}D_{L-1}(x, \theta)W_{L-1}\cdots W_{2}D_{1}(x,\theta)e_{i},\] (A.7)
where \(D_{l}(x,\theta)\in\mathbb{R}^{d_{l}\times d_{l}}\) is a diagonal matrix with diagonal entries filled by \(\tilde{\sigma}_{l}^{\prime}(W_{l}\varphi_{l-1}(x,\theta)+b_{l})\), and \(e_{i}\in\mathbb{R}^{d_{1}}\) is the \(i\)-th unit vector.
Let \(\theta_{0}\) be the starting vector where the entries of \(W_{2},\cdots,W_{L}\) are all strictly positive, and \(b_{1},\cdots,b_{L}\) and \(W_{1}\) are zero; then \(\varphi(x,\theta_{0})=0\) for each \(x\), and thus \(F(\theta_{0})=\frac{1}{n}\sum_{i=1}^{n}y_{i}^{2}\). Using Young's inequality and the Cauchy-Schwarz inequality, the loss function can be bounded from below as
\[\begin{split} F(\theta)&\geq\frac{1}{n}\sum_{i=1}^ {n}\left(y_{i}^{2}+\varphi(x_{i},\theta)^{2}-2|\varphi(x_{i},\theta)||y_{i}| \right)\\ &\geq\frac{1}{2n}\sum_{i=1}^{n}\varphi(x_{i},\theta)^{2}-\frac{1 }{n}\sum_{i=1}^{n}y_{i}^{2}\geq\frac{1}{2n^{2}}\Big{(}\sum_{i=1}^{n}\varphi(x_ {i},\theta)\Big{)}^{2}-\frac{1}{n}\sum_{i=1}^{n}y_{i}^{2}.\end{split}\] (A.8)
Moreover, by mean value theorem, we can find some \(\tilde{\theta}=(\tilde{W}_{1},\tilde{b}_{1},\tilde{W}_{2},\tilde{b}_{2}, \cdots\tilde{W}_{L},\tilde{b}_{L})\) between \(\theta\) and \(\theta_{0}\) such that
\[\Big{(}\sum_{i=1}^{n}\varphi(x_{i},\theta)\Big{)}^{2}=\Big{(}\sum_{i=1}^{n} \partial_{\theta}\varphi(x_{i},\tilde{\theta})^{\top}(\theta-\theta_{0})\Big{)} ^{2}=\sum_{i,j=1}^{n}(\theta-\theta_{0})^{\top}H_{ij}(\tilde{\theta})(\theta- \theta_{0}),\] (A.9)
where \(H_{ij}(\tilde{\theta})=\partial_{\theta}\varphi(x_{i},\tilde{\theta})\cdot \partial_{\theta}\varphi(x_{j},\tilde{\theta})\). We can bound \(\sum_{i,j=1}^{n}H_{ij}(\tilde{\theta})\) from below by its minimum eigenvalue \(\lambda(\tilde{\theta})\). Recall that \(\lambda_{0}\) is the minimum eigenvalue of \(\frac{1}{n}X^{\top}X\). Since \(A>R\) is a lower bound on the entries of \(W_{L}\), the entries of \(\tilde{W}_{L}\) are bounded below by \(A-R\); with \(c_{l}:=\min_{u}\tilde{\sigma}_{l}^{\prime}(u)>0\), we deduce that
\[\lambda(\tilde{\theta})\geq n\lambda_{0}(A-R)^{2}R^{2L-4}(d_{L-1}d_{L-2}\cdots d _{2}c_{L-1}c_{L-2}\cdots c_{1})^{2}d_{1}.\] (A.10)
Therefore, we bound \(F(\theta)\) by
\[F(\theta)\geq\frac{\lambda_{0}}{2n}(A-R)^{2}R^{2L-4}(d_{L-1}d_{L-2}\cdots d_{2 }c_{L-1}c_{L-2}\cdots c_{1})^{2}d_{1}|\theta-\theta_{0}|^{2}-\frac{1}{n}\sum_{ i=1}^{n}y_{i}^{2}.\] (A.11)
From the lower bound estimate above, if \(R\) is large enough compared with \(\sum_{i=1}^{n}y_{i}^{2}\), we can find \(M_{0}>0\) such that \(F(\theta)\geq M_{0}\) for \(R-1\leq|\theta-\theta_{0}|\leq R\).
|
2305.10459 | AnalogNAS: A Neural Network Design Framework for Accurate Inference with
Analog In-Memory Computing | The advancement of Deep Learning (DL) is driven by efficient Deep Neural
Network (DNN) design and new hardware accelerators. Current DNN design is
primarily tailored for general-purpose use and deployment on commercially
viable platforms. Inference at the edge requires low latency, compact and
power-efficient models, and must be cost-effective. Digital processors based on
typical von Neumann architectures are not conducive to edge AI given the large
amounts of required data movement in and out of memory. Conversely,
analog/mixed signal in-memory computing hardware accelerators can easily
transcend the memory wall of von Neuman architectures when accelerating
inference workloads. They offer increased area and power efficiency, which are
paramount in edge resource-constrained environments. In this paper, we propose
AnalogNAS, a framework for automated DNN design targeting deployment on analog
In-Memory Computing (IMC) inference accelerators. We conduct extensive hardware
simulations to demonstrate the performance of AnalogNAS on State-Of-The-Art
(SOTA) models in terms of accuracy and deployment efficiency on various Tiny
Machine Learning (TinyML) tasks. We also present experimental results that show
AnalogNAS models achieving higher accuracy than SOTA models when implemented on
a 64-core IMC chip based on Phase Change Memory (PCM). The AnalogNAS search
code is released: https://github.com/IBM/analog-nas | Hadjer Benmeziane, Corey Lammie, Irem Boybat, Malte Rasch, Manuel Le Gallo, Hsinyu Tsai, Ramachandran Muralidhar, Smail Niar, Ouarnoughi Hamza, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui | 2023-05-17T07:39:14Z | http://arxiv.org/abs/2305.10459v1 | # AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing
###### Abstract
The advancement of Deep Learning (DL) is driven by efficient Deep Neural Network (DNN) design and new hardware accelerators. Current DNN design is primarily tailored for general-purpose use and deployment on commercially viable platforms. Inference at the edge requires low latency, compact and power-efficient models, and must be cost-effective. Digital processors based on typical von Neumann architectures are not conducive to edge AI given the large amounts of required data movement in and out of memory. Conversely, analog/mixed-signal in-memory computing hardware accelerators can easily transcend the memory wall of von Neumann architectures when accelerating inference workloads. They offer increased area- and power efficiency, which are paramount in edge resource-constrained environments. In this paper, we propose _AnalogNAS_, a framework for automated DNN design targeting deployment on analog In-Memory Computing (IMC) inference accelerators. We conduct extensive hardware simulations to demonstrate the performance of AnalogNAS on State-Of-The-Art (SOTA) models in terms of accuracy and deployment efficiency on various Tiny Machine Learning (TinyML) tasks. We also present experimental results that show AnalogNAS models achieving higher accuracy than SOTA models when implemented on a 64-core IMC chip based on Phase Change Memory (PCM). The AnalogNAS search code is released1.
Analog AI, Neural Architecture Search, Optimization, Edge AI, In-memory Computing
Footnote 1: [https://github.com/IBM/analog-nas](https://github.com/IBM/analog-nas)
## I Introduction
With the growing demands of real-time DL workloads, today's conventional cloud-based AI deployment approaches do not meet the ever-increasing bandwidth, real-time, and low-latency requirements. Edge computing brings storage and local computations closer to the data sources produced by the sheer amount of Internet of Things (IoT) objects, without overloading network and cloud resources. As DNNs are becoming more memory and compute intensive, edge AI deployments on resource-constrained devices pose significant challenges. These challenges have driven the need for specialized hardware accelerators for on-device Machine Learning (ML) and a plethora of tools and solutions targeting the development and deployment of power-efficient edge AI solutions. One such promising technology for edge hardware accelerators is analog-based IMC, which is herein referred to as _analog IMC_.
Analog IMC [1] can provide radical improvements in performance and power efficiency by leveraging the physical properties of memory devices to perform computation and storage at the same physical location. Many types of memory devices, including Flash memory, PCM, and Resistive Random Access Memory (RRAM), can be used for IMC [2]. Most notably, analog IMC can be used to perform Matrix-Vector Multiplication (MVM) operations in \(O(1)\) time complexity [3], which is the most dominant operation used for DNN acceleration. In this novel approach, the weights of linear, convolutional, and recurrent DNN layers are mapped to crossbar arrays (tiles) of Non-Volatile Memory (NVM) elements. By exploiting Kirchhoff's basic circuit laws, MVMs can be performed by encoding inputs as Word-Line (WL) voltages and weights as device conductances. For most computations, this removes the need to pass data back and forth between Central Processing Units (CPUs) and memory. This back-and-forth data movement is inherent in traditional digital computing architectures and is often referred to as the _von Neumann bottleneck_. Because data movement is greatly reduced, tasks can be performed in a fraction of the time and with much less energy.
NVM crossbar arrays and analog circuits, however, have inherent non-idealities, such as noise, temporal conductance drift, and non-linear errors, which can lead to imprecision and noisy computation [4]. These effects need to be properly quantified and mitigated to ensure the high accuracy of DNN models. In addition to the hardware constraints that are prevalent in edge devices, there is the added complexity of designing DNN
architectures that are optimized for the edge across a variety of hardware platforms. Tackling this complexity requires hardware-software co-design approaches, as manually designed architectures are often tailored to specific hardware platforms. For instance, MobileNet [5] uses a depth-wise separable convolution that enhances CPU performance but is inefficient for Graphics Processing Unit (GPU) parallelization [6]. Such bespoke solutions are often hard to implement and generalize poorly to other platforms.
HW-NAS [7] is a promising approach that seeks to automatically identify efficient DNN architectures for a target hardware platform. In contrast to traditional Neural Architecture Search (NAS) approaches that focus on searching for the most accurate architectures, HW-NAS searches for highly accurate models while optimizing hardware-related metrics. Existing HW-NAS strategies cannot be readily used with analog IMC processors without significant modification for three reasons: (i) their search spaces contain operations and blocks that are not suitable for analog IMC; (ii) they lack a benchmark of hardware-aware trained architectures; and (iii) their search strategies do not account for noise injection and temporal drift on weights.
To address these challenges, we propose _AnalogNAS_, a novel HW-NAS strategy to design dedicated DNN architectures for efficient deployment on edge-based analog IMC inference accelerators. This approach considers the inherent characteristics of analog IMC hardware in the search space and search strategy. Fig. 1 illustrates the necessity of our approach. As can be seen, when traditional DNN architectures are deployed on analog IMC hardware, non-idealities, such as conductance drift, drastically reduce network performance. Networks designed by _AnalogNAS_ are extremely robust to these non-idealities and have far fewer parameters than equivalently robust traditional networks. Consequently, they require fewer hardware resources.
Our specific contributions can be summarized as follows:
* We design and construct a search space for analog IMC, which contains ResNet-like architectures, including ResNext [8] and Wide-ResNet [9], with blocks of varying widths and depths;
* We train a collection of networks using Hardware-Aware (HWA) training for image classification, Visual Wake Words (VWW), and Keyword Spotting (KWS) tasks. Using these networks, we build a surrogate model to rank the architectures during the search and predict robustness to conductance drift;
* We propose a global search strategy that uses evolutionary search to explore the search space and efficiently finds the right architecture under different constraints, including the number of network parameters and analog tiles;
* We conduct comprehensive experiments to empirically demonstrate that AnalogNAS can be efficiently utilized to carry out architecture search for various edge tiny applications, and investigate what attributes of networks make them ideal for implementation using analog AI;
* We validate a subset of networks on hardware using a 64-core IMC chip based on PCM.
The rest of the paper is structured as follows. In Section II, we present related work. In Section III, relevant notations and terminology are introduced. In Section IV, the search space and surrogate model are presented. In Section V, the search strategy is presented. In Section VI, the methodology for all experiments is discussed. The simulation results are presented in Section VI-B, along with the hardware validation and performance estimation in Section VII. The results are discussed in Section VIII. Section IX concludes the paper.
## II Related Work
### _NAS for TinyML_
HW-NAS has been successfully applied to a variety of edge hardware platforms [7, 10] used to deploy networks for TinyMLPerf tasks [11] such as image classification, VWW, KWS, and anomaly detection. MicroNets [12] leverages NAS for DL model deployment on micro-controllers and other embedded systems. It utilizes a differentiable search space [13] to find efficient architectures for different TinyMLPerf tasks. For each task, the search space is an extension of current SOTA architectures. \(\mu\)-nas [14] includes peak memory usage and a number of other parameters as constraints. Its search strategy combines aging evolution and Bayesian optimization to estimate the objectives and explore a granular search space efficiently. It constructs its search space from a standard CNN and modifies the operators' hyper-parameters and the number of layers.
### _NAS for Mixed-Signal IMC Accelerators_
Many works [15, 16, 17, 18] target IMC accelerators using HW-NAS. FLASH [15] uses a small search space inspired by DenseNet [19] and searches for the number of skip connections that efficiently satisfy the trade-off between accuracy, latency, energy consumption, and chip area. Its surrogate model uses linear regression and the number of
Fig. 1: The effect of PCM conductance drift after one day on standard CNN architectures and one architecture (AnalogNAS_T500) obtained using HW-NAS, evaluated using CIFAR-10. _FP_ refers to the original network accuracy, and _1-day_ to the simulated analog network accuracy after 1-day device drift.
skip connections to predict model accuracy. NAS4RRAM [17] uses HW-NAS to find an efficient DNN for a specific RRAM-based accelerator. It uses an evolutionary algorithm, trains each sampled architecture without HWA training, and evaluates each network on a specific hardware instance. NACIM [16] uses co-exploration strategies to find the most efficient architecture and the associated hardware platform. For each sampled architecture, networks are trained considering noise variations. This approach is limited to a small search space due to the high time complexity of training. UAE [18] uses a Monte-Carlo simulation-based experimental flow to measure the device uncertainty induced in a handful of DNNs. Similar to NACIM [16], evaluation is performed using HWA training with noise injection. AnalogNet [20] extends the work of MicroNets by converting their final models to analog-friendly models, replacing depthwise convolutions with standard convolutions and tuning hyperparameters.
Compared to the above-mentioned SOTA HW-NAS strategies, our AnalogNAS is better tailored to analog IMC hardware for two reasons: (i) Our search space is much larger and more representative, featuring ResNet-like connections. This enables us to answer the key question of which architectural characteristics are suitable for analog IMC, a question that cannot be addressed with small search spaces. (ii) We consider the inherent characteristics of analog IMC hardware directly in the objectives and constraints of our search strategy, in addition to the noise injection during HWA training used by existing approaches.
## III Preliminaries
### _Analog IMC Accelerator Mechanisms_
Analog IMC accelerators are capable of performing MVM operations \(\mathbf{Y}^{T}=\mathbf{X}^{T}\mathbf{W}\) using the laws of physics, where \(\mathbf{W}\) is an \(M\times N\) matrix, \(\mathbf{X}\) is an \(M\times 1\) vector, and \(\mathbf{Y}\) is an \(N\times 1\) vector. When arranged in an \(M\times N\) crossbar configuration, NVM devices can be used to compute MVM operations. This is done by encoding elements of \(\mathbf{X}\) as WL voltages, denoted using \(\mathbf{V}\), and elements of \(\mathbf{W}\) as conductances of the unit cells, denoted using \(\mathbf{G}\). Negative conductance states cannot be directly encoded/represented using NVM devices. Consequently, differential weight mapping schemes are commonly employed, where positive weights, i.e., \(\mathbf{W}^{+}=\max(\mathbf{W},0)\), and negative weights, i.e., \(\mathbf{W}^{-}=-\min(\mathbf{W},0)\), are encoded within unit cells using alternate columns or different tiles [3]. The analog computation \(\mathbf{I}=\mathbf{V}\mathbf{G}\) is then performed, where the current flowing to the end of the \(N\)-th column is \(I_{N}=\sum_{i=1}^{M}G_{i,N}V_{i}\). Typically, Digital-to-Analog Converters (DACs) are required to encode WL voltages and Analog-to-Digital Converters (ADCs) are required to read the output currents of each column. The employed analog IMC tile, its weight mapping scheme, and computation mechanism are depicted in Fig. 2.
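To make the differential mapping concrete, the following sketch (an illustration added for clarity, not code from an actual chip; the function name and shapes are ours) emulates the tile computation numerically:

```python
import numpy as np

def analog_mvm(x, W):
    """Emulate y^T = x^T W on an analog tile with differential weight mapping."""
    G_pos = np.maximum(W, 0.0)   # W+ = max(W, 0), programmed as conductances
    G_neg = -np.minimum(W, 0.0)  # W- = -min(W, 0), programmed as conductances
    # Inputs act as word-line voltages V; by Ohm's and Kirchhoff's laws the
    # current collected at column N is I_N = sum_i G_{i,N} V_i.
    return x @ G_pos - x @ G_neg  # differential read-out restores the sign

W = np.random.randn(256, 256)    # weights mapped to a 256x256 tile
x = np.random.rand(256)          # non-negative word-line voltages
assert np.allclose(analog_mvm(x, W), x @ W)
```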
### _Temporal Drift of Non-Volatile Memory Devices_
Many types of NVM devices, most prominently PCM, exhibit a temporal evolution of their conductance values, referred to as conductance drift. This poses challenges for maintaining synaptic weights reliably [2]. Conductance drift is most commonly modeled using Eq. (1), as follows:
\[G(t)=G(t_{0})(t/t_{0})^{-\nu}, \tag{1}\]
where \(G(t_{0})\) is the conductance at time \(t_{0}\) and \(\nu\) is the drift exponent. In practice, conductance drift is highly stochastic because \(\nu\) depends on the programmed conductance state and varies across devices. Consequently, when reporting the network accuracy at a given time instance (after device programming), it is computed across multiple experiment instances (trials) to properly capture the accuracy variation.
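A minimal sketch of this stochastic drift model is shown below; the nominal drift exponent, its device-to-device spread, and the reference time \(t_{0}\) are illustrative assumptions rather than hardware-calibrated values:

```python
import numpy as np

def drift(G0, t, t0=25.0, nu_mean=0.06, nu_std=0.02, rng=None):
    """Apply Eq. (1) with a per-device random drift exponent nu."""
    rng = np.random.default_rng() if rng is None else rng
    nu = rng.normal(nu_mean, nu_std, size=G0.shape)  # device-level variability
    return G0 * (t / t0) ** (-nu)

G0 = np.random.rand(256, 256)           # programmed conductances at time t0
G_day = drift(G0, t=86_400.0)           # conductances after one day (seconds)
G_month = drift(G0, t=30 * 86_400.0)    # conductances after one month
```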
### _HWA Training and Analog Hardware Accuracy Evaluation Simulation_
To simulate training and inference on analog IMC accelerators, the IBM Analog Hardware Acceleration Kit (AIHWKit) [21] is used. The AIHWKit is an open-source Python toolkit for exploring and using the capabilities of in-memory computing devices in the context of artificial intelligence and has been used for HWA training of standard DNNs with hardware-calibrated device noise and drift models [22].
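As a rough illustration of this flow, the sketch below follows the AIHWKit usage pattern described above; the exact class and method names (e.g., `convert_to_analog`, `PCMLikeNoiseModel`, `drift_analog_weights`) reflect our reading of the AIHWKit documentation and should be verified against the installed version:

```python
import torch
from torch import nn
from aihwkit.nn.conversion import convert_to_analog
from aihwkit.simulator.configs import InferenceRPUConfig
from aihwkit.inference import PCMLikeNoiseModel

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

rpu_config = InferenceRPUConfig()
rpu_config.noise_model = PCMLikeNoiseModel(g_max=25.0)  # PCM-calibrated noise

analog_model = convert_to_analog(model, rpu_config)  # map layers to analog tiles
# ... HWA (re)training of analog_model would happen here ...

analog_model.eval()
analog_model.drift_analog_weights(86_400.0)  # simulate 1 day of drift
with torch.no_grad():
    logits = analog_model(torch.rand(1, 784))
```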
### _Hardware-aware Neural Architecture Search (HW-NAS)_
HW-NAS refers to the task of automatically finding the most efficient DNN for a specific dataset and target hardware platform. HW-NAS approaches often employ black-box optimization methods such as evolutionary algorithms [23], reinforcement learning [24, 25], and Bayesian optimization [26, 27]. The optimization problem is cast as either a constrained or a multi-objective optimization [7]. In AnalogNAS, we chose constrained optimization over multi-objective optimization for several reasons. First, constrained optimization is more computationally efficient than multi-objective optimization, which is important in the context of HW-NAS to allow searching a large search space in a practical time frame. Multi-objective optimization is computationally expensive and can result in a
Fig. 2: Employed analog IMC tile and weight mapping scheme.
large number of non-dominated solutions that can be difficult to interpret. Secondly, by using constrained optimization, we can explicitly incorporate the specific constraints of the analog hardware in our search strategy. This enables us to find DNN architectures that are optimized for the unique requirements and characteristics of analog IMC hardware, rather than simply optimizing for multiple objectives.
## IV Analog-NAS
The objective of AnalogNAS is to find an efficient network architecture under different analog IMC hardware constraints. AnalogNAS comprises three main components: (i) a resnet-like search space, (ii) an analog-accuracy surrogate model, and (iii) an evolutionary-based search strategy. We detail each component in the following subsections.
### _Resnet-like Search Space_
Resnet-like architectures have inspired many manually designed SOTA DL architectures, including Wide ResNet [9] and EfficientNet [28]. Their block-wise architecture offers a flexible and searchable macro-architecture for NAS [29]. Resnet-like architectures can be implemented efficiently using IMC processors, as they comprise a large number of MVM and element-wise operations. Additionally, due to the highly parallel nature of IMC, Resnet architectures can process additional input/output channels essentially for free. This makes Resnet-like architectures highly amenable to analog implementation.
Fig. 3 depicts the macro-architecture used to construct all architectures in our search space. The architecture consists of a series of \(M\) distinct main blocks. Each main block contains \(R\) residual blocks. The residual blocks use skip connections with or without downsampling. Downsampling is performed using 1x1 convolution layers when required, i.e., when the input size does not match the output size. The residual block can have \(B\) branches. Each branch uses a convolution block. We used different types of convolution blocks to allow the search space to contain all standard architectures such as Resnets [30], ResNext [8], and Wide Resnets [9]. The standard convolution blocks used in Resnets, commonly referred to as _BottleNeckBlock_ and _BasicBlock_, are denoted as A and B respectively. We include variants of A and B in which we invert the order of the ReLU and batch normalization operations. The resulting blocks are denoted as C and D. Table I summarizes the searchable hyper-parameters and their respective ranges. The widening factor scales the width of the residual block. We sample architectures with different depths by changing the number of main and residual blocks. The total size of the search space is approximately 73B architectures. The largest architecture would contain 240 convolutions, starting from an output channel size of 128 that is multiplied by 4 every 16 blocks.
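For illustration, one possible encoding of this search space is sketched below; the field names follow the hyper-parameters described above, while the exact value ranges are placeholders standing in for those of Table I:

```python
from dataclasses import dataclass
import random

@dataclass
class ArchConfig:
    M: int             # number of main blocks
    R: int             # residual blocks per main block
    B: int             # branches per residual block
    conv_block: str    # convolution block type: 'A', 'B', 'C' or 'D'
    widen: int         # widening factor of the residual blocks
    out_channels: int  # output channel size of the first convolution
    kernel_size: int   # kernel size of the first convolution

def sample_architecture(rng=random):
    """Draw one architecture; ranges here are illustrative assumptions."""
    return ArchConfig(
        M=rng.randint(1, 5),
        R=rng.randint(1, 16),
        B=rng.randint(1, 2),
        conv_block=rng.choice(['A', 'B', 'C', 'D']),
        widen=rng.choice([1, 2, 4]),
        out_channels=rng.choice([16, 32, 64, 128]),
        kernel_size=rng.choice([3, 5, 7]),
    )
```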
### _Analog-accuracy Surrogate Model_
#### IV-B1 Evaluation Criteria
To efficiently explore the search space, a search strategy requires evaluating the objectives of each sampled architecture. Training the sampled architectures is very time-consuming, especially when HWA retraining is performed, as noise injection and I/O quantization modeling greatly increase the computational complexity. Consequently, we build a surrogate model capable of estimating the objectives of each sampled architecture on IMC devices. To find architectures that maximize accuracy, stability, and resilience against IMC noise and drift characteristics, we have identified the following three objectives.
**The 1-day accuracy** is the primary objective that most NAS algorithms aim to maximize. It measures the performance of an architecture on a given dataset. When weights are encoded using IMC devices, the accuracy of the architecture can drop over time due to conductance drift. Therefore, we have selected the 1-day accuracy as a metric to measure the architecture's performance.
**The Accuracy Variation over One Month (AVM)** is the difference between the 1-month and 1-sec accuracy. This objective is essential to measure the robustness over a fixed time duration. A 30-day period allows for a reasonable trade-off between capturing meaningful accuracy changes and avoiding short-term noise and fluctuations that may not reflect long-term trends.
**The 1-day accuracy standard deviation** measures the variation of the architecture's performance across experiments, as discussed in Section III-B. A lower standard deviation indicates that the architecture produces consistent results on hardware deployments, which is essential for real-world applications.
Fig. 3: Resnet-like macro architecture.
To build the surrogate model, we follow two steps: Dataset creation and Model training:
#### IV-B2 Dataset Creation
The surrogate model predicts the rank based on the 1-day accuracy and estimates the AVM and the 1-day accuracy standard deviation using the Mean Squared Error (MSE). Since the search space is large, care has to be taken when sampling the dataset of architectures that will be used to train the surrogate model.
The architectures of the search space are sampled using two methods: (i) Latin Hypercube Sampling (LHS) [31] and (ii) NAS with full training. A more detailed description of the AnalogNAS algorithm is presented in Section V. We use LHS to sample architectures distributed evenly over the search space. This ensures good overall coverage of different architectures and their accuracies. NAS with full training is performed using an evolutionary algorithm to collect high-performance architectures. This ensures good exploitation when reaching well-performing regions. In Fig. 4, we present a visualization of the search space coverage, which does not show any clustering of similarly performing architectures at the edge of the main cloud of points. Thus, it is not evident that architectures with similar performance are located close to each other in the search space.
This suggests that traditional search methods that rely on local optimization may not be effective in finding the best-performing architectures. Instead, population-based search strategies, which explore a diverse set of architectures, could be more effective in finding better-performing architectures. Our search strategy extracted 400 test points, and we found that architectures were distributed throughout the main cloud, indicating that our dataset covers a diverse portion of the search space, despite the limited size of only 1,000.
Each sampled architecture is trained using different levels of weight noise and HWA training hyper-parameters using the AIHWKit [21]. Specifically, we vary the standard deviation of the added weight noise within [0.1, 5.0] in increments of 0.1. The tile size was assumed to be symmetric and varied in {256, 512}, representing 256-by-256 and 512-by-512 arrays, respectively. Training with different configurations allowed us to generalize the use of the surrogate model across a range of IMC hardware configurations and to increase the size of the constructed dataset.
#### IV-B3 Model Training
To train the surrogate model, we used a hinge pair-wise ranking loss [32] with margin \(m=0.1\). The hinge loss, defined in Eq. (2), allows the model to learn the relative ranking order of architectures rather than the absolute accuracy values [32, 33].
\[L(\{a_{j},y_{j}\}_{j=1,\ldots,N})=\sum_{j=1}^{N}\sum_{\{i\mid y_{i}>y_{j}\}}\max\left[0,\;m-\left(P(a_{i})-P(a_{j})\right)\right] \tag{2}\]
\(a_{j}\) refers to the architecture indexed \(j\), and \(y_{j}\) to its corresponding 1-day accuracy. \(P(a)\) is the predicted score of architecture \(a\). During training, the output score is trained to be correlated with the actual ranks of the architectures. Several algorithms were tested; after an empirical comparison, we adopted Kendall's Tau ranking correlation [34] as the direct criterion for evaluating the ranking performance of the surrogate model. Fig. 5 shows the comparison of different ML algorithms used to predict the rankings and AVMs. Our dataset is tabular, containing each architecture and its corresponding features. XGBoost outperforms the other surrogate models in predicting the architectures' ranking order, the AVM of each architecture, and the 1-day standard deviation.
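For clarity, a minimal sketch of the pairwise hinge ranking loss of Eq. (2) is shown below (written in PyTorch purely for illustration; the surrogate itself is an XGBoost ranker):

```python
import torch

def pairwise_hinge_loss(scores, accuracies, m=0.1):
    """Eq. (2): for every pair with y_i > y_j, enforce P(a_i) - P(a_j) > m."""
    gap = scores.unsqueeze(1) - scores.unsqueeze(0)             # gap[i, j] = P(a_i) - P(a_j)
    higher = accuracies.unsqueeze(1) > accuracies.unsqueeze(0)  # higher[i, j] = (y_i > y_j)
    return torch.clamp(m - gap, min=0.0)[higher].sum()

scores = torch.randn(8, requires_grad=True)  # predicted scores P(a)
accs = torch.rand(8)                         # measured 1-day accuracies y
pairwise_hinge_loss(scores, accs).backward()
```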
Fig. 4: t-Distributed Stochastic Neighbor Embedding (t-SNE) visualization of the sampled architectures for CIFAR-10.
Fig. 5: Surrogate models comparison.
## V Search Strategy
Fig. 6 depicts the overall search framework. Given a dataset and a hardware configuration readable by AIHWKit, the framework starts by building the surrogate model presented in Section IV-B. Then, we use an optimized evolutionary search to efficiently explore the search space using the surrogate model. Similar to traditional evolutionary algorithms, we use real number encoding. Each architecture is encoded into a vector, and each element of the vector contains the value of the hyper-parameter, as listed in Table I.
### _Problem Formulation_
Given the search space \(S\), our goal is to find an architecture \(\alpha\), that maximizes the 1-day accuracy while minimizing the 1-day standard deviation, subject to constraints on the number of parameters and the AVM. The number of parameters is an important metric in IMC, because it directly impacts the amount of on-chip memory required to store the weights of a DNN. Eq. (3) formally describes the optimization problem as follows:
\[\begin{split}\max_{\alpha\in S}&\quad\frac{\text{ACC}(\alpha)}{\sigma(\alpha)}\\ \text{s.t.}&\quad\psi(\alpha)<T_{p},\qquad\text{AVM}(\alpha)<T_{\text{AVM}}\end{split} \tag{3}\]
ACC refers to the 1-day accuracy objective, \(\sigma\) denotes the 1-day accuracy's standard deviation, and \(\psi\) is the number of parameters. \(T_{p}\) and \(T_{\text{AVM}}\) are user-defined thresholds that correspond to the maximum number of parameters and AVM, respectively.
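In code, the constrained objective of Eq. (3) can be read as the following fitness function (a sketch only; `surrogate` and its accessor names are placeholders for the surrogate model of Section IV-B):

```python
def fitness(arch, surrogate, T_p, T_avm):
    # Eq. (3): maximize ACC / sigma subject to the parameter-count and AVM
    # constraints; infeasible architectures receive the worst possible score.
    if surrogate.num_params(arch) >= T_p or surrogate.avm(arch) >= T_avm:
        return float('-inf')
    return surrogate.acc_1day(arch) / surrogate.std_1day(arch)
```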
### _Search Algorithm_
Our evolutionary search algorithm, i.e., AnalogNAS, is formally defined using Algorithm 1. AnalogNAS is an algorithm to find the most accurate and robust neural network architecture for a given analog IMC configuration and task. The algorithm begins by generating a dataset of neural network architectures, which are trained on the task and evaluated using AIHWKit. A surrogate model is then created to predict the efficiency of new architectures. The algorithm then generates a population of architectures using an LHS technique and selects the top-performing architectures to be mutated to generate a new population. The process is repeated until a stopping criterion is met, such as a maximum number of iterations or a time budget. Finally, the most robust architecture is returned. In the following, we detail how the population initialization, fitness evaluation, and mutations are achieved.
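The loop below sketches this procedure (our paraphrase of Algorithm 1, which is not reproduced here); `surrogate`, `mutate`, and `lhs_sample` stand in for the components detailed in the following subsections:

```python
def analog_nas(surrogate, mutate, lhs_sample, pop_size=200, iters=200, t_avm=0.10):
    population = lhs_sample(pop_size)                  # LHS initialization
    for _ in range(iters):
        # Candidates predicted to violate the AVM constraint are replaced.
        population = [a if surrogate.avm(a) < t_avm else lhs_sample(1)[0]
                      for a in population]
        ranked = sorted(population, key=surrogate.rank_score, reverse=True)
        parents = ranked[: len(ranked) // 2]           # top 50% by predicted rank
        population = parents + [mutate(p) for p in parents]
    return max(population, key=surrogate.rank_score)   # most robust architecture
```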
#### V-B1 Population Initialization
The search starts by generating an initial population. Using the LHS algorithm, we sample the population uniformly from the search space. LHS ensures that the initial population contains architectures with different architectural features. LHS is accelerated by dividing the sampling into multiple independent subsets, which are generated in parallel using multiple threads.
#### V-B2 Fitness Evaluation
We evaluate the population using the aforementioned analog-accuracy surrogate model. In addition to the rankings, the surrogate model predicts the AVM of each architecture. As previously described, the AVM is used to gauge the robustness of a given network. If the AVM exceeds the defined threshold \(T_{\text{AVM}}\), the architecture is replaced by a randomly sampled architecture. The new architecture is constrained to be sampled from the same hypercube dimension as the previous one. This ensures efficient exploration.
#### V-B3 Selection and Mutation
We select the top 50% architectures from the population using the predicted rankings. These architectures are mutated. The mutation functions are classified
Fig. 6: Overview of the AnalogNAS framework.
as follows:
**Depth-related mutations** modify the depth of the architectures. Mutations include adding or removing a main block (by increasing or decreasing \(M\)) or a residual block (by changing \(R\)), or modifying the type of convolution block, i.e., \(\{A,B,C,D\}\), of each main block.
**Width-related mutations** modify the width of the architectures. Mutations include modifying the widening factor \(W\) of a main block, adding or removing a branch \(B\), or modifying the initial output channel size of the first convolution, \(OC\).
**Other mutations** modify the kernel size of the first convolution, \(KS\), and/or add skip connections, denoted using \(ST\).
Depth- and width-related mutations are applied with the same probability of 80%. The other mutations are applied with a 50% probability. In each class, the same probability is given to each mutation. The top 50% architectures in addition to the mutated architectures constitute the new population. For the remaining iterations, we verify the ranking correlation of the surrogate model. If the surrogate model's ranking correlation is degraded, we fine-tune the surrogate model with the population's architectures. The degradation is computed every 100 iterations. The surrogate model is tested on the population architectures after training them. It is fine-tuned if Kendall's tau correlation drops below 0.9.
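A sketch of these mutation classes, operating on an architecture encoding like the `ArchConfig` example of Section IV-A, could look as follows; the probabilities follow the text, while the clamping ranges are illustrative:

```python
import random
from dataclasses import replace

def mutate(arch, rng=random):
    a = arch
    if rng.random() < 0.8:  # depth-related: add/remove a main or residual block
        a = replace(a, M=max(1, a.M + rng.choice([-1, 1])))
    if rng.random() < 0.8:  # width-related: change widening factor or branches
        a = replace(a, widen=rng.choice([1, 2, 4]), B=rng.choice([1, 2]))
    if rng.random() < 0.5:  # other: first convolution's kernel size
        a = replace(a, kernel_size=rng.choice([3, 5, 7]))
    return a
```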
## VI Experiments
This section describes the experiments used to evaluate AnalogNAS on three tasks: CIFAR-10 image classification, VWW, and KWS. The AIHWKit was used to perform hardware simulations.
### _Experimental Setup_
#### Vi-A1 Training Details
We detail the hyper-parameters used to train the surrogate model and different architectures on CIFAR-10, VWW, and KWS tasks.
**Surrogate model training.** We built a dataset of HWA-trained DNN architectures and trained a surrogate model for each task. The sizes of the datasets were 1,200, 600, and 1,500, respectively. An additional 500 architectures were collected during the search trials for validation. All architectures were first trained without noise injection (i.e., using vanilla training routines), and then converted to AIHWKit models for HWA retraining. The surrogate model architecture used was XGBoost. For VWW and KWS, the surrogate model was fine-tuned from the image classification XGBoost model.
**Image classification training.** We first trained the network architectures using the CIFAR-10 dataset [35], which contains 50,000 training and 10,000 test samples, evenly distributed across 10 classes. We augmented the training images with random crops and cutouts only. For training, we used Stochastic Gradient Descent (SGD) with a learning rate of 0.05 and a momentum of 0.9 with a weight decay of 5e-4. The learning rate was adjusted using a cosine annealing learning rate scheduler with a starting value of 0.05 and a maximum number of 400 iterations.
**Visual Wake Words (VWW) training.** We first trained the network architectures using the VWW dataset [36], which contains 82,783 train and 40,504 test images. Images are labeled 1 when a person is detected, and 0 when no person is present. The image pre-processing pipeline included horizontal and vertical flipping, scale augmentation [37], and random Red Green Blue (RGB) color shift. To train the architectures, we used the RMSProp optimizer [38] with a momentum of 0.9, a learning rate of 0.01, a batch normalization momentum of 0.99, and an \(l_{2}\) weight decay of 1e-5.
**Keyword Spotting (KWS) training.** We first trained the network architectures using the KWS dataset [39], which contains 1-second-long incoming audio clips. These are classified into one of twelve keyword classes, including "silence" and "unknown" keywords. The dataset contains 85,511 training, 10,102 validation, and 4,890 test samples. The input was transformed to \(49\times 10\times 1\) features from the Mel-frequency cepstral coefficients [40]. The data pre-processing pipeline included applying background noise and random timing jitter. To train the architectures, we used the Adam optimizer [41] with a decay of 0.9, a learning rate of 3e-05, and a linear learning rate scheduler with a warm-up ratio of 0.1.
#### Vi-A2 Search Algorithm
The search algorithm was run five times to compute the variance. The evolutionary search was executed with a population size of 200. If not explicitly mentioned, the AVM threshold was set to 10%. The width and depth mutation probability was set to 0.8. The other mutations' probability was set to 0.5. The total number of iterations was 200. After the search, the obtained architecture for each task was compared against SOTA baselines.
### _Results_
The final architecture compositions for the three tasks are listed in Table II. In addition, Fig. 10 highlights the architectural differences between AnalogNAS_T500 and Resnet32. We modified \(T_{p}\) to find smaller architectures. To determine the optimal architecture for different parameter thresholds, we use T\(X\), where \(X\) represents the threshold \(T_{p}\) in K units (e.g., T100 refers to the architecture with a threshold of 100K parameters). When searching for T200 and T100, the probability of increasing the widening factor or depth to their highest values was reduced to 0.2.
In Fig. 7, the simulated hardware comparison of the three tasks is depicted. Our models outperform SOTA architectures with respect to both accuracy and resilience to drift. On CIFAR-10, after training the surrogate model, the search took 17 minutes to run. We categorize the results into two distinct groups based on the parameter-count threshold: edge models with a number of parameters between 400K and 1M, and architectures below 400K, which are suitable for TinyML deployment. The final architecture, T500, is smaller than Resnet32 and achieved +1.86% higher accuracy, with a drop of only 1.8% after a month of inference, compared to 5.04% for Resnet32. This model is \(\sim 86\times\) smaller than Wide Resnet [9], which has 36.5M parameters. Our smallest model, T100,
was \(1.23\times\) bigger than Resnet-V1, the SOTA model benchmarked by MLPerf [11]. Despite not containing any depth-wise convolutions, Resnet-V1 is extremely small, with only 70k parameters. Our model offers a +7.98% accuracy increase, with a 5.14% drop after a month of drift compared to a 10.1% drop for Resnet-V1. Moreover, our largest model, _AnalogNAS_1M_, outperforms Wide Resnet with +0.86% in the 1-day accuracy and a drop of only 1.16% compared to 6.33%. In addition, the found models exhibit greater consistency across experiment trials, with an average standard deviation of 0.43 over multiple drift times, as opposed to 0.97 for SOTA models.
Similar conclusions can be made about VWW and KWS. In VWW, current baselines use a depth-wise separable convolution that incurs a high accuracy drop on analog devices. Compared to AnalogNet-VWW and Micronets-VWW, the current SOTA networks for VWW in analog and edge devices, our T200 model has a similar number of parameters (\(1.23\times\) smaller) with a +2.44% and +5.1% 1-day accuracy increase, respectively. AnalogNAS was able to find more robust and consistent networks with an average AVM of 2.63% and a standard deviation of 0.24. MCUNet [42] and MobileNet-V1 present the highest AVM, due to their exclusive use of depth-wise separable convolutions.
On KWS, the baseline architectures, including DSCNN [43], use hybrid networks containing recurrent cells and convolutions. The recurrent part of the model ensures high robustness to noise. While current models are already robust, with an average accuracy drop of 4.72%, our model outperforms tiny SOTA models with 96.8% accuracy and a drop of only 2.3% after a month of drift. Critically, our AnalogNAS models exhibit greater consistency across experiment trials, with an average standard deviation of 0.17 over multiple drift times, as opposed to 0.36 for SOTA models.
### _Comparison with HW-NAS_
In accordance with commonly accepted NAS methodologies, we conducted a comparative analysis of our search approach with Random Search. Results, presented in Fig. 8, were obtained across five experiment instances. Our findings indicate that Random Search was unable to match the 1-day accuracy levels of our final models, even after conducting experiments for a duration of four hours and using the same surrogate model. We further conducted an ablation study to evaluate the effectiveness of our approach by analyzing the impact of the LHS algorithm and surrogate model. The use of a
Fig. 8: Ablation study comparison against HW-NAS. Mean and standard deviation values are reported across five experiment instances (trials).
Fig. 7: Simulated hardware comparison results on three benchmarks: (a,b) CIFAR-10, (c)VWW, and (d) KWS. The size of the marker represents the size (i.e., the number of parameters) of each model. The shaded area corresponds to the standard deviation at that time.
random sampling strategy and exclusion of the surrogate model resulted in a significant increase in search time. The LHS algorithm helped in starting from a diverse initial population and improving exploration efficiency, while the surrogate model played a crucial role in ensuring practical search times.
Moreover, AnalogNAS surpasses both FLASH [15] and \(\mu\)-nas [14] in performance and search time. FLASH's search strategy is not adequate for large search spaces such as ours. As for \(\mu\)-nas, although it manages to achieve acceptable results, its complex optimization algorithm hinders the search process, resulting in decreased efficiency.
### _Search Time and Accuracy Variation over One Month (AVM) Threshold Trade-Off_
During the search, we culled architectures using their predicted AVM, i.e., any architecture with a higher AVM than the AVM threshold was disregarded. As listed in Table III, we varied this threshold to investigate the trade-off between \(\text{T}_{\text{AVM}}\) and the search time. As can be seen, as \(\text{T}_{\text{AVM}}\) is decreased, the delta between AVM and \(\text{T}_{\text{AVM}}\) significantly decreases. The correlation between the search time and \(\text{T}_{\text{AVM}}\) is observed to be non-linear.
## VII Experimental Hardware Validation and Architecture Performance Simulations
### _Experimental Hardware Validation_
An experimental hardware accuracy validation study was performed using a 64-core IMC chip based on PCM [44]. Each core comprises a crossbar array of 256x256 PCM-based unit-cells along with a local digital processing unit [45]. This validation study was performed to verify whether the simulated network accuracy values and rankings are representative of those when the networks are deployed on real physical hardware. We deployed two networks for the CIFAR-10 image classification task on hardware: AnalogNAS_T500 and the baseline ResNet32 [30] networks from Fig. 7(a).
To implement the aforementioned models on hardware, after HWA training was performed, a number of steps were carried out. First, from the AIHWKit, unit weights of linear (dense) and unrolled convolutional layers were exported to a state dictionary file. This was used to map network parameters to corresponding network layers. Additionally, the computational inference graph of each network was exported. These files were used to generate proprietary data-flows to be executed in-memory. As only hardware accuracy validation was being performed, all other operations aside from MVMs were performed on a host machine connected to the chip through a Field-Programmable Gate Array (FPGA). The measured hardware accuracy was 92.05% for T500 and 89.87% for Resnet32, as reported in Table IV. Hence, the T500 network performs significantly better than Resnet32 also when implemented on real hardware. This further validates that our proposed AnalogNAS approach is able to find networks with a similar number of parameters that are more accurate and robust on analog IMC hardware.
### _Simulated Hardware Energy and Latency_
We conducted power performance simulations for AnalogNAS_T500 and ResNet32 models using a 2D-mesh based heterogeneous analog IMC system with the simulation tool presented in [46]. The simulated IMC system consists of one analog fabric with 48 analog tiles of 512x512 size, on-chip digital processing units, and digital memory for activation orchestration between CNN layers. Unlike the accuracy validation experiments on the 64-core IMC chip, the simulated power performance assumes all intermediate operations to be mapped and executed on-chip. Our results, provided in Table IV, show that AnalogNAS_T500 outperformed ResNet32 in terms of both execution time and energy efficiency.
We believe that this power performance benefit is realized because, in analog IMC hardware, wider layers can be computed in parallel, leveraging the \(O(1)\) latency from analog tiles, and are therefore preferred over deeper layers. It is noted that both networks exhibit poor tile utilization and that the tile utilization and efficiency of these networks could be further improved by incorporating these metrics as explicit constraints. This is left to future work and is beyond the scope of AnalogNAS.
## VIII Discussion
During the search, we analyzed the architecture characteristics and studied which types of architectures perform the best on IMC inference processors. The favored architectures combine robustness to noise with high accuracy. Fig. 9 shows the evolution of the average depth, the average widening factor, the average number of branches, and the average first convolution's output channel size of the search population for every 20 iterations. The depth represents the number of convolutions.
A sampled architecture has a widening factor per block. To compute the average widening factor, we first computed the average widening factor per architecture by dividing the sum of the widening factors by the number of blocks contained in the architecture. Then, we calculated the average widening factor across all architectures. Similar computations were performed for the average number of branches.
For each plot, the search was run 5 times and the mean is represented in each point. The plotted error corresponds to one standard deviation from that mean. Starting from a random population obtained using LHS, the population evolves through different width and depth-related mutations. During this analysis, we want to answer the following questions: (i) does the search favor wide or deep networks? And subsequently, are wider architectures more noise resilient? (ii) what architectures are exploited by the search for different tasks when constraining the number of parameters?
### _Are Wider or Deeper Networks More Robust to PCM Device Drift?_
From Fig. 9, it can be observed that the depth of all networks decreases during the search. This trend is especially pronounced when we constrain the model's size to 100K and 500K parameters. During the search, the widening factor also increases, allowing the blocks to have wider convolutions. The number of branches is highly dependent on \(T_{p}\). This number is, on average, between 1 and 2. The branches are the number of parallel convolutions in a block, disregarding the skip connection. In the literature, architectures such as ResNext, which support a higher number of branches, have around 30M parameters. It is still interesting to get blocks with two branches, which also reflects an increase in the width of the network by increasing the number of features extracted within the same block. The average output channel size of the first convolution decreases during the search. Its final value is around the same number of output channels as standard architectures, i.e., between 16 and 32. This follows the general trend of having wider convolutions at deeper network positions.
### _Types Of Architectures_
The architectures and parameter constraints differ for each task, but they all exhibit an increasing expansion ratio in the convolution block. This allows the convolutions to effectively utilize the tile and mitigate noise from unused cells in the crossbar. For CIFAR-10, architectures behave like Wide Resnet [9] while respecting the parameter-count constraint. For the VWW task, the architectures are deeper. The input resolution is \(224\times 224\), which requires more feature extraction blocks. However, they are still smaller than SOTA architectures, with a maximum depth of 22. As depth is essential to obtain high accuracy for the VWW task, no additional branches are added. For the KWS task, the architectures are the widest possible, maximizing the tile utilization for each convolutional layer.
## IX Conclusion
In this paper, we propose an efficient NAS methodology dedicated to analog in-memory computing for TinyML tasks, entitled AnalogNAS. The obtained models are accurate, noise- and drift-resilient, and small enough to run on resource-constrained devices. Experimental results demonstrate that our method outperforms SOTA models on analog hardware for three tasks of the MLPerf benchmark: image classification on CIFAR-10, VWW, and KWS. Our AnalogNAS_T500 model implemented on physical hardware demonstrates \(>2\%\) higher accuracy experimentally on the CIFAR-10 benchmark than ResNet32. Calculated speed and energy efficiency estimates reveal a \(>4\times\) reduction in execution time, in addition to \(>1.2\times\) higher energy efficiency for AnalogNAS_T500 compared with ResNet32 when evaluated using a system-level simulator. While our paper has focused on a ResNet-like search space, it is important to note that our search strategy is adaptable and can be extended in future work to explore a broader range of architectures.
Fig. 10: Architectural differences between AnalogNAS_T500 and Resnet32.
Fig. 9: Evolution of architecture characteristics in the population during the search for CIFAR-10. Random individual networks are shown.
## Acknowledgement
We would like to thank Thanos Vasilopoulos and Julian Buchel from IBM Research, Zurich lab, for their help with the development of the hardware-software infrastructure. We also thank the IBM Research Analog AI team for their feedback. We thank the computational support from AiMOS, an AI supercomputer made available by the IBM Research AI Hardware Center and Rensselaer Polytechnic Institute's Center for Computational Innovations (CCI).
|
2310.01084 | Non-negative isomorphic neural networks for photonic neuromorphic
accelerators | Neuromorphic photonic accelerators are becoming increasingly popular, since
they can significantly improve computation speed and energy efficiency, leading
to femtojoule per MAC efficiency. However, deploying existing DL models on such
platforms is not trivial, since a great range of photonic neural network
architectures relies on incoherent setups and power addition operational
schemes that cannot natively represent negative quantities. This results in
additional hardware complexity that increases cost and reduces energy
efficiency. To overcome this, we can train non-negative neural networks and
potentially exploit the full range of incoherent neuromorphic photonic
capabilities. However, existing approaches cannot achieve the same level of
accuracy as their regular counterparts, due to training difficulties, as also
recent evidence suggests. To this end, we introduce a methodology to obtain the
non-negative isomorphic equivalents of regular neural networks that meet
requirements of neuromorphic hardware, overcoming the aforementioned
limitations. Furthermore, we also introduce a sign-preserving optimization
approach that enables training of such isomorphic networks in a non-negative
manner. | Manos Kirtas, Nikolaos Passalis, Nikolaos Pleros, Anastasios Tefas | 2023-10-02T10:54:46Z | http://arxiv.org/abs/2310.01084v1 | # Non-negative isomorphic neural networks for photonic neuromorphic accelerators
###### Abstract
Neuromorphic photonic accelerators are becoming increasingly popular, since they can significantly improve computation speed and energy efficiency, leading to femtojoule per MAC efficiency. However, deploying existing DL models on such platforms is not trivial, since a great range of photonic neural network architectures relies on incoherent setups and power addition operational schemes that cannot natively represent negative quantities. This results in additional hardware complexity that increases cost and reduces energy efficiency. To overcome this, we can train non-negative neural networks and potentially exploit the full range of incoherent neuromorphic photonic capabilities. However, existing approaches cannot achieve the same level of accuracy as their regular counterparts, due to training difficulties, as also recent evidence suggests. To this end, we introduce a methodology to obtain the non-negative isomorphic equivalents of regular neural networks that meet requirements of neuromorphic hardware, overcoming the aforementioned limitations. Furthermore, we also introduce a sign-preserving optimization approach that enables training of such isomorphic networks in a non-negative manner.
## 1 Introduction
Neuromorphic architectures have gained increasing attention recently, as they provide novel electronic solutions focusing on memory architectures suitable for high-speed, low-energy matrix-based calculations, which cover a significant fraction of the computation involved in the inference of Deep Learning (DL) models [33; 21]. Neuromorphic photonics is among the most promising approaches, with recent layouts already paving a realistic roadmap towards femtojoule-per-MAC efficiencies [39]: leveraging advances in materials and waveguide technologies [7; 11], it enables ultra-fast analog processing and vector-matrix multiplication with almost zero power consumption [37; 15], significantly exceeding its electronic counterparts [29].
However, integrating DL models into physically implemented devices comes with additional cost if their physical properties are not taken into account during the implementation phase. For example, the vast majority of currently available photonic architectures relies on incoherent layouts and faces challenges in supporting negative quantities, since optical signals are naturally converted into power signals during the nonlinear process that has to take place at the activation stage, implying that the sign information of the weighted sum is ignored. This mechanism turns the use of negative number representations within a Photonic Neural Network (PNN) into a challenging process, typically enforcing the adoption of higher-complexity hardware architectures, such as balanced photodetector schemes [36], biasing configurations [26], and signal transformation blocks [24].
Our main contributions are two-fold:
* We propose a method to transform trained Artificial Neural Networks (ANNs) to their fully non-negative equivalent.
* We propose an optimization method which ensures that the model's parameters will remain non-negative during training, enabling one to either train a non-negative model from scratch or continue training in a non-negative manner.
## 2 Related Works
**Neuromorphic Photonics.** PNN deployment requires the employment of optically enabled mechanisms and photonic building blocks to realize the respective signals and parameters of the neural layer. Input signals are typically imprinted in the optical domain using optical modulators, while weighting functionality generally requires the use of variable optical attenuation schemes [38]. Several approaches have been utilized to date in on-chip weight implementations of photonic neural layers, including tunable i) optical filtering mechanisms [38; 24; 28], ii) waveguide absorption techniques [7; 11], and iii) optical gain approaches [9; 23]. Summation is then performed in the optical domain via i) wavelength multiplexers or power combiners in the case of incoherent layouts [11; 13; 36] and ii) interferometric stages and optical couplers in the case of coherent architectures [37; 14; 15; 23]. Finally, the non-linear activation can be offered by either i) optoelectronic schemes [11; 14; 28] or ii) all-optical non-linear modules [11; 14; 28].
Furthermore, there are several works that take into account the unique nature of neuromorphic photonics and design the training and deployment of models accordingly [25; 27; 24]. Although such approaches integrate transfer function- and noise-related limitations of photonic hardware [18; 32; 28], leading to significant performance improvements during deployment, they typically ignore the sign limitation of photonic architectures.
**Cost Reduction.** Decreasing the hardware complexity of PNNs has been extensively studied in the literature, and several approaches have been proposed, ranging from pruning [13] to quantization [17] and mixed-precision representations [14]. However, even the simplest PNN implementation would require amplitude modulators for its inputs, meaning that in this case both the input signal modulators and the photonic weighting stage control just the amplitude of the optical field, resulting in non-negative networks [39]. Introducing sign information in a PNN requires incorporating a new physical dimension whose state can be correlated with the sign. Coherent photonic architectures offer a rather simple way of representing signed optical signals by correlating the sign with the phase of the propagating optical fields [15; 24; 28]. However, this requires additional phase modulation circuitry both at the input signal generation and weighting stages, as well as more complex circuitry at the non-linear activation stage in order to account for the sign information of the weighted optical sum at the activation unit, such as an optical biasing scheme or a coherent receiver [34; 23; 15; 13].
**Non-negative ANNs.** Although there are some works studying non-negativity in ANNs, they mostly target partially non-negative architectures focused on reconstruction (e.g., autoencoders) [3; 8] applied on small datasets, or non-traditionally used ANNs (such as Pyramid Neural Networks) [12]. These approaches face difficulties in scaling, generalize poorly to common DL architectures (such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs)), and result in significant performance degradation that hinders their application to both DL and neuromorphic photonics. Our proposed framework can be applied to traditionally used architectures without any major change or performance degradation, since it is based on the non-negative isomorphic representation of traditional models.
**Non-negative Training.** Existing training methods oriented to non-negative architectures are based either on limited-memory optimization (such as Quasi-Newton) [3; 8] or on non-gradient-based optimization methods [12], clipping parameters during backpropagation; this constrains the variance of the parameters during the first epochs of training, which leads to convergence difficulties (typical examples are demonstrated in the Appendix). Our proposed non-negative optimization method is based on a multiplicative alternative of the Stochastic Gradient Descent (SGD) optimizer that, combined with the proposed non-negative transformation, ensures that the variance of the parameters does not diminish, allowing the training process to proceed smoothly.
Although multiplicative updates have been extensively studied during the early years of machine learning research [2; 19], to the best of our knowledge, this is the first work that investigates them in the context of non-negative training. Even works that target sign-preserving optimization are limited to studying the excitatory and inhibitory functions of neurons, obeying in general Dale's rule [10], and point out mostly the anatomical correlation with biological synapses [1; 4]. In a similar direction, the recent work [5] leverages multiplicative updates on Adam to train lower bit-width synapses stored as logarithmic numbers, oriented to software-hardware co-design. The authors note that the sign pattern of the initialized weights can possibly restrict the expressive ability of networks. Our work goes beyond such approaches, since we claim that even with positive-sign-only parameters, we can acquire an expressive isomorphic representation of a network by applying the appropriate transformation and training it in a non-negative manner.
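As a concrete illustration of the idea, the following is a minimal sketch of an exponentiated-gradient-style multiplicative update that preserves parameter signs; it is meant to convey the mechanism only, and is not necessarily the exact update rule used in this work:

```python
import torch

@torch.no_grad()
def multiplicative_sgd_step(params, lr=0.01):
    for p in params:
        if p.grad is None:
            continue
        # w <- w * exp(-lr * g * sign(w)): the multiplicative factor is strictly
        # positive, so each parameter keeps its sign while its magnitude moves
        # in the descent direction.
        p.mul_(torch.exp(-lr * p.grad * torch.sign(p)))
```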
**Isomorphism in ANNs.** Isomorphism is a general mathematical property that is especially useful in graph theory. As a result, there are several works that consider isomorphism to extract more expressive representations of the input graphs in graph classification problems [6; 22]. As far as we know, this is the first work studying the isomorphism of neural networks that proposes a structured methodology to acquire their non-negative equivalents, opening a new research direction with possibly wider implications, e.g., the design of isomorphic networks that are adjusted for conventional accelerators as well, potentially providing more explainable DL architectures [3; 8; 12].
## 3 On the Difficulty of Training Non-negative Neural Networks
Conceptually, ANNs project the input features to a latent space, aiming to represent them in a more separable way, seeking a hyperplane decision boundary that classifies them optimally. However, classifying non-negative features, even when they are linearly separable, is often impossible when a linear classifier is restricted to non-negative parameters. For example, in a two-dimensional binary classification task, as depicted in Figure 1.a, the decision boundary of a traditional logistic regression classifier is given by:
\[x_{1}=-\frac{w_{0}}{w_{1}}x_{0}-\frac{b}{w_{1}}\in\mathbb{R}, \tag{1}\]
where \(x_{i}\in[0,1]\) are the inputs, \(w_{i}\in\mathbb{R}\) and \(b\in\mathbb{R}\) are the weights and bias of the classifier, respectively, and \(i\in\{0,1\}\). We can easily conclude that a positive-slope decision boundary, with slope \(m=-w_{0}/w_{1}\), requires a negative weight. In fact, training the linear classifier using SGD, we obtain a positive-slope decision boundary, as shown in Figure 1.a, achieving optimal performance with \(w_{0}<0\). On the other hand, constraining the classifier to non-negative parameters results in a negative-slope decision boundary that is unable to discriminate the two classes, as presented in Figure 1.b.
Inspired by the fact that challenging computational tasks, such as calculating the motion of our solar system's planets in a geocentric manner, can be easily solved by changing the coordinate system, e.g., calculating the motion of the planets in a heliocentric system, we consider isomorphism, i.e., the same behavior but with a different implementation and/or parameters, to claim that an equivalent classifier with non-negative parameters exists. Such a non-negative isomorphic classifier produces the same classification outcome as the original one by applying a coordinate change to the original
Figure 1: (a) The resulting positive-slope decision boundary as acquired by the traditional optimization process. (b) Regular training of the non-negative classifier results in a non-positive slope decision boundary. (c) The proposed transformation on the trained parameters of (a).
inputs and parameters. Indeed, in the example of Figure 1.a, we can easily transform the original problem and classifier, by shifting and rotating the points and the decision boundary, obtaining an equivalent non-negative classifier, as depicted in Figure 1.c, using only positive parameters. Such a non-negative isomorphic classifier yields the decision boundary given by:
\[x_{1}=-\frac{|w_{0}|}{w_{1}}x_{0}^{\prime}-\frac{b-a\left|w_{0}\right|}{w_{1}}\in \mathbb{R} \tag{2}\]
where \(w_{0}<0\), \(w_{1}>0\), \(a=1\) and \(x_{0}^{\prime}=(a-x_{0})\). The non-negative classifier leads to a rotated decision boundary and matches the performance of the original classifier.
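To make the coordinate-change argument concrete, the following minimal sketch (our own illustration with arbitrary parameter values, not code from this work) verifies numerically that the transformed decision function of Equation 2 is identical to the original one; the residual negative bias term visible here is handled by the activation shifting point introduced in Section 4.

```python
import numpy as np

# Illustrative parameters of a trained linear classifier with w0 < 0 (Fig. 1.a).
w0, w1, b = -2.0, 3.0, 0.5
a = 1.0                                  # inputs are assumed to lie in [0, 1]

def original_logit(x0, x1):
    return w0 * x0 + w1 * x1 + b

def nonneg_logit(x0, x1):
    # Eq. (2): rotate the input, x0' = a - x0, so only |w0| >= 0 is needed.
    return abs(w0) * (a - x0) + w1 * x1 + (b - a * abs(w0))

x = np.random.rand(100, 2)               # random points in the unit square
assert np.allclose(original_logit(x[:, 0], x[:, 1]),
                   nonneg_logit(x[:, 0], x[:, 1]))
```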
## 4 Non-negative Isomorphic Neural Networks
**Definition 4.1** (Linear Neuron): Let \(z_{i}\in\mathbb{R}\) denote the response of the linear part of the \(i\)-th neuron of a fully connected layer, where \(i=1\ldots N\), given by:
\[z_{i}=u_{i}(\mathbf{x})=\sum_{j=1}^{M}w_{ij}x_{j}+b_{i}\in\mathbb{R}, \tag{3}\]
where \(\mathbf{w}_{i}\in\mathbb{R}^{M}\), \(b_{i}\in\mathbb{R}\) and \(\mathbf{x}\in\mathbb{R}^{M}\) are the weight vector, bias, and input vector of the \(i\)-th neuron, respectively. Assuming an activation function \(g(\cdot):\mathbb{R}\rightarrow\mathbb{R}_{+}\), where \(\mathbb{R}_{+}\) denotes the set of non-negative real values, the output of the \(i\)-th neuron is given by:
\[y_{i}=g(z_{i})\in\mathbb{R}_{+}. \tag{4}\]
**Theorem 4.1**: _For every linear neuron given by Definition 4.1 there is a non-negative isomorphic neuron with the linear response provided as:_
\[u_{i}^{\prime}(\mathbf{x}^{\prime})=z_{i}^{\prime}=\sum_{j=1}^{M}|w_{ij}|x_{j}^{ \prime}+b_{i}^{\prime}\in\mathbb{R}_{+}, \tag{5}\]
and the output as:
\[y_{i}^{\prime}=g_{c}(u_{i}^{\prime}(\mathbf{x}^{\prime}))\in\mathbb{R}_{+}, \tag{6}\]
where:
\[x_{j}^{\prime}=\begin{cases}a-x_{j}&\text{if }w_{ij}<0\\ x_{j}&\text{otherwise}\end{cases}. \tag{7}\]
Then, appropriate parameters \(b_{i}^{\prime}\in\mathbb{R}_{+}\) and \(a\in\mathbb{R}_{+}\), as well as an activation function \(g_{c}(\cdot):\mathbb{R}\rightarrow\mathbb{R}_{+}\), exist such that the neuron leads to the same response, i.e., \(y_{i}=y_{i}^{\prime}\). \({}_{\blacksquare}\)
Proof 4.1: Equation 3 can be written as:
\[z_{i}=-\sum_{\{j|w_{ij}<0\}}|w_{ij}|x_{j}+\sum_{\{j|w_{ij}>0\}}w_{ij}x_{j}+b_{ i}\in\mathbb{R}. \tag{8}\]
Assuming that every input of a linear layer is non-negative, which can be enforced at the input of the network by trivially normalizing the features, then, by adding and subtracting the quantity \(a\sum_{\{j|w_{ij}<0\}}|w_{ij}|\), where \(a\geq\max(\mathcal{X})\) and \(\max\) denotes the maximum element of the feasible set \(\mathcal{X}=\{\mathbf{x}:\mathbf{x}\in\mathbb{R}_{+}^{M}\}\), Equation 8 can be written as:

\[z_{i}^{\prime}=a\sum_{\{j|w_{ij}<0\}}|w_{ij}|-\sum_{\{j|w_{ij}<0\}}|w_{ij}|x_{j}+\sum_{\{j|w_{ij}>0\}}w_{ij}x_{j}+\left(b_{i}-a\sum_{\{j|w_{ij}<0\}}|w_{ij}|\right)\in\mathbb{R}. \tag{9}\]
This allows us to rotate the input feature space, similarly to Section 3. To this end, the first two terms can be merged, while the last term can be absorbed into an updated bias term:

\[\tilde{b}_{i}=b_{i}-a\sum_{\{j|w_{ij}<0\}}|w_{ij}|\in\mathbb{R}. \tag{10}\]
Therefore, Equation 9 can be written as:
\[z_{i}^{\prime}=\sum_{\{j|w_{ij}<0\}}|w_{ij}|(a-x_{j})+\sum_{\{j|w_{ij}>0\}}w_{ij} x_{j}+\tilde{b}_{i}\in\mathbb{R}. \tag{11}\]
To simplify Equation 11, we can define the rotated input as:
\[x_{j}^{\prime}=f(x_{j})=\begin{cases}a-x_{j}&\text{if $w_{ij}<0$}\\ x_{j}&\text{otherwise},\end{cases} \tag{12}\]
where \(x_{j}^{\prime}\in\mathbb{R}_{+}\). Similarly, the updated non-negative weights of the \(i\)-th neuron can be directly calculated as:
\[\mathbf{w}_{i}^{\prime}=|\mathbf{w}_{i}|\in\mathbb{R}_{+}^{M}, \tag{13}\]
where \(|\cdot|\) denotes the element-wise absolute value, i.e., \(w_{ij}^{\prime}=|w_{ij}|,\ j=1\dots M\), since all weights involved in (11) are non-negative. Furthermore, the new non-negative biases are obtained according to the following formula:
\[b_{i}^{{}^{\prime}}=\tilde{b}_{i}+c\in\mathbb{R}_{+}, \tag{14}\]
where \(c\) is computed as:
\[c=\max\{|\tilde{b}_{1}|,\dots,|\tilde{b}_{N}|\}\in\mathbb{R}_{+}, \tag{15}\]
denoted the _activation shifting point_. The _activation shifting point_ is applied to the original activation to slide it into the input domain, leading to the same output as the original network. To ensure that the network operates in a non-negative manner, the original activation function has to map to a non-negative output space, i.e., \(g(z):\mathbb{R}\rightarrow[g_{min},g_{max}]\), where \(0\leq g_{min}<g_{max}<\infty\), with the shifted activation calculated as:
\[g_{c}(x)=g(x-c)\in\mathbb{R}_{+}. \tag{16}\]
Typically, the \(a^{(k+1)}\) of the next layer, where \(k\) is the index of the current layer, can be set to \(g_{max}\), validating the aforementioned assumption, i.e., \(a^{(k+1)}=g_{max}=\max(\mathcal{X})\).
Since the Non-negative Transformation (NNT) targets neuromorphic architectures, the _activation shifting_ can be integrated during the design phase of the activation function, allowing one to embed it into the layer. This validates Theorem 4.1, since the original parameters are transformed to their non-negative equivalents, with the isomorphic neuron having the same response as the original one.
The computational complexity of the proposed transformation is linear in the number of trainable parameters, as can be trivially seen from its algorithmic representation presented in the Appendix. More precisely, if \(\mathbf{\theta}^{(k)}\) denotes the trainable parameters of the \(k\)-th layer, consisting of weights \(\mathbf{w}^{(k)}\) and biases \(\mathbf{b}^{(k)}\), with \(k=1\dots n\), then the computational complexity of the transformation is \(O(|\bigcup\{\mathbf{\theta}^{(k)}\}_{1}^{n}|)\), where \(|\cdot|\) denotes the number of elements of the corresponding set.
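As an illustration of the construction, the following PyTorch sketch (our own code under the stated assumptions, not the authors' implementation; the helper name `nnt_linear` is ours) applies Equations 10-16 to a single fully connected layer and checks numerical equivalence with the original neuron:

```python
import torch

def nnt_linear(W, b, a, g=torch.sigmoid):
    """Non-negative transformation of one fully connected layer,
    assuming all inputs lie in [0, a]."""
    neg = W < 0                                   # per-connection sign mask
    W_pos = W.abs()                               # Eq. (13): w' = |w|
    b_tilde = b - a * (W_pos * neg).sum(dim=1)    # Eq. (10)
    c = b_tilde.abs().max()                       # Eq. (15): activation shifting point
    b_prime = b_tilde + c                         # Eq. (14): non-negative biases

    def forward(x):
        # Eq. (12): the rotated input a - x is used wherever the weight was negative.
        z = (W_pos * ~neg) @ x + (W_pos * neg) @ (a - x) + b_prime
        return g(z - c)                           # Eq. (16): shifted activation g_c
    return forward

W, b, x = torch.randn(4, 3), torch.randn(4), torch.rand(3)
assert torch.allclose(torch.sigmoid(W @ x + b), nnt_linear(W, b, a=1.0)(x), atol=1e-6)
```

The per-connection input rotation is folded into the forward pass here for verification purposes only.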
**Definition 4.2** (Recurrent Neuron): Let \(\mathbf{x}_{t}\in\mathbb{R}^{M}\) denote the \(M\) features fed to the \(i\)-th neuron at the \(t\)-th time step. Each recurrent neuron processes two signals: a) the current input signal, which is weighted by \(\mathbf{w}_{i}^{(in)}\in\mathbb{R}^{M}\), and b) a recurrent feedback signal, denoted by \(\mathbf{y}_{t-1}^{(r)}\in\mathbb{R}^{N}\) and weighted by a set of recurrent weights \(\mathbf{w}_{i}^{(r)}\in\mathbb{R}^{N}\), which corresponds to the output of the \(N\) recurrent neurons of the same layer at the previous time step. The linear response of the \(i\)-th neuron is given by:
\[u_{ti}^{(r)}(\mathbf{x}_{t},\mathbf{y}_{t-1}^{(r)})=\sum_{j=1}^{M}w_{ij}^{(in)}x_{tj}+\sum_{j=1}^{N}w_{ij}^{(r)}y_{t-1,j}^{(r)}+b_{i}\in\mathbb{R}, \tag{17}\]
and the output as:

\[y_{ti}^{(r)}=g(u_{ti}^{(r)}(\mathbf{x}_{t},\mathbf{y}_{t-1}^{(r)}))\in\mathbb{R}_{+}. \tag{18}\]
**Theorem 4.2**: _For every recurrent neuron given by Definition 4.2 there is a non-negative isomorphic with the linear response provided as:_
\[z_{ti}^{(r)^{\prime}}=u_{ti}^{(r)^{\prime}}(\mathbf{x}_{t}^{\prime},\mathbf{y}_{t-1}^{ \prime})=\sum_{j=1}^{M}|w_{ij}^{(in)}|x_{tj}^{\prime}+\sum_{j=1}^{N}|w_{ij}^{ (r)}|y_{t-1,j}^{(r)^{\prime}}+b_{i}^{\prime}\in\mathbb{R}_{+}, \tag{19}\]
_and the output as:_
\[y_{ti}^{(r)^{\prime}}=g_{c}(u_{ti}^{(r)^{\prime}}(\mathbf{x}_{t}^{\prime},\mathbf{y}_{t-1 }^{(r)^{\prime}}))\in\mathbb{R}_{+}, \tag{20}\]
_where:_
\[x_{tj}^{\prime}=\begin{cases}a^{(in)}-x_{tj}&\text{if }w_{ij}^{(in)}<0\\ x_{tj}&\text{otherwise}\end{cases}, \tag{21}\]
_and:_
\[y_{t-1,j}^{(r)^{\prime}}=\begin{cases}a^{(r)}-y_{t-1,j}^{(r)}&\text{if }w_{ij}^{(r)}<0\\ y_{t-1,j}^{(r)}&\text{otherwise}\end{cases}. \tag{22}\]
_Then, appropriate parameters \(b_{i}^{\prime}\in\mathbb{R}_{+}\), \(a^{(in)}\in\mathbb{R}_{+}\) and \(a^{(r)}\in\mathbb{R}_{+}\), as well as an activation function \(g_{c}(\cdot):\mathbb{R}\rightarrow\mathbb{R}_{+}\), exist such that the neuron leads to the same response, i.e., \(y_{ti}^{(r)}=y_{ti}^{(r)^{\prime}}\)._
Proof 4.2: Recurrent neurons can conceptually be seen as two linear neurons, one fed with the current input signal, \(x_{tj}\), and the other with the recurrent feedback signal, \(y_{t-1,j}^{(r)}\). To this end, we can easily conclude that there is a non-negative isomorphic for recurrent neurons by applying Proof 4.1 to the first two terms of Equation 17.
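A compact sketch of the recurrent case (again our own illustrative code), reusing the same construction on both weight sets, with inputs assumed in \([0,a^{(in)}]\) and previous outputs in \([0,a^{(r)}]\):

```python
import torch

def nnt_recurrent(W_in, W_r, b, a_in, a_r, g=torch.sigmoid):
    # The shifts contributed by both weight sets are folded into a single
    # updated bias and one activation shifting point (Theorem 4.2).
    neg_in, neg_r = W_in < 0, W_r < 0
    b_tilde = (b - a_in * (W_in.abs() * neg_in).sum(dim=1)
                 - a_r * (W_r.abs() * neg_r).sum(dim=1))
    c = b_tilde.abs().max()

    def step(x_t, y_prev):
        z = ((W_in.abs() * ~neg_in) @ x_t + (W_in.abs() * neg_in) @ (a_in - x_t)
             + (W_r.abs() * ~neg_r) @ y_prev + (W_r.abs() * neg_r) @ (a_r - y_prev)
             + b_tilde + c)
        return g(z - c)   # equals g(W_in @ x_t + W_r @ y_prev + b)
    return step
```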
**Definition 4.3** (2D Convolutional Neuron): Let \(\mathbf{X}\in\mathbb{R}^{C\times N\times M}\) denote the input of a 2D convolutional layer, where \(C\), \(N\), and \(M\) represent the number of channels, the height, and the width of the input feature map, and let the layer consist of the kernel weights \(\mathbf{W}\in\mathbb{R}^{D\times C\times N^{k}\times M^{k}}\), where \(D\) denotes the number of output channels, while \(N^{k}\) and \(M^{k}\) represent the height and width of the kernel. The bias is denoted as \(\mathbf{b}\in\mathbb{R}^{D}\). Convolutional neurons can be constructed by trivially extending Definition 4.1, sliding the linear neuron over the input after flattening each input patch to a vector. Thus, starting from the \(c\)-th channel of the input and flattening the \(i\)-th \(N^{k}\times M^{k}\) sub-matrix of \(\mathbf{X}\in\mathbb{R}^{C\times N\times M}\), denoted as \(\mathbf{x}_{i}\in\mathbb{R}^{N^{k}M^{k}}\), we can define \(z_{i}\), the linear output of the \(i\)-th element of the \(d\)-th output channel, given by:
\[z_{i}=u_{i}(\mathbf{x})=\sum_{j=1}^{N^{k}M^{k}}w_{j}x_{ij}+b_{d}\in\mathbb{R}, \tag{23}\]
where \(\mathbf{w}\in\mathbb{R}^{N^{k}M^{k}}\) is the flattened version of the kernel weight \(\mathbf{W}_{cd}\in\mathbb{R}^{N^{k}\times M^{k}}\), where \(c\) and \(d\) are the current input and output channels, respectively. To this extent, Theorem 4.1 can also be applied to convolutional neurons, since they can be defined from linear neuron building blocks, allowing us to apply Proof 4.1. Without loss of generality, the theorem can be applied to 1D, 3D, and higher-dimensional convolutions as well.
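For convolutions, the per-connection rotation can be expressed with two convolutions, one over \(x\) and one over \(a-x\); the sketch below (our own illustration, assuming no zero padding and inputs in \([0,a]\)) mirrors the linear construction for a 2D convolutional layer:

```python
import torch
import torch.nn.functional as F

def nnt_conv2d(W, b, a, g=torch.sigmoid):
    # W = W_pos - W_neg, so the negative kernel part can instead be applied
    # to the rotated input a - x, keeping all stored weights non-negative.
    W_pos, W_neg = W.clamp(min=0), (-W).clamp(min=0)
    b_tilde = b - a * W_neg.sum(dim=(1, 2, 3))   # one shift per output channel
    c = b_tilde.abs().max()

    def forward(x):                              # x: (batch, C, H, W)
        z = (F.conv2d(x, W_pos) + F.conv2d(a - x, W_neg)
             + (b_tilde + c).view(1, -1, 1, 1))
        return g(z - c)
    return forward

x, W, b = torch.rand(1, 3, 8, 8), torch.randn(5, 3, 3, 3), torch.randn(5)
assert torch.allclose(torch.sigmoid(F.conv2d(x, W, b)),
                      nnt_conv2d(W, b, a=1.0)(x), atol=1e-5)
```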
## 5 Non-negative Optimization
Transforming an already trained network to its non-negative isomorphic can be limiting in cases where continued training is required. To this end, we propose an adjustment to SGD that constrains the trainable parameters \(\mathbf{\theta}^{(k)}\) of the \(k\)-th layer, consisting of weights \(\mathbf{w}^{(k)}\) and biases \(\mathbf{b}^{(k)}\), to non-negative quantities. More precisely, we modify the additive update rule, in which sign shifting occurs during SGD optimization, by normalizing the gradients with the non-linear function \(\tanh:\mathbb{R}\rightarrow(-1,1)\) and multiplying them by the absolute value of the parameter. The optimization algorithm picks a point \(\mathbf{\theta}_{\tau}^{(k)}\) at each time step \(\tau\) and updates it according to:
\[\mathbf{\theta}_{\tau}^{(k)}=\mathbf{\theta}_{\tau-1}^{(k)}+|\mathbf{\theta}_{\tau-1}^{(k )}|\tanh\left(-\eta_{in}\frac{\partial J}{\partial\mathbf{\theta}_{\tau-1}^{(k)}} \right)\eta_{out}\in\mathbb{R}^{+}, \tag{24}\]
where \(\eta_{in}\in\mathbb{R}^{+}\) is the inner and \(0<\eta_{out}\leq 1\) the outer learning rate. Essentially, the inner learning rate allows one to adjust the gradients with respect to the working range of the used non-linearity, while the outer learning rate affects the step size, similarly to the learning rate used in traditional optimization methods. The proposed optimization method is a sign-preserving alternative to SGD, named _non-negative stochastic gradient descent_ (NNSGD), and when combined with the NNT it can be used to post-train or train from scratch DL models in a non-negative manner. The proposed non-negative optimization is also presented algorithmically in the Appendix.
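A minimal sketch of the NNSGD update (our own code, not the authors' implementation), which would be called in place of `optimizer.step()` after `loss.backward()`:

```python
import torch

@torch.no_grad()
def nnsgd_step(params, eta_in=1.0, eta_out=0.1):
    # Eq. (24): since |tanh| < 1 and eta_out <= 1, the multiplicative step can
    # never flip a parameter's sign, so non-negative parameters stay non-negative.
    for p in params:
        if p.grad is not None:
            p.add_(p.abs() * torch.tanh(-eta_in * p.grad) * eta_out)
```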
## 6 Experimental Evaluation
We experimentally evaluate the proposed framework using a wide range of architectures and photonic configurations, demonstrating its capabilities in three scenarios: a) transforming a pretrained ANN to its non-negative isomorphic without any performance degradation, b) continuing the training of a non-negative isomorphic network that was regularly pretrained, and c) non-negative training from scratch. We evaluate it on image classification (MNIST, Fashion MNIST, CIFAR10), malware classification (Malimg [30]), a large-scale financial time-series forecasting task (FI2010 [31]), and a simple natural language processing task (Names [35]). Details about the experimental setup, the applied models, the datasets, the photonic configurations, and the hyperparameter tuning process are provided in the Appendix. Note that we applied the proposed framework to small models, in line with current neuromorphic photonics capabilities and limitations. Additionally, the same hyperparameter tuning is applied to all scenarios for fairness, since both the proposed methods and the evaluated baselines target non-negative quantities with significantly different parameter distributions and magnitudes than those SGD traditionally targets.
### Trained model transformation
In Table 1, we report the evaluation results of the proposed NNT when applied to traditionally trained DL models. More specifically, we optimize each network using the SGD optimizer and then apply the proposed transformation to acquire its non-negative isomorphic network, reporting whether the performance exactly matches that of the original one. In each case, after performing a hyperparameter search, we evaluate the best configuration obtained (e.g., learning rate) over 5 evaluation runs and report the average and variance of the evaluation accuracy (or the F1 and \(\kappa\) scores on highly unbalanced datasets). The proposed transformation leads to exactly the same accuracy irrespective of the applied architecture, dataset, and/or photonic configuration. It enables us to exploit known optimization techniques without constraining the model's parameters during training, which has been shown to lead to performance degradation [3; 8; 12], and, in turn, to transform the model to its non-negative isomorphic before it is deployed on photonic hardware.
### Non-negative post training
We also evaluate the proposed transformation and non-negative optimization method when applied to regularly pre-trained models. More specifically, we train models using the SGD optimizer and, after a few epochs, we transform the trained network and continue the training of its isomorphic model in a non-negative manner. We compare the proposed non-negative optimization method to a baseline non-negative training approach used in the literature [3], given by the update rule \(\theta_{\tau}=\max\{0,\theta_{\tau-1}-\eta(\partial J/\partial\theta_{\tau-1})\}\in\mathbb{R}^{+}\), applied on the SGD optimizer and denoted as Clipping Stochastic Gradient Descent (CSGD).
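For reference, the CSGD baseline can be sketched in the same style (illustrative code); note that the clamp discards, rather than preserves, the sign information of the update:

```python
import torch

@torch.no_grad()
def csgd_step(params, eta=0.01):
    # Plain SGD step followed by clamping at zero.
    for p in params:
        if p.grad is not None:
            p.sub_(eta * p.grad).clamp_(min=0)
```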
| Dataset | Architecture | Regular (Photonic Sigmoid) | Non-negative Match | Regular (Photonic Sinusoidal) | Non-negative Match |
|---|---|---|---|---|---|
| MNIST | MLP | 97.44 ± 0.08 | ✓ | 97.90 ± 0.10 | ✓ |
| MNIST | CNN | 98.81 ± 0.14 | ✓ | 99.04 ± 0.07 | ✓ |
| FMNIST | MLP | 85.64 ± 0.22 | ✓ | 86.30 ± 0.52 | ✓ |
| FMNIST | CNN | 87.59 ± 0.47 | ✓ | 89.22 ± 0.18 | ✓ |
| CIFAR10 | MLP | 37.08 ± 0.37 | ✓ | 39.48 ± 1.69 | ✓ |
| CIFAR10 | CNN | 84.03 ± 0.69 | ✓ | 84.54 ± 0.28 | ✓ |
| Malimg* | CNN | 91.06 ± 4.56 | ✓ | 93.28 ± 4.30 | ✓ |
| Names | RNN | 56.02 ± 0.70 | ✓ | 65.82 ± 0.90 | ✓ |
| FI2010† | RNN | 0.1371 ± 0.0024 | ✓ | 0.1612 ± 0.0033 | ✓ |

*F1 score, †Cohen's kappa score are reported, since the datasets are highly unbalanced.

Table 1: Applying the proposed transformation to various datasets, architectures and photonic configurations. The Non-negative Match columns indicate whether the performance of the non-negative isomorphic network exactly matches that of the trained model.

In Table 2, we report the average evaluation accuracy and its variance over 5 evaluation runs using MLPs. The proposed framework, including both the transformation and NNSGD optimization, is reported in the 4th column. As baselines, we evaluate CSGD optimization used directly after pre-training (column 2) and after applying the proposed transformation (column 3). As shown, directly applying clipping to the trained parameters is catastrophic, since it leads to a huge loss of information from which the network is unable to recover. When the proposed transformation is applied, the transformed parameters preserve the information obtained during training while the models are further optimized with CSGD. Applying the proposed NNSGD method, the evaluation performance is slightly improved compared to CSGD. Compared to regular training over the same number of epochs, the performance obtained is highly competitive, in some cases even exceeding it (such as the photonic sinusoidal configuration on the FMNIST and CIFAR10 datasets). Finally, we provide an experimental analysis of parameter variance in the Appendix, which sheds light on the destructive effects of not applying the proposed transformation, as well as the benefits of applying NNSGD.
We extend the evaluation of the proposed method to both CNNs and RNNs, with the results reported in Table 3. Both optimization methods achieve results competitive with regular training, while the proposed non-negative optimizer slightly improves the performance of the non-negative isomorphic models compared to CSGD. Interestingly, on the Malimg dataset, the proposed non-negative optimizer improves the performance of the model even compared to regular training (by \(\sim 5\%\) in the case of the photonic sigmoid).
| Dataset | Regular + CSGD | Proposed + CSGD | Proposed + NNSGD |
|---|---|---|---|
| _Photonic Sigmoid_ | | | |
| MNIST | 11.00 ± 0.00 | **97.31 ± 0.90** | **97.40 ± 0.06** |
| FMNIST | 10.00 ± 0.00 | **84.77 ± 0.21** | **85.01 ± 0.12** |
| CIFAR10 | 10.00 ± 0.00 | **36.34 ± 0.37** | **36.81 ± 0.33** |
| _Photonic Sinusoidal_ | | | |
| MNIST | 09.74 ± 0.00 | **97.85 ± 0.10** | **97.86 ± 0.07** |
| FMNIST | 10.00 ± 0.00 | **86.62 ± 0.09** | **86.95 ± 0.17** |
| CIFAR10 | 10.00 ± 0.00 | **40.96 ± 0.30** | **41.19 ± 0.38** |

Table 2: Non-negative post-training after transformation using MLPs.
| Dataset | Exp. Initialization + CSGD | Proposed + CSGD | Exp. Initialization + NNSGD | Proposed + NNSGD |
|---|---|---|---|---|
| _Photonic Sigmoid_ | | | | |
| MNIST | 11.35 ± 0.00 | **92.52 ± 0.52** | 11.35 ± 0.00 | **93.34 ± 1.11** |
| FMNIST | 10.00 ± 0.00 | **82.31 ± 1.29** | 10.00 ± 0.00 | **84.09 ± 0.56** |
| CIFAR10 | 10.00 ± 0.00 | **37.25 ± 1.57** | 10.00 ± 0.00 | **37.61 ± 1.76** |
| _Photonic Sinusoidal_ | | | | |
| MNIST | 86.36 ± 1.39 | **92.51 ± 0.78** | 35.31 ± 6.11 | **94.14 ± 0.90** |
| FMNIST | 68.95 ± 3.71 | **82.91 ± 0.55** | 9.97 ± 0.02 | **83.57 ± 0.26** |
| CIFAR10 | 10.00 ± 0.00 | **34.96 ± 2.44** | 10.00 ± 0.00 | **35.41 ± 1.43** |

Table 4: Non-negative training from scratch in MLPs.
### Non-negative training from scratch
In the final set of experiments, we evaluate the proposed framework in fully non-negative training. The proposed transformation is applied after randomly initializing the weights using the default PyTorch initialization [16]. In Table 4, we use as baseline the non-negative exponential initialization used in [3]. Thus, four different non-negative approaches are evaluated: a) non-negative exponential initialization combined with the CSGD optimizer, b) the proposed transformation combined with the CSGD optimizer, c) non-negative exponential initialization combined with the proposed NNSGD optimizer, and d) the proposed non-negative framework. The non-negative exponential initialization (columns 2 and 4) leads to unstable performance, in contrast to the proposed NNT (columns 3 and 5), which leads to more robust non-negative training. Comparing the proposed non-negative optimization method with CSGD, we conclude that the proposed method leads to slightly better performance.
Due to the poor performance obtained when the non-negative exponential initialization is applied, in Table 5 we report the evaluation performance only for the cases where the proposed transformation is applied. The results confirm that the proposed NNSGD generalizes to CNN and RNN architectures and improves the average evaluation accuracy compared to CSGD when non-negative training from scratch is required. Similarly to non-negative post-training, the performance obtained is highly competitive with regular training, in some cases even exceeding it (such as on the Malimg dataset and with the RNNs).
## 7 Conclusions
We introduce a transformation method that acquires the non-negative isomorphic of a regular trained or untrained neural network. Along with the proposed non-negative optimization method, one can either train a non-negative neural network from scratch or continue its training when needed. The experimental results confirm that the acquired non-negative isomorphic network achieves exactly the same performance as the regular one, while, combined with the proposed non-negative optimization, it reaches accuracies competitive with regular training.
## 8 Limitations and future works
The proposed transformation cannot be directly applied to RNNs with gating mechanisms, such as GRUs and LSTMs, and further adjustments are needed. Also, the shifted activation function introduces a subtraction that can be integrated into the hardware implementation, since it shifts the transfer function toward the positive side of the \(x\)-axis. As a result, the proposed non-negative optimization method ensures non-negativity of the parameters only. In order to also obtain non-negative intermediate values, the proposed transformation should be applied anew after non-negative post-training, which can slightly increase the computational complexity. Furthermore, it should be noted that non-negative ANNs offer interpretability advantages, since they allow us to accomplish part-based learning: the elimination of canceling neurons results in an additive data representation [3], which is conceptually tied to human cognition [20]. As has been shown, non-negativity constraints significantly improve the human interpretation of ANNs through the visual representation of models [3; 8], giving us further insights that can be used to improve the performance of DL models [12]. The wide range of applications to which non-negative ANNs can be applied highlights the significance of this research direction.
| Method | MNIST | FMNIST | CIFAR10 | Malimg* | Names | FI2010† |
|---|---|---|---|---|---|---|
| _Photonic Sigmoid_ | | | | | | |
| Proposed (CSGD) | 97.69 ± 0.20 | 83.81 ± 0.85 | 71.61 ± 1.04 | 91.47 ± 4.37 | 54.00 ± 2.13 | 0.1338 ± 0.0045 |
| Proposed (NNSGD) | **98.14 ± 0.23** | **83.87 ± 0.19** | **72.16 ± 0.54** | **91.98 ± 4.00** | **56.68 ± 1.11** | **0.1498 ± 0.0112** |
| _Photonic Sinusoidal_ | | | | | | |
| Proposed (CSGD) | 98.54 ± 0.14 | 87.77 ± 0.36 | 69.87 ± 1.73 | 96.53 ± 0.42 | 64.17 ± 0.14 | 0.1732 ± 0.0175 |
| Proposed (NNSGD) | **98.71 ± 0.05** | **87.86 ± 0.36** | **75.16 ± 1.83** | **97.26 ± 0.31** | **68.70 ± 0.78** | **0.1789 ± 0.0102** |

*F1 score, †Cohen's kappa score are reported, since the datasets are highly unbalanced.

Table 5: Non-negative training from scratch in CNNs and RNNs. |
2304.04063 | Counterfactual Explanations of Neural Network-Generated Response Curves | Response curves exhibit the magnitude of the response of a sensitive system
to a varying stimulus. However, response of such systems may be sensitive to
multiple stimuli (i.e., input features) that are not necessarily independent.
As a consequence, the shape of response curves generated for a selected input
feature (referred to as "active feature") might depend on the values of the
other input features (referred to as "passive features"). In this work, we
consider the case of systems whose response is approximated using regression
neural networks. We propose to use counterfactual explanations (CFEs) for the
identification of the features with the highest relevance on the shape of
response curves generated by neural network black boxes. CFEs are generated by
a genetic algorithm-based approach that solves a multi-objective optimization
problem. In particular, given a response curve generated for an active feature,
a CFE finds the minimum combination of passive features that need to be
modified to alter the shape of the response curve. We tested our method on a
synthetic dataset with 1-D inputs and two crop yield prediction datasets with
2-D inputs. The relevance ranking of features and feature combinations obtained
on the synthetic dataset coincided with the analysis of the equation that was
used to generate the problem. Results obtained on the yield prediction datasets
revealed that the impact on fertilizer responsivity of passive features depends
on the terrain characteristics of each field. | Giorgio Morales, John Sheppard | 2023-04-08T16:25:21Z | http://arxiv.org/abs/2304.04063v2 | # Counterfactual Explanations of Neural Network-Generated Response Curves
###### Abstract
Response curves exhibit the magnitude of the response of a sensitive system to a varying stimulus. However, response of such systems may be sensitive to multiple stimuli (i.e., input features) that are not necessarily independent. As a consequence, the shape of response curves generated for a selected input feature (referred to as "active feature") might depend on the values of the other input features (referred to as "passive features"). In this work we consider the case of systems whose response is approximated using regression neural networks. We propose to use counterfactual explanations (CFEs) for the identification of the features with the highest relevance on the shape of response curves generated by neural network black boxes. CFEs are generated by a genetic algorithm-based approach that solves a multi-objective optimization problem. In particular, given a response curve generated for an active feature, a CFE finds the minimum combination of passive features that need to be modified to alter the shape of the response curve. We tested our method on a synthetic dataset with 1-D inputs and two crop yield prediction datasets with 2-D inputs. The relevance ranking of features and feature combinations obtained on the synthetic dataset coincided with the analysis of the equation that was used to generate the problem. Results obtained on the yield prediction datasets revealed that the impact on fertilizer responsivity of passive features depends on the terrain characteristics of each field. 1
Footnote 1: This paper is a preprint (accepted to appear in the International Joint Conference on Neural Networks 2023). IEEE copyright notice. 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Counterfactual explanations, response curves, deep regression, explainable machine learning.
## I Introduction
Response curves are tools that allow for the analysis of the responsivity of a sensitive system to a particular stimulus; we refer to that stimulus as the "active feature". Specifically, a response curve is defined as a curve that exhibits the various values taken by the response of the system to all admissible values of the active feature.
Typically, analysis of response curves is focused on their shape rather than on their absolute values. For instance, in pharmacology, the shape of a drug's dose-response curve reflects the strength of the drug [1, 2]. Likewise, in agriculture, nitrogen fertilizer-yield response (N-response) curves estimate the crop yield based on different fertilizer inputs. The shape of such curves can help determine the economic optimum nitrogen rate (EONR), defined as the nitrogen rate beyond which there is no actual profit for the farmers [3].
Motivating the current work, the response values displayed by response curves may depend not only on the relationship between the response variable and the active feature, but also on other stimuli, which we will refer to as "passive features". Traditionally, such response curves are fitted using univariate (non-)linear regression models between the active feature and the response variable. Such models could be based on parametric response functions that assume plateau-type, quadratic, or exponential behavior, in the case of N-response curves [3, 4], or sigmoidal, U-shape, and hill functions, in the case of dose-response curves [2, 5].
The traditional approaches assume that, given an active feature, the shape of its response curves is independent of other variables that could possibly affect the absolute response value. In the case of agricultural applications, fitting a single N-response curve for an entire field implies that the field is homogeneous and behaves similarly at every location. Nevertheless, recent works suggest that the N-response functional form varies across the fields given the variability of factors such as terrain slope and soil composition [6, 7].
In this work, we build upon the conclusion that the shape of response curves might be dependent not only on the active feature but also on the interaction of the active feature with the passive features. We present a method to derive non-parametric response curves from observed data. That is, we learn the functional form of the response curves instead of using parametric curve-fitting approaches. To do this, we train multivariate regression neural networks (NNs) that act as mappings from the feature space and the response value space. Thus, they are used to generate approximated response curves given an active feature.
The high complexity of the non-linear functions learned by NNs often prevents humans from explaining their behavior, which is why they are usually referred to as "black-box models." Given that the importance of NNs in inference tasks has grown rapidly, the area of explainable machine learning (XML) has gained more interest in recent years. XML aims to allow humans to identify cause-effect relationships between inputs and outputs of black-box models [8]. In concordance with this, we propose a _post-hoc_ explainability method that allows for understanding the impact that interacting passive features have on the shape of NN-generated response curves.
Thus, we propose a method that generates counterfactual explanations (CFEs) for each sample to find the minimum number of passive features we need to modify for the response curve of the counterfactual sample to show a change in responsivity with respect to the original one. The CFE generation problem is posed as a multi-objective optimization (MOO) problem and is solved using a genetic algorithm-based approach. The upper and lower regions of Fig. 1 illustrate the response curve generation and counterfactual response curve generation processes, respectively, of a given sample. Here, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) represents the selected genetic algorithm. Finally, we repeat this process for each sample in a dataset to derive global relevance scores, not only for each passive feature but also for specific combinations of them. Our specific contributions are summarized as follows:
1. Our main contribution is a CFE generation method posed as a MOO problem that, given a sample, finds the passive features with the greatest impact on the responsivity of the active feature.
2. Considering a multivariate regression problem, this is the first work that analyzes the influence of a set of passive features over the shape of the response curves generated for the response variable and a selected active feature.
3. We present a method to generate aligned approximate response curves using regression NNs.
4. We provide global scores that assess the relevance of individual features and combinations of features.
## II Related Work
Counterfactual explanations are used to identify "actionable knowledge," i.e., knowledge about causal dependencies between inputs and outputs of a system [9]. Such explanations contribute to understanding what could be changed in the input to achieve a desired outcome. Most CFE methods focus on classification problems where the objective is to find CFEs that produce class labels different from those that were originally predicted [10]. These methods can be categorized as model-agnostic or model-specific. The former are based on general principles that use outputs of already fitted black-box models [9, 11, 12], where the latter are specific to a particular class of methods, such as differentiable models [13] or tree-based ensembles [14].
To date, little attention has been given to counterfactual methods for regression. Spooner _et al._[15] proved that verifying the existence of counterfactuals is NP-complete. Regression problems may deal with multiple-dimensional real outputs and, thus, it is not always possible to have efficient algorithms for establishing the existence of counterfactuals with a desired target value. However, it is not always necessary to find CFEs that lead to specific target values. For example, Kommiya Mothilal _et al._[16] proposed a method that assesses individual feature relevance from CFEs. They consider that a feature is more relevant than others for a prediction task if it is changed more often when generating CFEs.
On the other hand, analyses of systems' response curves consider that the shape of the response curve can be described solely by the relationship between the response variable and the active feature [3, 4]. This requires the strong assumption that the explanatory features are mutually independent. If this held, the set of passive features would only shift the response curves vertically but would not alter their shape. However, there is no guarantee for this assumption to always hold, as seen in some precision agriculture applications [6, 7]. Thus, we argue that it is essential to acknowledge all possible sources of variability in analyzing the shape of response curves. To the best of our knowledge, no work has been published on this type of analysis.
In this paper, we explore the use of CFEs to analyze the impact of passive features on the shape of response curves. It is worth noting that Schwab _et al._[17] proposed a method to learn counterfactual representations for estimating individual dose-response curves using NNs. They aimed to estimate what would have happened if a different treatment (dose) had been given to a patient, hence their objective was to produce accurate response estimates across the entire range of all available treatment options. In this context, the term "counterfactual outcome" simply refers to the estimated response value obtained for dosages different from the one that the patient actually received (i.e., estimated response curves). This differs from our perspective significantly. Unlike previous counterfactual methods that aim to obtain a different response value, our objective is to generate counterfactual samples for a subset of the input features (i.e., passive features) in order to alter the response curve's shape. In other words, our CFEs aim to change the system's responsivity; that is, the way it reacts to all admissible values of the active feature.
Fig. 1: Overview of counterfactual response curve generation methodology.
## III Methodology
Let \(f(\cdot)\) denote the underlying function of a system whose response \(y\) is sensitive to \(n\) stimuli (i.e., input features) \(\textbf{x}=\{x_{1},\ldots,x_{n}\}\), such that \(y=f(\textbf{x})+\varepsilon\). Here, \(\varepsilon\) is a random variable that represents the error term. For convenience, we first consider the case where each input and output is one-dimensional (\(x_{i}\in\mathbb{R}\), \(\forall i\in[1,n]\) and \(y\in\mathbb{R}\)). Later in Sec. IV-B, we extend this approach to consider two-dimensional inputs.
Without loss of generality, suppose we select the \(s\)-th feature (\(s\in[1,n]\)) as the active feature. The set of remaining features (i.e., passive features) is denoted as \(\textbf{p}=\textbf{x}\setminus\{x_{s}\}\). Thus, a response curve generated for the \(s\)-th feature, \(R_{s}(\textbf{x})\), consists of the set of values taken by the response \(y\) for all admissible values of \(x_{s}\) (bounded by \(x_{s}^{\min}\leq x_{s}\leq x_{s}^{\max}\)) as follows:
\[R_{s}(\textbf{x})=\{f(\textbf{x}|x_{s}=x_{s}^{\min}),\ldots,f(\textbf{x}|x_{ s}=x_{s}^{\max})\}.\]
### _Response Curve Generation_
In many real-life settings, the underlying function \(f(\cdot)\) cannot be retrieved directly; therefore, we need to approximate it based on observed data. Let \(\textbf{X}=\{\textbf{x}_{1},\ldots,\textbf{x}_{N}\}\) be a data set with \(N\) training samples, where each sample is denoted as \(\textbf{x}_{j}=\{x_{j1},\ldots,x_{jn}\}\), and \(\textbf{y}=\{y_{1},\ldots,y_{N}\}\) is the set of corresponding target observations. We construct a NN regression model that captures the association between **X** and **y**. Its computed function is denoted as \(\hat{f}(\cdot)\), and \(\boldsymbol{\theta}\) denotes its weights. Thus, given an input \(\textbf{x}_{j}\), the target estimate is computed as \(\hat{y}_{j}=\hat{f}(\textbf{x}_{j},\boldsymbol{\theta})\).
The network \(\hat{f}\) is trained to reduce the mean square error of the estimations such that the parameters \(\boldsymbol{\theta}\) are obtained by the following optimization: \(\boldsymbol{\theta}^{\star}=\underset{\boldsymbol{\theta}}{\text{argmin}}\ \frac{1}{N}\sum_{j=1}^{N}(\hat{y}_{j}-y_{j})^{2}\). Hence, once the network is trained, and assuming that it captures the underlying causal structure of the problem sufficiently well, it can be used to generate an approximate response curve, \(\hat{R}_{s}(\textbf{x}_{j})\), for a given active feature and input sample \(\textbf{x}_{j}\):
\[\hat{R}_{s}(\textbf{x}_{j})=\{\hat{f}(\textbf{x}_{j}|x_{js}=x_{s}^{\min}), \ldots,\hat{f}(\textbf{x}_{j}|x_{js}=x_{s}^{\max})\}. \tag{1}\]
Note that we are not interested in the absolute estimated response values when comparing the shapes of two or more curves. Thus, we remove any vertical shift and obtain the aligned approximate response curve \(\tilde{R}_{s}(\textbf{x}_{j})\) by subtracting from \(\hat{R}_{s}(\textbf{x}_{j})\) its minimum value:
\[\tilde{R}_{s}(\textbf{x}_{j})=\hat{R}_{s}(\textbf{x}_{j})-\min(\hat{R}_{s}( \textbf{x}_{j})). \tag{2}\]
Finally, we create the set \(\textbf{R}_{s}\) that consists of the approximate response curves of all samples in **X**; that is, \(\textbf{R}_{s}=\{\tilde{R}_{s}(\textbf{x}_{0}),\ldots,\tilde{R}_{s}(\textbf{ x}_{N})\}\).
It is worth mentioning that we previously also experimented with other types of models (i.e., support vector machines and random forests) for the response curve generation process. However, due to the fast convergence rates and the substantial improvements in performance, we decided to focus on feedforward NNs (FNNs) and convolutional neural networks (CNNs) for 1-D and 2-D regression, respectively, over the other types of models.
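A minimal sketch of this procedure for the 1-D input case (our own illustration; `model.predict` follows the scikit-learn convention, which is an assumption):

```python
import numpy as np

def aligned_response_curve(model, x, s, x_min, x_max, n_points=50):
    """Sweep the active feature s of sample x through its admissible range,
    query the trained regressor (Eq. 1), and remove the vertical shift (Eq. 2)."""
    X_sweep = np.tile(x, (n_points, 1))                   # n_points copies of x
    X_sweep[:, s] = np.linspace(x_min, x_max, n_points)   # vary only the active feature
    curve = model.predict(X_sweep)
    return curve - curve.min()                            # aligned approximate curve
```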
### _Functional Principal Component Analysis_
The set \(\textbf{R}_{s}\) can be interpreted as a set of functional data where each of its samples corresponds to an approximated response curve. Recall that one of our objectives is to generate counterfactual response curves whose shapes differ with respect to the original ones. As such, we need to determine a distance metric that conveys the difference in shape between two functional data samples.
In that sense, we use functional principal component analysis (fPCA), which is a tool to extract the dominant modes of variation of functional data [18]. This approach allows us to obtain a reduced set of \(K\) orthonormal functions \(\{\xi_{1},\ldots\xi_{K}\}\) so that each curve in \(\textbf{R}_{s}\) is approximated as an expansion of these basis functions as \(\tilde{R}_{s}(\textbf{x}_{j})\approx\sum_{k=1}^{K}v_{k}^{(s)}(\textbf{x}_{j}) \,\xi_{k}\), where \(v_{k}^{(s)}(\textbf{x}_{j})\) is the value of the \(k\)-th principal component corresponding to the response curve generated for \(\textbf{x}_{j}\) considering the \(s\)-th feature as the active feature.
A _functional_ principal component (fPC) represents a distinct curve pattern. Hence, two curves with different shapes will be encoded using different fPC values. We consider using \(K=3\) fPCs, as they were sufficient to explain at least 99.5% of the variance of the datasets used in our experiments. We define our distance metric \(d_{s}(\textbf{x}_{j},\textbf{x}_{q})\), calculated between the transformed response curves generated for \(\textbf{x}_{j}\) and \(\textbf{x}_{q}\), as:
\[d_{s}(\textbf{x}_{j},\textbf{x}_{q})=\sqrt{\sum_{k=1}^{K=3}\left(v_{k}^{(s)}( \textbf{x}_{j})-v_{k}^{(s)}(\textbf{x}_{q})\right)^{2}}. \tag{3}\]
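As an illustration, the sketch below computes the distance of Eq. 3 using ordinary PCA on curves discretized over a common, equally spaced grid, which serves as a simple stand-in for fPCA under that sampling assumption (our own code, not the paper's implementation):

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_fpc_scores(R_s, K=3):
    # R_s: (N, n_points) array of aligned response curves on a shared grid.
    return PCA(n_components=K).fit_transform(R_s)   # scores v_k^{(s)} per sample

def curve_distance(scores_j, scores_q):
    return np.linalg.norm(scores_j - scores_q)      # Eq. (3)
```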
### _Counterfactual Explanations for Response Curves_
Given a sample \(\textbf{x}_{j}\), let \(\textbf{x}_{j}^{\prime}=\{x_{j1}^{\prime},\ldots,x_{jn}^{\prime}\}\) denote a counterfactual explanation and let \(\tilde{R}_{s}(\textbf{x}_{j}^{\prime})\) be its corresponding aligned approximated response curve. More specifically, we define \(\textbf{x}_{j}^{\prime}\) as a data point whose response curve \(\tilde{R}_{s}(\textbf{x}_{j}^{\prime})\) has a different shape from that of \(\tilde{R}_{s}(\textbf{x}_{j})\) (i.e., it shows different responsivity), such that \(d_{s}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})\geq\epsilon\), where \(\epsilon\) is a hyperparameter threshold. The CFE \(\textbf{x}_{j}^{\prime}\) is obtained by introducing perturbations to the set of passive features:
\[x_{ji}^{\prime}=\begin{cases}x_{ji}+\Delta_{ji},&\text{if }i\neq s\\ x_{ji},&\text{if }i=s.\end{cases}\]
Moreover, we aim to find the minimum subset of passive features that should be affected by small perturbations so that the CFE's responsivity is sufficiently different from that of the original one. As such, the counterfactual search problem can be cast as a MOO problem that is solved for each sample \(\textbf{x}_{j}\) as follows:
\[\min_{\textbf{x}_{j}^{\prime}}\textbf{g}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})=\min_{\textbf{x}_{j}^{\prime}}\left(g_{1}(\textbf{x}_{j},\textbf{x}_{j}^{\prime}),g_{2}(\textbf{x}_{j},\textbf{x}_{j}^{\prime}),g_{3}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})\right), \tag{4}\]
where \(g_{1}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})\), \(g_{2}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})\), and \(g_{3}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})\) are independent objective functions whose goals may contradict.
The first objective maximizes the distance between the transformed response curves using Eq. 3:
\[g_{1}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})=\begin{cases}-d_{s}(\textbf{x}_{j },\textbf{x}_{j}^{\prime}),&\text{if }d_{s}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})<\epsilon\\ -\epsilon,&\text{if }d_{s}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})\geq\epsilon. \end{cases} \tag{5}\]
The second objective function is used to minimize the number of features that are modified, which are calculated using the \(L_{0}\) norm of \((\textbf{x}_{j}-\textbf{x}_{j}^{\prime})\) so that:
\[g_{2}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})=||\textbf{x}_{j}-\textbf{x}_{j}^{ \prime}||_{0}=\sum_{i=1}^{n}\mathbb{I}_{x_{ji}\neq x_{ji}^{\prime}}. \tag{6}\]
Finally, the third objective function minimizes the distance between \(\textbf{x}_{j}\) and \(\textbf{x}_{j}^{\prime}\):
\[g_{3}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})=\frac{1}{n}\sum_{i=1}^{n} \delta_{G}(\textbf{x}_{ji},\textbf{x}_{ji}^{\prime}), \tag{7}\]
where \(\delta_{G}\) represents the Gower distance, which takes into account that passive features can be numerical or categorical:

\[\delta_{G}(x_{ji},x_{ji}^{\prime})=\begin{cases}\frac{1}{r_{i}}\,|x_{ji}-x_{ji}^{\prime}|,&\text{if numerical}\\ \mathbb{I}_{x_{ji}\neq x_{ji}^{\prime}},&\text{if categorical},\end{cases} \tag{8}\]
and \(r_{i}\) indicates the range of values of the \(i\)-th feature. All of the datasets used in this work consist of numerical features only (i.e., integer and real-valued); however, the use of the Gower distance in Eq. 7 allows us to consider more general cases.
Note that calculating \(g_{1}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})\), \(g_{2}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})\), and \(g_{3}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})\) involves generating the aligned approximated response curves \(\tilde{R}_{s}(\textbf{x}_{j})\) and \(\tilde{R}_{s}(\textbf{x}_{j}^{\prime})\). Some approaches optimize the CFE objective functions as part of a differentiable loss function [13]; however, those approaches cannot be applied in our case, given that generating an aligned approximated response curve is not a differentiable process (see Eq. 1 and 2).
Similar to the work proposed by Dandl _et al._[11], we use the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [19] to solve the mixed-variable optimization problem given in Eq. 4. NSGA-II is an elitist genetic algorithm that finds Pareto non-dominated solutions to MOO problems and uses a crowding distance measure to maintain diversity in subsequent generations. NSGA-II was preferred over the follow-on NSGA-III since it has shown superior performance on optimization problems with three objectives [20]. Note that even though we use a similar optimization framework to that of Dandl _et al._[11], our objectives are designed to alter the shape of the response curve of a given sample (Fig. 1), while theirs are designed to alter a single response value in the context of classification or regression problems.
Let us consider a population size of \(T_{0}\) CFE candidates, from which NSGA-II may select \(T\) non-dominated solutions (\(T\leq T_{0}\)) denoted as \(\{\textbf{x}_{j}^{(1)},\ldots,\textbf{x}_{j}^{(T)}\}\). For each solution, we calculate its performance \(z_{t}=\{g_{1}(\textbf{x}_{j}^{\prime(t)}),g_{2}(\textbf{x}_{j}^{\prime(t)}),g _{3}(\textbf{x}_{j}^{\prime(t)})\}\). The objective space is defined as the set of \(T\) three-dimensional points \(\textbf{Z}=\{z_{1},\ldots,z_{T}\}\). A solution \(z_{t}\) is said to dominate another solution \(z_{q}\) (\(z_{t}\preceq z_{q}\)) if it is no worse than \(z_{q}\) (i.e., \(g_{1}(\textbf{x}_{j}^{\prime(t)})\leq g_{1}(\textbf{x}_{j}^{\prime(q)})\), \(g_{2}(\textbf{x}_{j}^{\prime(t)})\leq g_{2}(\textbf{x}_{j}^{\prime(q)})\), and \(g_{3}(\textbf{x}_{j}^{\prime(t)})\leq g_{3}(\textbf{x}_{j}^{\prime(q)})\)) and it is strictly better than \(z_{q}\) in at least one objective. By definition, the set of non-dominated solutions **Z** constitutes a Pareto set on the Pareto front.
The remaining question is how to select the best solution from **Z**. Recall that we are mainly interested in minimizing the number of modified features that are sufficient to alter the responsivity of \(\textbf{x}_{j}\) such that \(d_{s}(\textbf{x}_{j},\textbf{x}_{j}^{\prime})\geq\epsilon\). Thus, from the subset of solutions in the Pareto set that produces the lowest \(g_{1}\) value, we select the solution that yields the lowest \(g_{2}\) value. Note that, according to this criterion, we could reconfigure our MOO problem to optimize \(g_{1}\) and \(g_{2}\) only, or \(g_{1}\) and \(g_{3}\) only (selecting the solution with the fewest changes). Nevertheless, in practice, we noticed substantially faster convergence rates by optimizing the three functions simultaneously, as they were shown to guide the search more effectively.
### _Local and Global Explanations_
Two types of explanation are sought: local and global. Local explanations convey which passive features have the greatest impact on the responsivity of a given sample, while global explanations allow for the identification of the passive features with the highest impact on the shape of the response curves generated for a sensitive system in general.
Given an input \(\textbf{x}_{j}\), its local explanation \(\alpha_{j}\) is the set of passive features that were modified during the generation of the counterfactual sample \(\textbf{x}_{j}^{\prime}\):
\[\alpha_{j}=\{i\mid(x_{ji}\in\textbf{x}_{j})\wedge(x_{ji}^{\prime}\in\textbf{x }_{j}^{\prime})\wedge(x_{ji}\neq x_{ji}^{\prime})\}. \tag{9}\]
On the other hand, the global explanation of a sensitive system is twofold. First, we assess individual feature relevance \(r_{i}\) by providing the ratio of times that a feature was modified during the CFE generation process of all samples in **X**:
\[r_{i}=\frac{1}{N}\sum_{j=1}^{N}\mathbb{I}_{i\,\in\,\alpha_{j}}. \tag{10}\]
We acknowledge that passive features are not necessarily independent, and thus individual relevance scores are not enough to understand how they interact. For that reason, we also report the five most repeated feature combinations. By doing so, we identify which features react together and which feature combinations are the most effective. We restrict the number of reported feature combinations to five for conciseness, as explanations become more manageable for humans when smaller result sets are provided [8]. However, note that more feature combinations might need to be provided when dealing with problems with several passive features (the datasets used in this work require fewer than eight).
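The aggregation of the local explanations \(\alpha_{j}\) into the global scores of Eq. 10 and the top feature combinations can be sketched as follows (our own illustration; `local_explanations` is assumed to hold the sets \(\alpha_{j}\)):

```python
from collections import Counter

def global_explanations(local_explanations, n_samples, top=5):
    feat_counts, combo_counts = Counter(), Counter()
    for alpha in local_explanations:
        feat_counts.update(alpha)                 # counts for Eq. (10)
        combo_counts[frozenset(alpha)] += 1       # exact feature combinations
    relevance = {i: cnt / n_samples for i, cnt in feat_counts.items()}
    return relevance, combo_counts.most_common(top)
```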
## IV Experimental Results
For our experiments, we evaluated our approach on a synthetic dataset with 1-D inputs and two real-world crop yield prediction datasets with 2-D inputs. As stated previously, no other works have been published on the analysis of the impact that a set of passive features have on the shape of response curves generated for an active feature. Therefore, we were unable to include other methods for comparison.
We also note that the described approach should not be confused with a sensitivity analysis, which is used to study
how the different values of a set of independent variables affect the response variable. In contrast, we study how the different values of a subset of the input features (i.e., the passive features) affect how the response variable reacts to the entire range of admissible values of a selected feature (i.e., the active feature). Furthermore, we do not assume independence among features; thus, we report the feature combinations with the greatest responsivity impact in addition to the estimated individual feature relevance values.
A 10-fold cross-validation (CV) design was used with all datasets. Having selected an active feature, the trained network was used to generate the response curves for all samples in the dataset along with their corresponding CFEs. We argue that including samples from the training set in this process does not lead to biased results. The reason is that, when generating a response curve, we synthesize samples that were not observed either in the training set or in the validation set. We considered that the resulting global explanations would be more consistent if they were calculated using as many approximated response curves as possible. Thus, these curves were used to produce local and global explanations, and we produced an independent set of explanations for each of the ten folds, given each fold could yield different curves since we have no ground truth for the curves themselves. Even so, if the data available is sufficiently large and diverse, it is reasonable to expect similar curves and explanation results from the ten models. The implementation code is available at [https://github.com/GiorgioMorales/ResponsivityAnalysis](https://github.com/GiorgioMorales/ResponsivityAnalysis).
### _Synthetic Dataset_
Validation of the analysis proposed in this work is challenging if the target response curves are unknown, which is the case when working with our real-world agricultural applications. For this reason, we created a synthetic dataset consisting of 10,000 samples derived from the following multiple nonlinear regression problem with five input features:
\[y=\text{sigmoid}((10x_{1}-5)+x_{2})x_{3}^{2}x_{4}+10x_{5}, \tag{11}\]
where \(x_{1}\)\(\sim\)\(U(0,1)\), \(x_{2}\)\(\sim\)\(U(-3,3)\), \(x_{3}\)\(\sim\)\(U(1,2)\), \(x_{4}\)\(\sim\)\(U(1,2)\), and \(x_{5}\)\(\sim\)\(U(0,2)\). We used \(x_{1}\) as the active feature (\(s=1\)).
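For reference, the synthetic data can be reproduced in a few lines (a sketch; the random seed is our own arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
x1, x2 = rng.uniform(0, 1, N), rng.uniform(-3, 3, N)
x3, x4 = rng.uniform(1, 2, N), rng.uniform(1, 2, N)
x5 = rng.uniform(0, 2, N)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
y = sigmoid((10 * x1 - 5) + x2) * x3**2 * x4 + 10 * x5   # Eq. (11)
X = np.column_stack([x1, x2, x3, x4, x5])
```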
For these experiments, we trained a feed-forward neural network with two hidden layers, each with 100 nodes. We calculated the mean square error (MSE) on the validation sets after CV to analyze regression performance. The resulting average MSE and standard deviation were \(6.78\times 10^{-3}\pm 1.32\times 10^{-4}\). Fig. 2 shows the aligned ground-truth response curves (generated from Eq. 11) and the aligned approximated response curves using the NN trained during the first CV iteration, for 100 random samples.
For the CFE generation process, we used a population size of \(T=50\) samples for NSGA-II and 100 iterations. Eq. 5 specifies that two response curves show different responsivity if their distance in the transformed space (after using fPCA) is greater than a threshold \(\epsilon\). Since selecting \(\epsilon\) is subjective, we considered using multiple threshold values and evaluating the consistency of the results. Intuitively, higher \(\epsilon\) leads to bigger differences between the shapes of two response curves; thus, the number of modified passive features might increase. In the future, we plan to replace this threshold with tests that determine if the responsivity of two or more response curves is statistically significant. For this dataset, we considered three thresholds: \(\epsilon\in\{0.4,0.6,0.8\}\). These values were chosen because they allowed us to see how the feature relevance values change as the \(\epsilon\) values increase.
For example, Fig. 3 shows the counterfactual response curves generated for the first sample of the dataset (\(j=1\)) using the network trained during the first CV iteration. Given this input, the local explanation is given by \(\alpha_{1}=\{2,3\}\) for the three selected thresholds. In other words, it was sufficient to modify the second and third features, \(x_{2}\) and \(x_{3}\) to alter the responsivity of the selected sample, regardless of the threshold.
We repeated this process for all the samples to obtain global explanations. Fig. 4 shows the individual feature relevance values achieved for all CV iterations and \(\epsilon\) values. The remaining global explanations are given by the most frequent feature combinations, which are reported in Table I. Specifically, we counted the combination of features that were most repeated
Fig. 4: Individual relevance of passive features (in %) of the synthetic dataset.
Fig. 3: Example of the counterfactual response curves generated using \(\epsilon=0.4,0.6,\text{and}\,0.8\) for a sample of the synthetic dataset.
Fig. 2: Response curve generation from the synthetic dataset. **(a)** Ground-truth response curves. **(b)** NN-generated response curves.
across all the CV iterations and selected the top five. For each selected combination, we calculated the ratio of times it appeared in each CV iteration and reported the mean ratio along with its corresponding standard deviation.
### _Yield Prediction Dataset_
To evaluate the usefulness of our approach in a real-world setting, we analyzed data collected for a crop yield prediction problem, which is one of the main tasks of precision agriculture. Accurate and reliable crop yield prediction provides farmers with tools to make informed decisions, such as determining the nitrogen fertilizer rates needed in specific regions of their fields to maximize their profits [6].
We used an early-yield prediction dataset of winter wheat we curated and presented in previous work [21]. The early-yield prediction is posed as a regression problem where the explanatory variables are represented by a set of features obtained during the growing season (March):
1. \(N\): Nitrogen rate applied previously (lb/ac).
2. \(A\): Topographic aspect (radians).
3. \(S\): Topographic slope (degrees).
4. \(TPI\): Topographic position index.
5. \(P\): Prior year precipitation (mm).
6. \(VH\) and \(VV\): Backscattering coefficients obtained from synthetic aperture radar (SAR) images from Sentinel-1.
The response variable corresponds to the yield value in bushels per acre (bu/ac), measured during the harvest season (August). Hence, the data acquired in March is used to predict crop yield values in August of the same year.
Each sample \(\textbf{x}_{j}=\{x_{j1},\ldots,x_{jn}\}\) is represented as a spatial data cube of \(5\times 5\) pixels with seven features or channels (\(n=7\)); that is, \(x_{ji}\in\mathbb{R}^{5\times 5},\,\forall i\in[1,n]\). Each pixel represents a region of \(10\times 10\,\mathrm{m}\) of the field. The output represents the yield value corresponding to the central pixel of the input sample (\(y_{j}\in\mathbb{R}\)). To tackle this regression problem, we trained a convolutional neural network. In particular, we used the Hyper3DNetReg network architecture we proposed in [21]. It is a 3D-2D CNN specifically designed to predict the yield values of small spatial neighborhoods of a field. For our experiments, we used data collected from two winter wheat fields, which we refer to as "Field A" and "Field B". Data from three growing seasons were collected for each field (2016, 2018, and 2020).
The selected active feature of this problem was \(N\) (i.e., fertilizer input, \(s=1\)), as we are interested in the analysis of the N-response curves. The remaining six features constituted the set of passive features. Important factors such as EONR depend directly on the shape of these curves. For instance, EONR is traditionally found as the fertilizer rate at which the first derivative of the N-response curve is equal to a common yield-nitrogen price ratio [3].
After CV, the average validation MSE and standard deviation were \(147.25\pm 8.17\) and \(50.02\pm 4.29\) for fields A and B, respectively. Fig. 5.a and Fig. 5.b show the aligned approximated response curves generated for 100 random samples of fields A and B, respectively, using the Hyper3DNetReg network. For these datasets, we experimented with \(\epsilon\in\{0.4,0.6,0.8\}\). Fig. 6 shows an example of the response curves generated for the first sample of field A (\(j=1\)) using the network from the first CV iteration. Here, the local explanation is given by \(\alpha_{1}=\{6\}\) for the three selected thresholds. That is, we only needed to decrease the value of the sixth feature, \(VH\), to alter the responsivity of the selected sample. Counterfactual response curves and local explanations generated for samples of field B were similar but are omitted due to space limitations.
Figs. 7 and 8 show the individual feature relevance values achieved for all CV iterations and \(\epsilon\) values for fields A and B, respectively. Tables II and III show the mean percentage of times that the top-five feature combinations appeared across the CV iterations for fields A and B, respectively.
## V Discussion

Even though ground-truth N-response curves are unobservable, we argue that the curves shown in Fig. 5 are sound. For example, previous works have described N-response curves as sigmoid-like curves [4], similar to most of the curves in Fig. 5.b. Other works have also considered quadratic functions [3] that account for the apparent decrease in yield response after reaching a certain saturation point, which can be seen in most of the curves in Fig. 5.a. Note that our analysis, as any other explainability method, explains the function approximated by the model. Specifically, we aim to identify the features (and feature combinations) that the model considers to have the most responsivity impact.
The regression problem introduced in Eq. 11 consists of four passive features. \(x_{2}\) alters the shape of the response curves by stretching them horizontally. Both \(x_{3}\) and \(x_{4}\) multiply the sigmoid function; however, since \(x_{3}\) is squared, it is expected to have a greater impact on the shape of the response curves. In addition, a small change in \(x_{3}\) produces a vertical stretching such that the resulting distance between the modified and original response curves is greater than the one produced by modifying \(x_{2}\) by the same amount. From Fig. 4, we verified that \(x_{3}\) has a greater responsivity impact than \(x_{4}\) and \(x_{2}\), as it was modified more often by our CFE generation process. Finally, \(x_{5}\) is independent of the rest of the features, which implies that it only shifts the response curves vertically but does not alter their shape. Again from Fig. 4, we verified that \(x_{5}\) was assigned relevance values near \(0\%\). Hence, our method found that the feature with the greatest responsivity impact is \(x_{3}\), followed in relevance by \(x_{2}\) and \(x_{4}\). We conclude that the results obtained by our method coincide with those obtained by analyzing the equation used to generate the dataset. Also, Table I shows that the most effective feature combination is \(x_{2}\) and \(x_{3}\), followed by \(x_{3}\) and \(x_{4}\).
We observed that the relevance ranking of features remained constant across different values of \(\epsilon\). For instance, Fig. 4 shows that the ranking of features (from most to least relevant) is \([x_{3},x_{2},x_{4},x_{5}]\) for the three tested \(\epsilon\) values. Similarly, Fig. 7 shows that, for the three tested \(\epsilon\) values in field A, \(A\) is the most relevant feature, the relevance scores of \(S\), \(TPI\), \(VH\), and \(VV\) are comparable, and \(P\) is the least relevant feature. A similar behavior is observed in Fig. 8. This is meaningful because it suggests that the selection of \(\epsilon\) is not crucial when identifying the passive features with the greatest global relevance.
Note in Fig. 4 that the resulting relevance scores are similar for all CV iterations. This indicates that the NN model learns similar functions across different iterations. This is also the case for Figs. 7 and 8, although they show greater variation due to overfitting issues caused by limited dataset sizes and lack of data variability in some of the training folds.
Furthermore, we found that the relevance values increase as \(\epsilon\) increases. This is because, given a sample, an increment in the desired distance threshold makes it likely that the number of modified passive features will grow. For example, Table I shows that the most repeated feature combination, \((x_{2},x_{3})\), occurs 10.9% of the time when \(\epsilon=0.4\) but 36.6% when \(\epsilon=0.8\). This also happens for fields A and B, as seen in Tables II and III. However, the most repeated feature combination for field A, \((A,VV)\), occurs only 0.6% of the time when \(\epsilon=0.6\) and 4.3% when \(\epsilon=1\). This means that changes in individual features have a greater impact on the models trained for field A. For instance, in Fig. 6, it was enough to reduce the \(VH\) value to alter the shape of the response curve. This is because \(VH\) is related to the moisture content, meaning that lower \(VH\) values indicate less moisture in the soil. Thus, it is reasonable to expect the soil to be less responsive to higher amounts of nitrogen.
From Figs. 7 and 8, we see that the feature relevance values differ between the two fields. The main reason is that field A is located on steep, abrupt terrain while field B is not. As
Fig. 8: Rounded individual relevance of passive features (in %) of field B.
Fig. 7: Rounded individual relevance of passive features (in %) of field A.
a consequence, the model trained for field A learned that the aspect \(A\) (i.e., the slope orientation) is the most relevant feature. Interestingly, in terrain with varying elevations located in the Northern Hemisphere, regions facing north and east receive limited sunlight during the day and are more prone to snow retention; these factors may affect the responsiveness of the fertilizer. On the other hand, field B has an almost constant elevation, so \(A\) is not an important factor. The model trained for this field learned that the \(TPI\), which is related to the ruggedness of the terrain, is the most relevant feature. The slope \(S\), which influences fertilizer runoff and in turn the responsiveness of the fertilizer, is the second most important factor for this field. Finally, \(P\) was assigned a low responsivity impact on both fields. This seems counterintuitive considering that precipitation is a critical factor for crop production. It suggests that \(P\) is independent of the other features and, as with \(x_{5}\) in the synthetic problem, only shifts the response curves vertically without altering their shape.
## VI Conclusion
The analysis of feature response curves often ignores other explanatory variables as a source of variability in the shape of the curves. Acknowledging all relevant variables, quantifying their responsivity impact, and understanding how they interact may improve the accuracy of important applications such as drug dose optimization and N-fertilization.
We presented a method that generates approximate response curves for a selected active feature using neural networks. Our approach then estimates the impact that a set of passive features has on the shape of the response curves by generating counterfactual explanations. Experimental results on a synthetic dataset coincide with expectations derived from the analysis of the equation used to generate the training data. Experiments on two crop yield prediction datasets found that the factor with the greatest responsivity impact on N-response curves was the terrain aspect for one of the studied winter wheat fields, and the topographic position index for the other. While perhaps already understood by farmers and agronomists, this analysis confirms that the models reflect what science would expect in crop production. Future work will focus on designing tests that determine whether the responsivity of two or more response curves is statistically significant. We also plan to use equation discovery approaches to approximate parametric equations for crop production.
## Acknowledgements
The authors wish to thank the team members of the On-Field Precision Experiment (OFPE) project for their comments throughout the development of this work. We would also like to thank Jordan Schupbach for providing advice on the experimental design. This research was supported by a USDA-NIFA-AFRI Food Security Program Coordinated Agricultural Project (Accession Number 2016-68004-24769), and also by the USDA-NRCS Conservation Innovation Grant from the On-farm Trials Program (Award Number NR213A7500013G021).
|
2304.08881 | Segmentation of glioblastomas in early post-operative multi-modal MRI
with deep neural networks | Extent of resection after surgery is one of the main prognostic factors for
patients diagnosed with glioblastoma. To achieve this, accurate segmentation
and classification of residual tumor from post-operative MR images is
essential. The current standard method for estimating it is subject to high
inter- and intra-rater variability, and an automated method for segmentation of
residual tumor in early post-operative MRI could lead to a more accurate
estimation of extent of resection. In this study, two state-of-the-art neural
network architectures for pre-operative segmentation were trained for the task.
The models were extensively validated on a multicenter dataset with nearly 1000
patients, from 12 hospitals in Europe and the United States. The best
performance achieved was a 61\% Dice score, and the best classification
performance was about 80\% balanced accuracy, with a demonstrated ability to
generalize across hospitals. In addition, the segmentation performance of the
best models was on par with human expert raters. The predicted segmentations
can be used to accurately classify the patients into those with residual tumor,
and those with gross total resection. | Ragnhild Holden Helland, Alexandros Ferles, André Pedersen, Ivar Kommers, Hilko Ardon, Frederik Barkhof, Lorenzo Bello, Mitchel S. Berger, Tora Dunås, Marco Conti Nibali, Julia Furtner, Shawn Hervey-Jumper, Albert J. S. Idema, Barbara Kiesel, Rishi Nandoe Tewari, Emmanuel Mandonnet, Domenique M. J. Müller, Pierre A. Robe, Marco Rossi, Lisa M. Sagberg, Tommaso Sciortino, Tom Aalders, Michiel Wagemakers, Georg Widhalm, Marnix G. Witte, Aeilko H. Zwinderman, Paulina L. Majewska, Asgeir S. Jakola, Ole Solheim, Philip C. De Witt Hamer, Ingerid Reinertsen, Roelant S. Eijgelaar, David Bouget | 2023-04-18T10:14:45Z | http://arxiv.org/abs/2304.08881v1 | # Segmentation of glioblastomas in early post-operative multi-modal MRI with deep neural networks
###### Abstract
Extent of resection after surgery is one of the main prognostic factors for patients diagnosed with glioblastoma. To achieve this, accurate segmentation and classification of residual tumor from post-operative MR images is essential. The current standard method for estimating it is subject to high inter- and intra-rater variability, and an automated method for segmentation of residual tumor in early post-operative MRI could lead to a more accurate estimation of extent of resection. In this study, two state-of-the-art neural network architectures for pre-operative segmentation were trained for the task. The models were extensively validated on a multicenter dataset with nearly 1000 patients, from 12 hospitals in Europe and the United States. The best performance achieved was a 61% Dice score, and the best classification performance was about 80% balanced accuracy, with a demonstrated ability to generalize across hospitals. In addition, the segmentation performance of the best models was on par with human expert raters. The predicted segmentations can be used to accurately classify the patients into those with residual tumor, and those with gross total resection.
*these authors contributed equally to this work
## 1 Introduction
Glioblastoma, the most common malignant primary brain cancer, requires a multidisciplinary treatment approach comprising maximum safe surgical resection, followed by concurrent radiation and chemotherapy [1]. Even so, median survival in unselected patients is only 12 months [2]. Due to the highly invasive nature of the tumor, a complete resection of all tumor cells is not possible. Still, extensive surgical resections are associated with longer survival [3], but as surgically induced neurological impairment is associated with shorter survival [4], the extent of resection (EOR) and the surgical strategy, for example resection or biopsy only, need to be weighed against the risks in individual patients.
The EOR is calculated as the ratio between the surgically-removed and pre-operative tumor volume, which relies on an accurate segmentation of the tumor tissue in both pre- and post-operative MR scans. In recent years, a large body of work has focused exclusively on automated segmentation of pre-operative glioblastoma, yet the task of residual tumor segmentation from early post-operative MRI (EPMR) has gained less attention from the research community. In current practice, the residual tumor size is estimated manually through eye-balling [5], or using crude measures such as the bi-dimensional product of the largest axial diameter of the contrast enhancing residual tumor, according to the Response Assessment in Neuro-Oncology (RANO) criteria [6]. Manual volume segmentations are more sensitive but expertise-dependent and time-consuming, with high inter- and intra-rater variability [5, 7]. An automated method for post-operative tumor volume segmentation from EPMR would therefore be beneficial.
Glioblastoma segmentation from pre-operative MR scans has received a lot of attention in the literature in recent years. Many contributions were motivated by the MICCAI Brain Tumor Segmentation (BraTS) Challenge [8]. With the emergence of fully convolutional neural networks (CNNs) [9], deep learning-based approaches have nearly completely replaced more conventional methods in medical image segmentation [10]. Variants of the U-Net architecture [11] have provided the base architecture for the majority of auto-segmentation algorithms, including DeepMedic [12], Attention U-Net [13], and the recently established nnU-Net [14], with state-of-the-art results in several medical image segmentation benchmarks. The winning submissions in the BraTS challenge in 2021 and 2022 were an extension of the nnU-Net architecture [15], and an ensemble of three state-of-the-art architectures for medical image segmentation, comprising nnU-Net, DeepSeg, and DeepSCAN [16], respectively. In the absence of a publicly available dataset for residual tumor segmentation from EPMR, the literature on this problem is sparse when compared to the pre-operative segmentation task. Semi-automatic methods, combining one or several voxel- or shape-based image segmentation algorithms, have been proposed based on intensity thresholding (e.g., Otsu and relative entropy) [17, 18, 19], fuzzy algorithms [18], Gaussian mixture models [20], morphological operations [19], region-based active contours [21, 22], the level set approach [21, 22, 23], and CNNs [24]. Unfortunately, these methods relied on user input, either through manual initialisation or through interactive refinement of the resulting segmentation. They are therefore challenging to use in clinical practice, and on large datasets. In addition, all validation studies were performed solely on single-center local datasets, consisting of 15 to 37 patients, making it difficult to demonstrate the generalizability of the proposed methods.
Regarding fully automated approaches, Meier et al. [25] presented an automated method based on decision forests for residual tumor segmentation using EPMR from 19 patients. A more recent work by Ghaffari et al. [26] proposed to fine-tune a 3D densely connected U-Net, pre-trained on the BraTS20 dataset, on a local dataset of 15 post-operative glioblastomas. However, the MR scans were all acquired for radiation therapy planning and not within the recommended time frame for acquiring EPMR scans, i.e., within 72 hours after surgical resection [6]. Deep learning approaches have recently been shown to outperform more traditional algorithms on most image segmentation tasks, including segmentation of pre-operative glioblastomas [15, 16]. The
utmost requirement is the number of included patients and the quality of the MR images comprising a study dataset. Preferably, the data should originate from different locations, to evaluate the ability of the trained models to generalize across different hospitals, scanners, or clinical practice.
In this work, we determine the performance of two CNN architectures for segmenting residual enhancing glioblastoma on early post-operative scans. The selected architectures are the nnU-Net, the state of the art for pre-operative glioblastoma segmentation, and AGU-Net, an architecture developed for pre-operative segmentation of brain tumors. Both architectures have demonstrated excellent performance in previous studies on pre-operative brain tumor segmentation [27, 28, 29], and they exhibit different strengths and weaknesses. The automatic results are compared with manual segmentations, using different combinations of MRI scans in a large dataset consisting of paired pre- and early post-operative MRI scans from 956 patients in 12 medical centers in Europe and the United States. Extensive validation studies are presented to identify the best architecture configuration, quantify the performances and the ability to generalize, and highlight the potential relevance for use in clinical practice. Finally, the best performing models are made publicly available and integrated into the open software Raidionics [29].
## 2 Materials & Method
### Data
A dataset comprised of pre-operative and early post-operative MRI scans from 956 patients, who underwent surgical resection of glioblastoma, was assembled for this study. Twelve different hospitals across Europe and in the US contributed data, with the following patient distribution per center: 23 patients from the Northwest Clinics, Alkmaar, Netherlands (_ALK_); 73 patients from the Amsterdam University Medical Centers, location VU medical center, Netherlands (_AMS_); 43 patients from the University Medical Center Groningen, Netherlands (_GRO_); 40 patients from the Medical Center Haaglanden, the Hague, Netherlands (_HAG_); 55 patients from the Humanitas Research Hospital, Milano, Italy (_MIL_); 41 patients from the Hopital Lariboisiere, Paris, France (_PAR_); 108 patients from the University of California San Francisco Medical Center, U.S. (_SFR_); 53 patients from the University Medical Center Utrecht, Netherlands (_UTR_); 45 patients from the Medical University Vienna, Austria (_VIE_); 51 patients from the Isala hospital, Zwolle, Netherlands (_ZWO_); 237 patients from St. Olavs hospital, Trondheim University Hospital, Norway (_STO_); and 187 patients from the Sahlgrenska University Hospital, Gothenburg, Sweden (_GOT_).
The cohorts are subsets of a broader dataset, thoroughly described previously for their pre-operative content [30], for patients with available EPMR data. All EPMR scans were acquired within 72 hours after surgery, with the exception of the UTR center where the limit used was up to one week post-surgery. The recommended time frame for acquiring the EPMR scans has been stated in the National Comprehensive Cancer Network (NCCN) recommendations [31], in order to maximize differences between residual enhancing tumor and enhancement due to post-surgical changes in the tissue [32, 33]. For each patient in the dataset, the following post-operative MRI sequences were acquired: T1-weighted (T1w), gadolinium-enhanced T1-weighted (T1w-CE), and T2-weighted fluid attenuated inversion recovery (FLAIR).
The residual tumor tissue was manually segmented in 3D in T1w-CE MR scans by trained annotators, supervised by expert neuroradiologists and neurosurgeons. The manual segmentation was performed using all available standard MR sequences,
Figure 1: Dataset examples for four patients, separated by white dash-lines. For each patient, an axial view from the EPMR T1w-CE, EPMR T1w, EPMR FLAIR, and pre-operative T1w-CE are displayed. Outlines of the manually annotated tumors are shown in green.
and residual tumor tissue was defined as enhancing tissue in the T1w-CE scan, but darker in the T1w scan. Hence, blood was distinguished from residual tumor by a hyperintense signal on T1w scans. For each patient, a further post-operative distinction can be made between cases showcasing residual tumor (RT) in EPMR scans and cases presenting a gross total resection (GTR), defined as a residual tumor volume of less than 0.175 ml [34]. The cut-off was chosen to reduce the risk of interpretation problems when distinguishing between tumor enhancement and non-specific enhancement, such as small vessels or enhancing pia.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c}
**Hospital** & **HAG** & **MIL** & **ZWO** & **VIE** & **ALK** & **PAR** & **SFR** & **GRO** & **UTR** & **AMS** & **STO** & **GOT** \\ \hline Patients & 40 & 55 & 51 & 45 & 23 & 41 & 108 & 43 & 53 & 73 & 237 & 187 \\ \hline RT & 23 & 34 & 18 & 30 & 18 & 29 & 80 & 26 & 20 & 51 & 162 & 113 \\ GTR & 17 & 21 & 33 & 15 & 5 & 12 & 28 & 17 & 33 & 22 & 75 & 74 \\ \hline RT ratio (\%) & 57.5 & 61.8 & 35.3 & 66.7 & 78.3 & 70.7 & 74.1 & 60.5 & 37.7 & 69.9 & 68.4 & 60.4 \\ \end{tabular}
\end{table}
Table 1: Dataset distributions and statistics across the twelve hospitals, represented by their acronyms. RT: residual tumor, GTR: gross total resection.
Figure 2: Overall residual tumor segmentation pipeline from EPMR scans and classification between gross total resection or residual tumor. The registration is performed using the SyN approach from ANTs, multiple input configurations using different combinations of MR sequences were considered (noted from A to E), and two architectures were evaluated: nnU-Net and AGU-Net.
Five input configurations were considered: (**A**) the EPMR T1w-CE scan only; (**B**) the EPMR T1w-CE and T1w scans; (**C**) all standard EPMR sequences: T1w-CE, T1w, and FLAIR scans; (**D**) the EPMR T1w-CE and T1w scans, together with the pre-operative T1w-CE MR scan and corresponding tumor segmentation mask; and (**E**) all standard EPMR sequences: T1w-CE, T1w, and FLAIR scans, and the pre-operative T1w-CE MR scan and corresponding tumor segmentation mask. An overview of the whole segmentation pipeline with the different input designs and subsequent steps is presented in the following sections, and illustrated in Fig. 2.
#### 2.2.1 Pre-processing
For proper anatomical consistency across the different input sequences, an initial image-to-image registration procedure was performed. The EPMR T1w-CE scan was selected as the reference space and all subsequent volumes were registered to it using the SyN diffeomorphic method [35] from the Advanced Normalization Tools (ANTs) framework [36]. Skull-stripping was subsequently performed on all input MR scans, based on the brain mask from the EPMR T1w-CE scan. All brain masks were automatically generated using a pre-trained slab-wise AGU-Net model with an input shape of \(256\times 192\times 32\) voxels. For the nnU-Net architecture, the pre-processing was automatically decided by the framework based on the dataset, and all inputs were resampled to \(0.5\times 0.5\times 1.0\,\mathrm{mm^{3}}\) spacing and zero-mean normalized. For the AGU-Net architecture, the full-volume analysis required a lower resolution, and therefore all inputs were resampled to an isotropic \(1.0\,\mathrm{mm^{3}}\) spacing, resized to \(128\times 128\times 144\) voxels, and zero-mean normalized.
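For illustration, the registration step can be reproduced with ANTsPy, the Python interface to the ANTs framework; the file names below are placeholders and the parameters are left at their defaults, which may differ from the exact settings used in this work:

```python
import ants  # ANTsPy, the Python wrapper around the ANTs framework

fixed = ants.image_read("epmr_t1ce.nii.gz")    # EPMR T1w-CE: reference space
moving = ants.image_read("preop_t1ce.nii.gz")  # e.g. pre-operative T1w-CE

# SyN diffeomorphic registration of the moving scan to the EPMR T1w-CE space
reg = ants.registration(fixed=fixed, moving=moving, type_of_transform="SyN")
warped = reg["warpedmovout"]

# The same transform can be applied to other volumes (e.g. the pre-operative
# tumor segmentation mask), using nearest-neighbor interpolation for labels
mask = ants.image_read("preop_tumor_mask.nii.gz")
warped_mask = ants.apply_transforms(
    fixed=fixed, moving=mask,
    transformlist=reg["fwdtransforms"], interpolator="nearestNeighbor",
)
```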
#### 2.2.2 Training specifications for the nnU-Net architecture
**Architecture design** Based on the nnU-Net framework's analysis of the dataset, the 3D full-resolution U-Net was recommended with the following parameters, using \(192\times 128\times 80\) voxels as the input patch size. The network used five levels, downsampling with strided convolution layers and upsampling with transposed convolution layers. A kernel size of \(1\times 3\times 3\) voxels was used for the first level and \(3\times 3\times 3\) for the remaining four levels, with filter sizes of \(\{32,64,128,256,320\}\) for each level, respectively. The loss function was a combination of the Dice score and cross-entropy. A stride of one was used for the convolution layers.
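Since the loss is only named in passing, a simplified PyTorch sketch of a combined soft Dice and cross-entropy loss is given below; this is a stand-in for the actual nnU-Net implementation, which additionally handles deep supervision and its own batch-Dice variants:

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, smooth=1e-5):
    """Combined soft Dice and cross-entropy loss.

    logits : (B, C, D, H, W) raw network outputs
    target : (B, D, H, W) integer class labels (torch.long)
    """
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    # One-hot encode the target and move the class axis to position 1
    target_onehot = F.one_hot(target, num_classes=logits.shape[1])
    target_onehot = target_onehot.permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)  # sum over batch and spatial axes, per class
    intersection = (probs * target_onehot).sum(dims)
    cardinality = probs.sum(dims) + target_onehot.sum(dims)
    dice = (2.0 * intersection + smooth) / (cardinality + smooth)
    return ce + (1.0 - dice.mean())
```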
**Network training** All models were trained from scratch for 1000 epochs using a stochastic gradient descent with Nesterov momentum optimizer (momentum=0.99). One epoch was defined as 250 batch iterations with a batch size of two. On-the-fly data augmentations were performed comprising rotation, scaling, additive Gaussian noise, Gaussian blur, brightness and contrast augmentation, and gamma augmentation.
#### 2.2.3 Training specifications for the AGU-Net architecture
**Architecture design** The AGU-Net, as described by Bouget et al. [28], is a 3D U-Net architecture with an integrated attention-gated mechanism, with five block levels using filter sizes of \(\{16,\,32,\,128,\,256,\,256\}\), respectively. The input size of the network was set to \(128\times 128\times 144\times S\), with \(S\) being the number of sequences used as input. The architecture also uses multi-scale input and deep supervision. The class-averaged Dice loss, excluding the background, was used for training the different models.
**Network training** All models were initialized using pre-trained weights from the best pre-operative glioblastoma segmentation model [27], and only the input layer was adapted to account for the different input combinations considered. The Adam optimizer was used with an initial learning rate of \(1\times 10^{-3}\), and the training was stopped after 30 consecutive epochs without validation loss improvement. Gradient accumulation [37] was performed to increase the batch size from 2 samples to 32, tackling graphics processing unit (GPU) memory limitations for large batch training. Data augmentation techniques were leveraged including horizontal and vertical flipping, random rotations in the range \([-20^{\circ},20^{\circ}]\), and a translation of up to 10% of the axis dimension. Each augmentation was performed with a probability of 50% for each training sample.
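The gradient accumulation step can be sketched as follows; it is shown in PyTorch for brevity, although the original AGU-Net implementation used TensorFlow, and all names (model, loss_fn, optimizer, train_loader) are placeholders:

```python
import torch

def train_one_epoch(model, loss_fn, optimizer, train_loader,
                    accumulation_steps=16):
    """One epoch with gradient accumulation: micro-batches of 2 samples are
    accumulated over 16 steps to emulate an effective batch size of 32."""
    model.train()
    optimizer.zero_grad()
    for step, (inputs, labels) in enumerate(train_loader):
        loss = loss_fn(model(inputs), labels) / accumulation_steps
        loss.backward()                      # gradients accumulate across calls
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()                 # update once per effective batch
            optimizer.zero_grad()
```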
#### 2.2.4 Post-processing and GTR classification
During inference, residual tumor tissue was predicted by each trained model, resulting in a probability map at the same resolution as the EPMR T1w-CE scan. A binary mask was then generated from the probability map, using the best threshold determined from the validation studies. The binary mask was further refined by filtering out potential noise, inherent to the voxel-wise segmentation task, by applying a connected components analysis and removing any identified object smaller than 20 voxels. Finally, the refined binary mask was used to assess whether gross total resection had been achieved for the patient.
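These post-processing steps map directly onto a few array operations. The sketch below uses SciPy's connected-components tools; the probability threshold is a placeholder, since the paper selects the best threshold on the validation sets:

```python
import numpy as np
from scipy import ndimage

def postprocess_and_classify(prob_map, voxel_volume_ml, threshold=0.5,
                             min_object_voxels=20, gtr_cutoff_ml=0.175):
    """Binarize a probability map, remove small connected components, and
    classify the patient as gross total resection (GTR) or residual tumor (RT).

    prob_map        : residual-tumor probability map on the EPMR T1w-CE grid
    voxel_volume_ml : volume of one voxel in ml
                      (e.g. 0.5 * 0.5 * 1.0 mm^3 = 2.5e-4 ml)
    """
    binary = prob_map >= threshold
    labeled, n_components = ndimage.label(binary)
    sizes = ndimage.sum(binary, labeled, range(1, n_components + 1))
    for component_id, size in enumerate(sizes, start=1):
        if size < min_object_voxels:          # filter out small, noisy objects
            binary[labeled == component_id] = False
    residual_volume_ml = binary.sum() * voxel_volume_ml
    label = "RT" if residual_volume_ml >= gtr_cutoff_ml else "GTR"
    return binary, residual_volume_ml, label
```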
### Validation studies
In this work, the trained models were assessed based on their ability to perform segmentation of the residual tumor and to classify patients into those with gross total resection and those with residual tumor. For the segmentation task, only two classes are considered, whereby a positive voxel exhibits tumor tissue, whereas a negative voxel represents either background or normal tissue. For the classification task, a residual tumor volume threshold was selected to serve as the cut-off value.
#### 2.3.1 Protocols
The validation studies presented in this work were conducted following a five-fold cross-validation, summarized in Table 2. First, all patients from 11 of the 12 hospitals in our dataset, excluding the AMS cohort, were split into five hospital-stratified folds, with an approximately balanced number of patients in each fold. The remaining 73 patients from the AMS hospital were
kept as a hold-out test set. For each iteration of the cross-validation, four folds were used for training, the remaining fifth fold was used for validation, and the hold-out set was used for testing.
This approach presents similar benefits to the leave-one-hospital-out strategy used in previous work [27], with the advantage of a reduced training time. Finally, predictions over the test set were generated by ensembling over the predictions obtained by each of the five trained models. An average pooling voting scheme was applied to each of the model predictions, to produce a single softmax prediction.
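The ensembling step reduces to average pooling of the per-model softmax outputs; a minimal sketch, with array shapes chosen for illustration:

```python
import numpy as np

def ensemble_predict(softmax_predictions):
    """Average-pool the softmax predictions of the five cross-validation
    models into a single prediction for a test patient.

    softmax_predictions : list of arrays of shape (C, D, H, W), one per model
    """
    return np.mean(np.stack(softmax_predictions, axis=0), axis=0)
```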
#### 2.3.2 Metrics
To evaluate the models' voxel-wise performance on the task of residual tumor segmentation, Dice scores were computed between the ground truth annotation and the post-processed binary prediction mask. The Dice scores are reported for the subgroup of patients with residual tumor tissue according to the ground truth annotation, labelled as the 'positive' (P) group. The Dice scores for the subgroup of patients with residual tumor according to the ground truth annotation and the network predictions, labelled as the 'true positive' (TP) group, are also reported. Pooled estimates, when computed from each fold's results, are reported for each measurement as mean and standard deviation (indicated by \(\pm\)) in the tables.
For the patient-wise classification task of distinguishing patients with gross total resection from patients with residual tumor, a standard sensitivity and specificity analysis was conducted, summarized by the balanced accuracy score (noted bAcc). A residual tumor volume below the clinical volume threshold was thus counted as a negative (i.e., GTR) and as a positive otherwise (i.e., RT). Following this consideration, a patient was considered a true positive (TP) if both the ground truth annotation residual tumor volume and the detected residual tumor volume were \(\geq 0.175\,\mathrm{ml}\), for any given Dice score (i.e., \(\geq 0.01\)). Conversely, if both volumes were \(<0.175\,\mathrm{ml}\), the patient was labelled as a true negative (TN). Patients where the ground truth volume was above the threshold volume and the prediction was below were marked as false negatives (FN), and as false positives (FP) vice versa.
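For illustration, the patient-wise metrics can be computed directly from the ground-truth and predicted residual tumor volumes. The sketch below applies the 0.175 ml cut-off only and, for brevity, omits the additional Dice \(\geq 0.01\) overlap requirement used when counting true positives:

```python
import numpy as np

def classification_metrics(gt_volumes_ml, pred_volumes_ml, cutoff_ml=0.175):
    """Patient-wise GTR-vs-RT metrics from residual tumor volumes."""
    gt = np.asarray(gt_volumes_ml) >= cutoff_ml     # True = residual tumor
    pred = np.asarray(pred_volumes_ml) >= cutoff_ml
    tp = np.sum(gt & pred)
    tn = np.sum(~gt & ~pred)
    fn = np.sum(gt & ~pred)
    fp = np.sum(~gt & pred)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    balanced_accuracy = 0.5 * (sensitivity + specificity)
    return sensitivity, specificity, balanced_accuracy
```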
For the inter-rater variability study, the Jaccard score, closely related to the Dice score by \(J=\frac{D}{2-D}\), was used to compare the models' performance. The Jaccard score was chosen to allow direct comparison with a previously published work on the same dataset [7].
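The relation follows directly from writing both scores in terms of the overlap between a prediction \(A\) and a reference \(B\): with \(D=\frac{2|A\cap B|}{|A|+|B|}\) and \(J=\frac{|A\cap B|}{|A\cup B|}=\frac{|A\cap B|}{|A|+|B|-|A\cap B|}\), substitution gives \(J=\frac{D}{2-D}\) and, conversely, \(D=\frac{2J}{1+J}\). For instance, a Dice score of 61% corresponds to a Jaccard score of \(0.61/(2-0.61)\approx 0.44\).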
#### 2.3.3 Experiments
The following three experiments were conducted in this study:
(i) Residual tumor segmentation performance study: using the 5-fold cross-validation protocol and segmentation metrics, both nnU-Net and AGU-Net architectures' segmentation performances were compared for the five combinations of input sequences.
(ii) Gross total resection classification performance study: using the 5-fold cross-validation protocol, classification metrics, and best input combination identified in the first experiment, both architectures were compared in terms of ability to classify between gross total resection and residual tumor patients.
(iii) Inter-rater variability study: the best model from each architecture was benchmarked in terms of segmentation performance against the performance of novice and expert annotators, using the inter-rater variability dataset. For each patient, a consensus agreement annotation was created using a majority voting approach. Using all eight annotations from both experts and novices, a voxel was defined as belonging to the tumor if annotated by more than half of the annotators. The models' binary predictions and the eight inter-rater annotations were then compared against the ground truth annotations (as used in the hold-out test set) and the consensus annotations.
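The majority-voting consensus described in (iii) can be expressed in a single operation over the stacked annotator masks; a minimal sketch:

```python
import numpy as np

def consensus_annotation(annotations):
    """Majority-voting consensus: a voxel belongs to the tumor if more than
    half of the annotators marked it.

    annotations : list of binary masks of identical shape, one per annotator
    """
    stacked = np.stack([a.astype(bool) for a in annotations], axis=0)
    return stacked.sum(axis=0) > (len(annotations) / 2)
```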
## 3 Results
The studies were performed using multiple machines with the following two specifications: (i) an Intel Core Processor (Broadwell, no TSX, IBRS) central processing unit (CPU) with 16 cores, 64GB of RAM, a Tesla V100S (32GB) dedicated GPU, and a regular hard-drive; and (ii) a GPU server with a total of 256 CPU cores, 2TB of RAM, and six NVIDIA A100-SXM4 (80GB) cards. The AGU-Net architecture was implemented in Python 3.6 with the TensorFlow v1.13.1 library [38]. For the nnU-Net architecture, Python 3.8, PyTorch v1.13.1 [39], and the nnU-Net framework v1.7.0 [14] were used.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} & \multicolumn{5}{c|}{**Hospital-wise cross-validation set**} & \multicolumn{1}{c}{**Hold-out set**} \\
**Fold** & **0** & **1** & **2** & **3** & **4** & **Hold-out set** \\ \hline Hospitals validation & STO & \begin{tabular}{c} GRO, MIL \\ UTR \\ \end{tabular} & SFR, VIE &
\begin{tabular}{c} PAR, ZWO \\ ALK, HAG \\ \end{tabular} & GOT & AMS \\ \hline Patients train & 646 & 732 & 730 & 728 & 696 & — \\ Patients validation & 237 & 151 & 153 & 155 & 187 & 73 \\ \end{tabular}
\end{table}
Table 2: Distribution of hospitals and patient samples featured in the 5-fold validation sets and hold-out test set.
### Residual tumor segmentation performance study
Segmentation performances across both architectures, for all input sequence combinations, and only for patients with residual tumor are summarized in Table 3. For both architectures, the lowest average Dice score over the external test set was obtained with configuration A, indicating that solely using T1w-CE MR scans is insufficient for identifying post-operative residual tumor. The addition of the T1w scan as input (i.e., configuration B) provides at least a 5% improvement in Dice scores over the test set for both architectures. This illustrates the additional value of the T1w sequence, presumably due to better distinction between blood and tumor. The inclusion of the FLAIR scan in input configuration C slightly degraded the Dice score compared to input configuration B. Finally, the inclusion of pre-operative data does not seem to improve the performance for any architecture, as the Dice scores for input configuration D are again slightly lower than for configuration B. Further addition of the FLAIR scan in input configuration E leads to a minor decrease in Dice scores compared to configuration D. For both architectures, input configuration B yielded the highest Dice scores on the test set. The highest Dice and true positive Dice scores were obtained with the nnU-Net architecture trained on input configuration B, with respectively 59% and 61% for the positive and true positive Dice on the test set. Overall, performances obtained across the test set are stable, in support of generalizability. Likewise, performances over the validation sets from the cross-validation protocol are consistent for input configurations B to E. The same trends can be observed across both architectures for the true positive Dice, although slightly higher for the positive Dice using the nnU-Net architecture.
Looking at patient-wise performances, models trained with the nnU-Net architecture achieve perfect recall across all configurations for both the validation and test sets. Whereas the patch-wise strategy followed allows for segmenting smaller structures, the loose criterion to consider a network prediction as true positive further strengthens this aspect. Indeed, only a few correctly overlapping voxels between the prediction and the ground truth are needed for residual tumor to be considered satisfactorily identified patient-wise. Due to the full volume approach, models trained with AGU-Net generally struggle to identify small elements, as indicated by an overall recall of around 80% across the board. Conversely, the opposite trend can be noticed with regard to patient-wise precision performance. Models trained with nnU-Net tend to produce more erroneous predictions, as indicated by average precision scores below 70%, whereas AGU-Net models tend to be more precise, with precision scores up to 95%. Ultimately, F1-scores are very similar between models from both architectures across the different input configurations, as they combine recall and precision performances.
From the segmentation performance analysis, the best results were obtained with the nnU-Net architecture using input configuration D. Visual comparisons between the two architectures using the best input configuration are provided in Figure 3 for some patients from the test set, one featured per row. In the top row, both models achieved excellent segmentation with a
\begin{table}
\begin{tabular}{c c c|c c|c c c}
**Input** & **Prot.** & **Arch.** & \multicolumn{2}{c|}{**Voxel-wise**} & \multicolumn{3}{c}{**Patient-wise**} \\ & & & **DSC-P** & **DSC-TP** & **Recall** & **Precision** & **F1** \\ \hline \multirow{4}{*}{A} & Val & nnU-Net & 46.94\(\pm\)24.03 & 49.51\(\pm\)21.70 & 99.81\(\pm\)0.35 & 62.96\(\pm\)6.60 & 77.03\(\pm\)4.98 \\ & & AGU-Net & 37.72\(\pm\)29.54 & 51.05\(\pm\)22.28 & 79.70\(\pm\)6.69 & 80.75\(\pm\)5.79 & 79.83\(\pm\)3.01 \\ & & nnU-Net & 52.38\(\pm\)21.14 & 53.43\(\pm\)19.77 & 100.00 & 70.83 & 82.93 \\ & & AGU-Net & 38.06\(\pm\)27.45 & 46.21\(\pm\)22.80 & 84.31 & 84.31 & 84.31 \\ \hline \multirow{4}{*}{B} & Val & nnU-Net & 52.97\(\pm\)22.66 & 55.62\(\pm\)19.63 & 99.47\(\pm\)0.71 & 66.82\(\pm\)6.06 & 79.78\(\pm\)4.35 \\ & & AGU-Net & 39.71\(\pm\)28.25 & 51.54\(\pm\)20.59 & 81.25\(\pm\)6.47 & 82.30\(\pm\)4.60 & 81.52\(\pm\)2.97 \\ & & nnU-Net & 59.19\(\pm\)20.49 & 61.61\(\pm\)16.72 & 98.04 & 80.65 & 88.50 \\ & & AGU-Net & 43.76\(\pm\)27.61 & 53.14\(\pm\)20.23 & 84.31 & 87.76 & 86.00 \\ \hline \multirow{4}{*}{C} & Val & nnU-Net & 52.43\(\pm\)22.45 & 54.72\(\pm\)19.77 & 99.81\(\pm\)0.35 & 63.70\(\pm\)6.68 & 77.58\(\pm\)5.01 \\ & & AGU-Net & 37.43\(\pm\)28.69 & 51.09\(\pm\)20.49 & 79.29\(\pm\)10.08 & 84.70\(\pm\)3.23 & 81.32\(\pm\)4.43 \\ & & nnU-Net & 58.14\(\pm\)21.01 & 60.51\(\pm\)17.52 & 100.00 & 76.12 & 86.44 \\ & & AGU-Net & 42.33\(\pm\)27.87 & 53.97\(\pm\)18.51 & 78.43 & 95.24 & 86.02 \\ \hline \multirow{4}{*}{D} & Val & nnU-Net & 52.80\(\pm\)22.59 & 55.26\(\pm\)19.73 & 99.66\(\pm\)0.44 & 66.21\(\pm\)5.72 & 79.42\(\pm\)4.15 \\ & & AGU-Net & 41.02\(\pm\)28.08 & 52.45\(\pm\)20.14 & 82.80\(\pm\)5.27 & 85.16\(\pm\)5.24 & 83.73\(\pm\)3.17 \\ & & nnU-Net & 58.05\(\pm\)22.74 & 60.42\(\pm\)19.61 & 100.00 & 79.69 & 88.70 \\ & & AGU-Net & 40.84\(\pm\)28.62 & 52.07\(\pm\)20.96 & 78.43 & 93.02 & 85.11 \\ \hline \multirow{4}{*}{E} & Val & nnU-Net & 53.61\(\pm\)22.57 & 55.81\(\pm\)19.97 & 100.00 & 63.86\(\pm\)6.93 & 77.73\(\pm\)5.19 \\ & & AGU-Net & 39.44\(\pm\)27.05 & 48.89\(\pm\)20.92 & 85.61\(\pm\)4.83 & 84.58\(\pm\)3.39 & 84.91\(\pm\)1.59 \\ \cline{1-1} & & nnU-Net & 56.30\(\pm\)21.07 & 58.60\(\pm\)17.84 & 100.00 & 76.12 & 86.44 \\ \cline{1-1} & & AGU-Net & 41.23\(\pm\)25.72 & 47.78\(\pm\)20.93 & 86.27 & 89.80 & 88.00 \\ \end{tabular}
\end{table}
Table 3: Segmentation performances for patients with residual tumor, for both architectures, all input configurations, and over the validation and test sets.
Dice score above 90%. In the second row, a multifocal post-operative residual tumor case is featured, whereby the AGU-Net model produced one false positive component, as can be seen in red in the 3D representation. In the third row, a challenging multifocal and fragmented residual tumor case is displayed, where both models failed to segment the largest component. Finally, in the last row, both models oversegmented, leading to Dice scores below 40%.
### Gross total resection classification performance study
Classification performances between patients with residual tumor and gross total resections, across both architectures and for all input configurations, are reported in Table 4. The first noticeable result is the overall tendency of the nnU-Net architecture to oversegment, resulting in a perfect recall over both the test set and validation set, at the cost of a very poor specificity, often below 30%. Overall, nnU-Net achieves balanced accuracy scores barely above 0.5 for all input configurations, which means the classification performance is only slightly better than random guessing (0.5). Conversely, models trained with the AGU-Net architecture are more conservative, leading to higher specificity scores, up to 90% for input configuration C, and reasonably high recall/sensitivity values above 80%. In contrast to segmentation performances, the successive addition of MR scans within the input configuration leads to improved classification performances for both architectures. One apparent difference is the added value of the FLAIR sequence with the AGU-Net architecture, further increasing the specificity and balanced accuracy, unlike with the nnU-Net architecture.
From the classification performance analysis, the best results on the test set according to the balanced accuracy were obtained with the AGU-Net architecture using input configuration C. However, the best results on the validation sets were achieved with input configuration E. In a clinical scenario, a high sensitivity has higher priority than a high specificity, as long as the trade-off is reasonable. AGU-Net trained with input configuration E is therefore the preferred model for classification. This configuration achieves the highest sensitivity of all input configurations while still achieving a reasonable specificity,
Figure 3: Segmentation comparison between the manual ground truth (in blue) and the binary predictions (in red) for the two architectures using configuration D, over the test set. One patient is featured per row, the patient-wise Dice is reported in white, and a 3D representation of the overlap is included (best viewed digitally and in color).
higher than configurations A and B.
### Inter-rater variability study
For the 20 patients constituting the inter-rater variability dataset, a comparison of the Jaccard scores obtained between each rater and the best model from each architecture is reported in Figure 4. As all configurations B-E yielded very similar segmentation performance scores, the best configuration for each architecture was selected as the model that produced the best trade-off between sensitivity and specificity for the classification task. For the nnU-Net, configuration D was selected, as this model achieved the highest specificity of all the trained nnU-Net models; for the AGU-Net, configuration E was selected, as this model yielded the highest sensitivity while maintaining a reasonable specificity. Using the ground truth annotation from the test dataset as the reference segmentation, both architectures achieved Jaccard scores within a variability range very similar to that of the novice and expert annotators. With the consensus agreement annotation as the reference segmentation, the AGU-Net model achieved slightly poorer Jaccard scores than the majority of the expert human raters but remained within the variability range of the novice annotators. The nnU-Net model achieved scores similar to the variability range of the expert raters, also when compared to the consensus agreement annotation.
## 4 Discussion
In this multicenter study, the feasibility of post-operative residual tumor segmentation with deep neural networks was assessed. Two state-of-the-art architectures for pre-operative glioblastoma segmentation were compared: nnU-Net and AGU-Net. Both architectures were trained on five different combinations of early post-operative and pre-operative MR scans as input, and benchmarked in terms of segmentation and classification performance against manual rating. The main finding is that automatic segmentation performance is comparable to human rater performance on real-world MRI scans, requiring early post-operative T1w-CE and T1w MRI scans only. In addition, the trained models have shown a promising ability to distinguish patients who underwent gross total resection from patients exhibiting post-operative residual tumor.
The multimodal and multicentric dataset in this study is the largest cohort used for the task of early post-operative glioblastoma segmentation, with a total of 956 patients. Regarding the dataset curation, our strict inclusion criteria required availability of all four MR scans as input (i.e., post-operative T1w-CE, T1w, FLAIR, and pre-operative T1w-CE) for each patient. Whereas this decision was motivated by a simpler method design, approximately 150 patients were excluded as one or more MR scans were missing. A relaxation of the inclusion criteria would increase the size of the dataset, and open the
\begin{table}
\begin{tabular}{c c c|c c c}
**Exp.** & **Data** & **Arch.** & \multicolumn{3}{c}{**Patient-wise**} \\ & & & **Sensitivity** & **Specificity** & **bAcc** \\ \hline \multirow{4}{*}{A} & Val & nnU-Net & 99.81\(\pm\)0.35 & 2.53\(\pm\)2.21 & 51.17\(\pm\)1.22 \\ & & AGU-Net & 79.70\(\pm\)6.69 & 68.01\(\pm\)10.43 & 73.86\(\pm\)4.94 \\ & Test & nnU-Net & 100.00 & 4.55 & 52.27 \\ & & AGU-Net & 84.31 & 63.64 & 73.98 \\ \hline \multirow{4}{*}{B} & Val & nnU-Net & 99.47\(\pm\)0.71 & 18.04\(\pm\)4.41 & 58.75\(\pm\)2.30 \\ & & AGU-Net & 81.25\(\pm\)6.47 & 71.01\(\pm\)5.36 & 76.13\(\pm\)4.12 \\ & & nnU-Net & 98.04 & 45.45 & 71.75 \\ & Test & AGU-Net & 84.31 & 72.73 & 78.52 \\ \hline \multirow{4}{*}{C} & Val & nnU-Net & 99.81\(\pm\)0.35 & 5.64\(\pm\)3.44 & 52.73\(\pm\)1.76 \\ & & AGU-Net & 79.29\(\pm\)10.08 & 74.00\(\pm\)11.13 & 76.64\(\pm\)4.87 \\ & & nnU-Net & 100.00 & 27.27 & 63.64 \\ & & AGU-Net & 78.43 & 90.91 & 84.67 \\ \hline \multirow{4}{*}{D} & Val & nnU-Net & 99.66\(\pm\)0.44 & 15.28\(\pm\)6.85 & 57.47\(\pm\)3.55 \\ & & AGU-Net & 82.80\(\pm\)5.27 & 73.00\(\pm\)14.63 & 77.90\(\pm\)6.44 \\ \cline{1-1} & Test & nnU-Net & 100.00 & 40.91 & 70.45 \\ \cline{1-1} & & AGU-Net & 78.43 & 86.36 & 82.40 \\ \hline \multirow{4}{*}{E} & Val & nnU-Net & 100.00 & 6.12\(\pm\)4.30 & 53.06\(\pm\)2.15 \\ \cline{1-1} & & AGU-Net & 85.61\(\pm\)4.83 & 72.63\(\pm\)9.39 & 79.12\(\pm\)4.60 \\ \cline{1-1} & Test & nnU-Net & 100.00 & 27.27 & 63.64 \\ \cline{1-1} & & AGU-Net & 86.27 & 77.27 & 81.77 \\ \end{tabular}
\end{table}
Table 4: Gross total resection versus residual tumor classification performances for both architectures, all input configurations, and over the validation and test sets.
possibility to generate a more diverse set of input MR scans, including for example T2-weighted images. Ideally, the trained methods should be able to deal with a sparse set of inputs, where one or more MR scans are missing. The trained models could then be used off the shelf, by replacing missing sequences with empty volumes or synthetically generated sequences, or by allowing missing inputs using sparsified learning techniques [40].
In their ability to segment post-operative tumor, nnU-Net and AGU-Net exhibit strengths and weaknesses inherent to their design. Through a patch-wise approach, nnU-Net models are able to segment relatively small structures, having access to more fine-grained details from the use of MR scans close to their initial resolution. Considering the relatively small volumes and fragmented state of residual tumors, nnU-Net models are able to achieve higher Dice score and recall performances. On the other hand, models trained using the AGU-Net approach follow a full volume strategy, largely downsampling the input MR scans, which hinders the ability to detect smaller structures. However, such models appear to be more conservative in their predictions, heavily reducing the number of false positives and thereby reaching high precision. Regarding the different input configurations, the biggest impact on segmentation performances comes from combining EPMR T1w-CE and T1w scans, which is also the favored approach in clinical practice. The inclusion of additional MR sequences seems to add little to segmentation performances. Adapting the convolution blocks, filter sizes, or other elements of the architectures might be needed to let the number of trainable parameters scale with the number of inputs, instead of remaining fixed.
The validation studies described in this article served the two purposes of investigating the predictive ability and the generalization capacity of the trained models. This was achieved through the use of a unique test set and equally distributed hospital-stratified validation sets. Our selection of a specific hold-out hospital as a test set was based on the availability of manual annotations from multiple raters, additionally allowing an inter-rater variability study to be performed. Regarding the computation of the reported metrics, the rationale for only including the true positive patients in the segmentation performance results lies in the Dice score computation itself. Indeed, cases with a GTR preclude the calculation of a Dice score. Therefore, the validation studies include a separate experiment to classify patients into those with a GTR and those with residual tumor.
The inter-rater variability study demonstrated that residual tumor segmentation performance is on par with the average human expert annotator performance, when evaluated against an independent ground truth segmentation. Even when evaluated against the consensus agreement annotation, which is by definition biased towards each of the human annotators included in the study, the best segmentation model achieves scores similar to the individual expert annotations, and still outperforms the novice annotators. The consensus agreement annotation based on a majority voting scheme over all annotations from the eight different annotators should be considered the gold standard for defining the residual tumor. However, this is not achievable in a real-world clinical scenario, where even an exact delineation of the tumor remnant from one human annotator is rarely performed. The proposed automatic method for residual tumor segmentation should thus be considered an acceptable
Figure 4: Inter-rater Jaccard score variability over a subset of the AMS cohort. To the left, the ground truth annotation used for training served as segmentation of reference. To the right, the reference segmentation was a consensus agreement between annotations from all raters.
alternative to the current standard practice for evaluating the tumor remnant after surgery, as the average performance of the method lies within the variability range of individual expert annotators. Such segmentation performances are even achieved with the exclusive use of post-operative MR sequences as model inputs (T1w-CE, T1w, and FLAIR), whereas the addition of pre-operative information (pre-operative T1w-CE and label) retains the model performance on similar levels. Thus, in clinical practice, our trained models could be deployed even in the absence of pre-operative scans, as long as at least the T1w-CE and T1w post-operative sequences are available, to establish an automated and relatively fast method for the segmentation task. On a second level, the output segmentation masks can be used to differentiate between patients with remnant tumor after surgery and gross total resection patients, with increasing balanced accuracy performance as more sequences are added to the model inputs. Our early post-operative glioblastoma segmentation models have been made freely available in the Raidionics environment1.
Footnote 1: [https://github.com/raidionics](https://github.com/raidionics)
In spite of the promising reported performances, the task of early post-operative glioblastoma segmentation is far from accomplished. The full extent of residual tumor, often very fragmented around the resection cavity, is never wholly captured. In future work, the pre-operative MR scans and tumor location should be better leveraged, as the residual tumor is bound to lie in their vicinity. Focusing the search solely within a region of interest might help retain a higher image resolution, for better segmentation of small structures. Nevertheless, competitive pre- and post-operative glioblastoma segmentation models are now publicly available, opening the door to clinically-oriented validation studies. Assuming a positive outcome, the use of automatic models and methods would be highly beneficial in a clinical setting to collect parameters currently obtained through eyeballing or diameter estimation, hence yielding reproducible and deterministic results.
## 5 Conclusion
In this study, two state-of-the-art neural network architectures for glioblastoma segmentation were trained and thoroughly validated on a large cohort of 956 patients. Automatic segmentation performance is on par with human rater performance on real-world MRI scans, requiring early post-operative T1w-CE and T1w MRI scans only. In addition, the presented models have shown a promising ability to automatically distinguish between patients who underwent gross total resection and patients with residual tumor. The prognostic value of the automated method should be assessed in future studies.
|
2307.05639 | Learning Active Subspaces and Discovering Important Features with
Gaussian Radial Basis Functions Neural Networks | Providing a model that achieves a strong predictive performance and is
simultaneously interpretable by humans is one of the most difficult challenges
in machine learning research due to the conflicting nature of these two
objectives. To address this challenge, we propose a modification of the radial
basis function neural network model by equipping its Gaussian kernel with a
learnable precision matrix. We show that precious information is contained in
the spectrum of the precision matrix that can be extracted once the training of
the model is completed. In particular, the eigenvectors explain the directions
of maximum sensitivity of the model revealing the active subspace and
suggesting potential applications for supervised dimensionality reduction. At
the same time, the eigenvectors highlight the relationship in terms of absolute
variation between the input and the latent variables, thereby allowing us to
extract a ranking of the input variables based on their importance to the
prediction task enhancing the model interpretability. We conducted numerical
experiments for regression, classification, and feature selection tasks,
comparing our model against popular machine learning models, the
state-of-the-art deep learning-based embedding feature selection techniques,
and a transformer model for tabular data. Our results demonstrate that the
proposed model does not only yield an attractive prediction performance
compared to the competitors but also provides meaningful and interpretable
results that potentially could assist the decision-making process in real-world
applications. A PyTorch implementation of the model is available on GitHub at
the following link. https://github.com/dannyzx/Gaussian-RBFNN | Danny D'Agostino, Ilija Ilievski, Christine Annette Shoemaker | 2023-07-11T09:54:30Z | http://arxiv.org/abs/2307.05639v2 | Learning Active Subspaces and Discovering Important Features with Gaussian Radial Basis Functions Neural Networks
###### Abstract
Providing a model that achieves a strong predictive performance and at the same time is interpretable by humans is one of the most difficult challenges in machine learning research due to the conflicting nature of these two objectives. To address this challenge, we propose a modification of the Radial Basis Function Neural Network model by equipping its Gaussian kernel with a learnable precision matrix. We show that precious information is contained in the spectrum of the precision matrix that can be extracted once the training of the model is completed. In particular, the eigenvectors explain the directions of maximum sensitivity of the model, revealing the active subspace and suggesting potential applications for supervised dimensionality reduction. At the same time, the eigenvectors highlight the relationship in terms of absolute variation between the input and the latent variables, thereby allowing us to extract a ranking of the input variables based on their importance to the prediction task, enhancing the model interpretability. We conducted numerical experiments for regression, classification, and feature selection tasks, comparing our model against popular machine learning models and state-of-the-art deep learning-based embedding feature selection techniques. Our results demonstrate that the proposed model not only yields an attractive prediction performance with respect to the competitors but also provides meaningful and interpretable results that could potentially assist the decision-making process in real-world applications. A PyTorch implementation of the model is available on GitHub at the following link.1
Footnote 1: [https://github.com/dannyzx/GRBF-NNs](https://github.com/dannyzx/GRBF-NNs)
## 1 Introduction
The Radial Basis Function (RBF) model is a family of models used for function interpolation and approximation that are defined as a linear combination of radially symmetric basis functions [13]. The RBF approach has many properties which make it attractive as a mathematical tool for interpolation [44; 57]. Once the basis function and its hyperparameters are determined, the weights that multiply the basis functions can be found by solving a convex optimization problem or directly through matrix inversion. The RBF model has been generalized in the context of approximation by using basis functions centered with respect to a subset of the data, which can be interpreted as a one-hidden-layer Neural Network (RBF-NN) with RBF activation functions, as shown in [13]. In [50; 51] the authors showed that, under some conditions on the basis function, RBF-NNs are universal approximators, like Neural Networks (NNs) [33].
Radial Basis Functions have been used for function interpolation or approximation for many decades in different applications. In the work presented in [56; 28], the authors showed that, starting from regularization principles and through the solution of a variational problem, the RBF model is a subclass of regularization networks. Particularly important, in the case of the Gaussian RBF (GRBF), is the definition of the shape parameter [11], which is problem dependent and controls the variance (or the width) of the Gaussian basis function. The GRBF is very sensitive to the shape parameter, as shown empirically in [45] in the case of interpolation, and a usual way to set it is to fix it through cross-validation procedures.
In general, various methods have been proposed to estimate the parameters of RBF-NNs in the context of approximation. Some of them are inspired by the work presented in [56], where the locations of the centers and a weighted norm (instead of the classical Euclidean norm) are considered part of the learning problem together with the weights. The possibility of using a superposition of kernels, each with a distinct set of hyperparameters, has also been considered [56]. In [46] the authors propose to compute the width factors by a nearest-neighbors heuristic and the centers by a clustering procedure. A different approach has been used in [64], where the centers' locations are considered as additional parameters of the optimization problem, as well as the weights. In the same work, the authors also considered learning the width of the Gaussian kernel around each center, but this hurt the generalization performance of the model [64]. A similar approach has been presented in [59], where a diagonal precision matrix is also considered for each RBF center. Important research on improving the generalization power of RBF-NNs is presented in [9]. This has been achieved by adding a regularization term that penalizes the second derivative of the output of each neuron. A similar technique is known for NNs [10].
As in the case of NNs, RBF-NNs are considered _black-box_ models, and consequently, the underlying process of how the input features are used to make predictions is not clear to humans, including those who developed the models. For this reason, sometimes simpler models given just by a linear combination of the input variables are preferred, since the coefficients can give an assessment of the importance of each feature in the prediction task. On the other hand, simpler models tend to be less accurate than complex ones. As a result, it is crucial to propose models with powerful predictive capabilities that can also provide simple explanations to support decision-making in complex real-world applications. Thus, recognizing the importance of each input feature in a prediction task from a machine learning model has significant implications in various fields, including genomics [8], environmental science [20], emergency medicine [66], cancer research [32], and finance [48]. In these domains, model interpretability is as crucial as predictive performance, as described in [29; 5; 1].
Explainable AI (XAI) is a rapidly growing field of research that aims to make AI models more transparent and interpretable to humans. XAI techniques provide insight into the decision-making processes of AI models, allowing users to understand how models arrive at their outputs and to identify potential biases or errors. Feature importance ranking (FIR) and feature selection (FS) are two key techniques used in XAI. FIR involves evaluating the contribution of each feature to the output of a model, allowing users to identify which features are most important in driving the model predictions. FS involves choosing a subset of features that are most relevant to the model predictions, which can improve model performance by mitigating the curse of dimensionality [6].
According to the taxonomy in [30] there are three kinds of feature selection methods: filter, wrapper, and embedded. Filter methods for feature selection are techniques that use statistical measures to rank the importance of each feature independently of the model, such as Chi-squared feature selection and variance thresholding. These methods are called "filter" because they filter out irrelevant or redundant features from the dataset before the learning algorithm is applied. Wrapper methods work by selecting a subset of features, training the learning algorithm on that subset, evaluating its performance using cross-validation, and then repeating the process with different feature subsets. This iterative approach can be time-consuming and inefficient. A popular wrapper method is forward/backward feature elimination [31]. Embedded methods refer to learning algorithms that have FS incorporated. Embedded methods are time-efficient because they perform feature selection within the target learning algorithm itself. Some of the embedded methods are linear models such as the LASSO regression [63], and tree-based methods such as the Random Forest (RF) [12] and Gradient Boosting (GB) [26]. They are
inherently easier to interpret and have become prevalent tools across practitioners. Recently, Deep Feature Selection (DFS) [39] and the approach proposed in [65] highlight the important features in NN architectures. An emerging kind of feature selection method is the post-hoc explanatory approach, such as SHAP (SHapley Additive exPlanations) [42] and LIME (Local Interpretable Model-agnostic Explanations) [58]. These methods are applied after the model has made its predictions and provide insights into why the model has made a certain decision.
In parallel to XAI, to make accurate predictions, it is important for models to extract only the salient features from the data and act as feature extractors. One approach to achieving this is through the discovery of the underlying factors of variation that generated the data, which may live in a subspace of much lower dimensionality than the input space. As an example, the success of deep learning models has also been attributed to their capability to learn and exploit latent representations through a cascade of multiple non-linear transformations of the dataset [7]. From an unsupervised learning perspective, the popular Principal Component Analysis (PCA) [53; 34] can be used to prune out irrelevant directions in the data, constructing a latent space given as a linear combination of independent factors. In supervised learning, the Fisher linear discriminant [24] learns a new reduced representation taking into account the classification task by searching for those vectors in the latent space that best discriminate among classes. In the context of statistical regression, there is also a vast body of literature about methods for finding a low-dimensional subspace of the input feature space that is statistically sufficient to characterize the input feature/response relationship, known as sufficient dimension reduction (SDR) [2; 17; 16] and effective dimension reduction (EDR) [38]. These methodologies are closely related to the concept of active subspace.
The Active Subspace Method (ASM) [15], can be used to discover directions of maximum variability of a particular function by applying a PCA to a dataset composed of its gradients. By eliminating directions defined by the eigenvectors associated with zero eigenvalues one can provide a reduced representation (i.e. the active subspace) where most of the original function variability is preserved. The ASM showed to be relevant in many areas of science and engineering such as in hydrology [35], shape optimization [41], and disease modeling [40]. It enables efficient exploration of the model input space and can help reduce computational costs associated with sensitivity analysis, uncertainty quantification, and optimization.
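As an illustration of this idea, the following minimal sketch (our illustration, not code from the ASM literature; the toy function and its analytic gradient are assumptions for the example) recovers a one-dimensional active subspace by eigendecomposing the average outer product of sampled gradients:

```python
import numpy as np

def f_grad(x):
    # Analytic gradient of the toy function f(x) = sin(0.5*x1 + 0.5*x2),
    # whose variability lies entirely along the direction [1, 1].
    s = np.cos(0.5 * x[0] + 0.5 * x[1])
    return np.array([0.5 * s, 0.5 * s])

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 2))      # input samples
G = np.array([f_grad(x) for x in X])            # gradient samples

C = G.T @ G / len(G)                            # average outer product of gradients
eigvals, eigvecs = np.linalg.eigh(C)            # eigendecomposition (ascending order)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Eigenvectors with non-negligible eigenvalues span the active subspace.
print("eigenvalues:", eigvals)                  # the second eigenvalue is ~0 here
print("active direction:", eigvecs[:, 0])       # ~ [0.707, 0.707], up to sign
```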
## 2 Main contribution
This paper presents modifications to the classical RBF model, focusing specifically on Gaussian RBF Neural Networks (GRBF-NNs) for function approximation.
Our main contribution is to enhance the interpretability of the model while maintaining the excellent predictive performance of RBF-NN architectures. This is achieved by exploiting latent relationships between input features and the response variable, highlighting the factors of variation in the data, revealing the active subspace, and identifying the input variables that played prominent roles in the learning task. To achieve this, we equip the kernel of the model with a learnable symmetric precision matrix. The latent information about the prediction task can be extracted by analyzing the spectrum of the precision matrix once the training of the model has been completed. The eigenvalues provide valuable information regarding the second derivative of the argument of the Gaussian kernel at the GRBF-NN centers. Dominant eigenvalues correspond to eigenvectors that explain a significant portion of the variability within the GRBF-NN model. Therefore, analyzing these eigenvalues allows us to understand the directions in which the GRBF-NN model exhibits the most variability. Consequently, one can use our proposed model for supervised dimensionality reduction purposes, for example by projecting the overall learning task in a 2-dimensional active subspace for visualization.
In parallel, to make our model more transparent and interpretable to humans, we estimate the FIR of the learning task allowing one to use our model for FS purposes. This can be easily achieved by knowing that the eigenvectors also represent the Jacobian of the linear transformation of a new coordinate system defined in the latent space. This means that we can assess how a change in a particular latent variable
affects the input variables thereby assessing the importance of the input feature for the current prediction task.
To improve the smoothness and the generalization capability of our model, we introduce two regularization parameters: one for the weights and the other one for the elements of the precision matrix of the Gaussian kernel. To better analyze the behavior of our model, we investigate the synergy between them. Interestingly, numerical results suggest that a stronger role is played by the regularizer of the precision matrix rather than the one that controls the magnitude of the weights.
In the end, we conduct numerical experiments to compare our proposed model with other popular machine learning models such as SVMs [18], Random Forest (RF) [12], and Gradient Boosting (GB) [26], and with state-of-the-art deep learning embedding methods such as the ones presented in [39; 65]. The results show that our model not only achieves competitive prediction performances but also provides meaningful and interpretable insights.
## 3 Model Description
Radial Basis Functions have been introduced for solving interpolation problems, which consist of building the following interpolant
\[f(\mathbf{x})=\sum_{m=1}^{M}w_{m}\varphi(||\mathbf{x}-\mathbf{x}_{m}||) \tag{1}\]
where we have \(M\) weights \(w_{m}\in\mathbb{R}\), a continuous function \(\varphi:\mathbb{R}^{+}\rightarrow\mathbb{R}\) which represents the basis function, and the centers \(\mathbf{x}_{m}\). One can solve the interpolation problem by imposing the interpolation condition, which yields the following linear system
\[\boldsymbol{\Phi}\mathbf{w}=\mathbf{y} \tag{2}\]
where the \(N\times M\) (in this case with \(M=N\)) symmetric matrix \(\boldsymbol{\Phi}\) has elements \(\boldsymbol{\Phi}_{nm}=\varphi(||\mathbf{x}_{n}-\mathbf{x}_{m}||)\), \(\mathbf{w}=(w_{1},\ldots,w_{M})\) is the weight vector, and \(\mathbf{y}=(y_{1},\ldots,y_{N})\in\mathbb{R}^{N}\) is the response or target variable vector. It has been proven [44] that for some RBFs (e.g., the Gaussian) the matrix \(\boldsymbol{\Phi}\) is not singular if all the data points are distinct with \(N>2\).
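As a concrete illustration, a minimal sketch of this interpolation step (assuming a Gaussian basis with unit shape parameter on toy 1-D data) could read:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 4.0, size=10))     # distinct 1-D data points
y = np.sin(X)                                   # target values

# Phi[n, m] = exp(-0.5 * ||x_n - x_m||^2): Gaussian basis, unit shape parameter
Phi = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
w = np.linalg.solve(Phi, y)                     # weights solving Eq. 2
assert np.allclose(Phi @ w, y)                  # the interpolant matches the data
```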
Our first modification to the model in Eq. 1 concerns the kernel. We are primarily interested in learning and exploiting hidden correlation structures in the dataset, so we equip our RBF model with a Gaussian basis function with a symmetric positive definite matrix as follows
\[\begin{split}\varphi(||\mathbf{x}-\mathbf{x}_{j}||)=\exp\left\{- \frac{1}{2}(\mathbf{x}-\mathbf{x}_{j})^{T}\mathbf{M}(\mathbf{x}-\mathbf{x}_{j })\right\}\\ =\exp\left\{-\frac{1}{2}(\mathbf{x}-\mathbf{x}_{j})^{T}\mathbf{U}^ {T}\mathbf{U}(\mathbf{x}-\mathbf{x}_{j})\right\}\end{split} \tag{3}\]
where the matrix \(\mathbf{M}\) is a \(D\times D\) symmetric positive definite precision matrix that can be factored as \(\mathbf{M}=\mathbf{U}^{T}\mathbf{U}\), with \(\mathbf{U}\) upper triangular.
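As a minimal sketch of this parameterization (our illustration, not the authors' released PyTorch code), the kernel of Eq. 3 with a learnable triangular factor \(\mathbf{U}\) can be written as:

```python
import torch

D = 4
iu = torch.triu_indices(D, D)                       # upper-triangle index pairs
u = torch.randn(iu.shape[1], requires_grad=True)    # learnable u = vech(U)

def gaussian_kernel(X, C, u):
    # Phi[n, m] = exp(-0.5 (x_n - c_m)^T U^T U (x_n - c_m)), as in Eq. 3
    U = torch.zeros(D, D)
    U[iu[0], iu[1]] = u                             # rebuild the triangular factor
    diff = X[:, None, :] - C[None, :, :]            # (N, M, D) pairwise differences
    d2 = ((diff @ U.T) ** 2).sum(-1)                # squared Mahalanobis distances
    return torch.exp(-0.5 * d2)

X = torch.randn(8, D)
Phi = gaussian_kernel(X, X, u)                      # interpolation case: centers = data
print(Phi.shape)                                    # torch.Size([8, 8])
```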
The function approximation problem, in this case, can be addressed by solving the following nonconvex optimization problem, where we define the vector \(\mathbf{u}=\text{vech}(\mathbf{U})\) and the operator vech denotes the half-vectorization of a matrix, i.e., the upper triangular entries of the matrix \(\mathbf{U}\) are collected inside the vector \(\mathbf{u}\)
\[\min_{\mathbf{w},\mathbf{u}}\quad E(\mathbf{w},\mathbf{u}) \tag{4}\]
and the error function in the regression case takes the following form
\[E(\mathbf{w},\mathbf{u})=\frac{1}{2}\sum_{n=1}^{N}(y_{n}-f_{n})^{2}=\frac{1}{2 }\sum_{n=1}^{N}\left(y_{n}-\sum_{m=1}^{M}w_{m}\exp\left\{-\frac{1}{2}(\mathbf{ x}_{n}-\mathbf{x}_{m})^{T}\mathbf{U}^{T}\mathbf{U}(\mathbf{x}_{n}-\mathbf{x}_{m}) \right\}\right)^{2} \tag{5}\]
The number of parameters to optimize in this case is \(P=M+D+\frac{D\times(D-1)}{2}\). From numerical experiments, the model \(f\) defined in Eq. 1 can produce a very sharply peaked function at the end of the minimization of the error function defined in Eq. 5. In such cases, we encountered large values in the entries of the precision matrix \(\mathbf{M}\). It is therefore natural to enforce the smoothness of \(f\) through regularization. The bumpiness of the function \(f\) is controlled by its second derivative, which depends on both the weights \(\mathbf{w}\) and the precision matrix \(\mathbf{M}\). Consequently, the regularizers force the Gaussian kernel to be as flat as possible, penalizing large values of the entries of the matrix \(\mathbf{M}\) along with the weights \(\mathbf{w}\) and promoting the smoothness of \(f\). After the considerations above, the regularized error function becomes
\[R(\mathbf{w},\mathbf{u})=E(\mathbf{w},\mathbf{u})+G(\mathbf{w},\mathbf{u}) \tag{6}\]
where the penalty function is given by
\[G(\mathbf{w},\mathbf{u})=\frac{1}{2}\lambda_{\mathbf{u}}||\mathbf{u}||^{2}+ \frac{1}{2}\lambda_{\mathbf{w}}||\mathbf{w}||^{2} \tag{7}\]
Then we solve the following nonconvex optimization problem
\[\min_{\mathbf{w},\mathbf{u}}\ \ R(\mathbf{w},\mathbf{u}) \tag{8}\]
with the partial gradients with respect to \(\mathbf{w}\) and \(\mathbf{u}\) given as follows
\[\nabla R(\mathbf{w})=-\boldsymbol{\Phi}^{T}(\mathbf{y}-\boldsymbol{\Phi}\mathbf{w})+\lambda_{\mathbf{w}}\mathbf{w}=-\boldsymbol{\Phi}^{T}\mathbf{r}+\lambda_{\mathbf{w}}\mathbf{w} \tag{9}\] \[\nabla R(\mathbf{u})=\text{vech}\left(\sum_{n=1}^{N}r_{n}\sum_{m=1}^{M}w_{m}\mathbf{G}_{nm}\boldsymbol{\Phi}_{nm}\right)+\lambda_{\mathbf{u}}\mathbf{u} \tag{10}\]
where the \(D\times D\) matrix \(\mathbf{G}_{nm}\) is defined as \(\mathbf{G}_{nm}=(\mathbf{x}_{n}-\mathbf{x}_{m})(\mathbf{x}_{n}-\mathbf{x}_{m})^{T}\mathbf{U}\) and \(r_{n}\) is the \(n\)th component of the vector \(\mathbf{r}=\mathbf{y}-\boldsymbol{\Phi}\mathbf{w}\).
Until now we have assumed that the centers are exactly given by our training dataset. This might be unfeasible and computationally very expensive for very large \(N\). This issue can be easily solved by selecting the number \(M\) of centers, collected in the \(M\times D\) matrix \(\mathbf{C}\), to be less than the number of data points \(N\), as shown in [13]. Depending on how the centers are selected, we can distinguish two different strategies:
1. Unsupervised selection of the centers: in this case, one can choose the \(M\) centers \(\mathbf{c}_{m}\) at random among the data points or by running a clustering algorithm (e.g. \(k\)-means). Given the centers, the objective function is the same as in Eq. 6 except that now \(M<N\) \[\min_{\mathbf{w},\mathbf{u}}\ \ R(\mathbf{w},\mathbf{u})\] (11) with \[R(\mathbf{w},\mathbf{u})=\frac{1}{2}\sum_{n=1}^{N}\left(y_{n}-\sum_{m=1}^{M}w_{m}\exp\left\{-\frac{1}{2}(\mathbf{x}_{n}-\mathbf{c}_{m})^{T}\mathbf{U}^{T}\mathbf{U}(\mathbf{x}_{n}-\mathbf{c}_{m})\right\}\right)^{2}+G(\mathbf{w},\mathbf{u})\] (12) The partial gradients are the same as in Eq. 9 and Eq. 10, and the total number of parameters is still \(P=M+D+\frac{D\times(D-1)}{2}\).
2. Supervised selection of the centers: in this case the centers are considered learnable, adding \(D\times M\) parameters to the optimization problem. With this variation, recasting the matrix containing the centers as a vector \(\mathbf{c}=\text{vec}(\mathbf{C})\), the optimization problem takes the following form \[\min_{\mathbf{w},\mathbf{u},\mathbf{c}}\ \ R(\mathbf{w},\mathbf{u}, \mathbf{c})\] (13)
with
\[R(\mathbf{w},\mathbf{u},\mathbf{c})=\frac{1}{2}\sum_{n=1}^{N}\left(y_{n}-\sum_{m=1 }^{M}w_{m}\exp\left\{-\frac{1}{2}(\mathbf{x}_{n}-\mathbf{c}_{m})^{T}\mathbf{U}^ {T}\mathbf{U}(\mathbf{x}_{n}-\mathbf{c}_{m})\right\}\right)^{2}+G(\mathbf{w}, \mathbf{u},\mathbf{c}) \tag{14}\]
The partial gradient with respect to the \(m\)th center is the following
\[\nabla R(\mathbf{c}_{m})=-w_{m}\sum_{n=1}^{N}r_{n}\mathbf{U}^{T}\mathbf{U}(\mathbf{x}_{n}-\mathbf{c}_{m})\boldsymbol{\Phi}_{nm}+\lambda_{\mathbf{c}}\mathbf{c}_{m} \tag{15}\]
together with the partial gradients in Eq. 9 and Eq. 10, where \(r_{n}\) is the \(n\)th component of the vector \(\mathbf{r}=\mathbf{y}-\boldsymbol{\Phi}\mathbf{w}\). The number of parameters is in this case \(P=M\times D+M+D+\frac{D\times(D-1)}{2}\). In the penalty function we introduced the possibility to regularize the positions of the centers, controlled by \(\lambda_{\mathbf{c}}\), as follows: \(G(\mathbf{w},\mathbf{u},\mathbf{c})=\frac{1}{2}\lambda_{\mathbf{u}}||\mathbf{u}||^{2}+\frac{1}{2}\lambda_{\mathbf{w}}||\mathbf{w}||^{2}+\frac{1}{2}\lambda_{\mathbf{c}}||\mathbf{c}||^{2}\). A code sketch of this supervised variant is given after this list.
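The following self-contained sketch illustrates the supervised variant of Eqs. 13-14 (again our illustration rather than the released implementation; the toy data, initialization scales, and regularizer values are assumptions), with weights, triangular factor, and centers all learned jointly by a gradient-based optimizer:

```python
import torch

class GRBFNN(torch.nn.Module):
    def __init__(self, D, M):
        super().__init__()
        self.iu = torch.triu_indices(D, D)              # u = vech(U) indices
        self.u = torch.nn.Parameter(torch.randn(self.iu.shape[1]) * 0.1)
        self.w = torch.nn.Parameter(torch.zeros(M))     # basis-function weights
        self.c = torch.nn.Parameter(torch.randn(M, D))  # learnable centers
        self.D = D

    def U(self):
        U = torch.zeros(self.D, self.D, device=self.u.device)
        U[self.iu[0], self.iu[1]] = self.u              # rebuild triangular factor
        return U

    def forward(self, X):
        diff = X[:, None, :] - self.c[None, :, :]       # (N, M, D)
        d2 = ((diff @ self.U().T) ** 2).sum(-1)         # squared Mahalanobis distances
        return torch.exp(-0.5 * d2) @ self.w            # f(x) = sum_m w_m phi_m(x)

def loss_fn(model, X, y, lam_w=1e-2, lam_u=1e-2, lam_c=0.0):
    # Regularized squared error of Eq. 14 (example regularizer values)
    r = y - model(X)
    penalty = 0.5 * (lam_u * model.u.pow(2).sum()
                     + lam_w * model.w.pow(2).sum()
                     + lam_c * model.c.pow(2).sum())
    return 0.5 * r.pow(2).sum() + penalty

# Toy regression target: y = sin(0.5 x1 + 0.5 x2)
torch.manual_seed(0)
X = torch.rand(200, 2) * 4 - 2
y = torch.sin(0.5 * X[:, 0] + 0.5 * X[:, 1])
model = GRBFNN(D=2, M=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(2000):
    opt.zero_grad()
    loss = loss_fn(model, X, y)
    loss.backward()                                     # autograd replaces Eqs. 9, 10, 15
    opt.step()
print(float(loss))
```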
### Extracting Insights from the GRBF-NN: Feature Importance and Active Subspace
After obtaining the parameters of the GRBF-NN, we can extract valuable information from the spectrum of the matrix \(\mathbf{M}\). Specifically, we aim to determine whether the variability of the fitted model \(f\) is restricted to a lower-dimensional space compared to the original space, as well as to identify the directions in which the function \(f\) is most sensitive. This allows us to establish the active subspace. It is easy to observe that the exponent of Eq. 3 is the following quadratic form also known as the squared Mahalanobis distance
\[d_{M}^{2}(\mathbf{x})=(\mathbf{x}_{i}-\mathbf{x}_{j})^{T}\mathbf{M}(\mathbf{x }_{i}-\mathbf{x}_{j}) \tag{16}\]
which expresses the functional dependence of the Gaussian kernel on the input variable \(\mathbf{x}\). More insights can be revealed by expanding Eq. 16 in terms of eigenvectors and eigenvalues
\[d_{M}^{2}(\mathbf{x})=(\mathbf{x}_{i}-\mathbf{x}_{j})^{T}\mathbf{M}(\mathbf{x}_{i}-\mathbf{x}_{j})=(\mathbf{x}_{i}-\mathbf{x}_{j})^{T}\mathbf{V}\boldsymbol{\Gamma}\mathbf{V}^{T}(\mathbf{x}_{i}-\mathbf{x}_{j})=(\mathbf{z}_{i}-\mathbf{z}_{j})^{T}\boldsymbol{\Gamma}(\mathbf{z}_{i}-\mathbf{z}_{j}) \tag{17}\]
which shows that the second derivatives of Eq. 17 are represented by the eigenvalues in the diagonal matrix \(\boldsymbol{\Gamma}\), after a rotation into the latent space \(\mathbf{z}\in\mathbb{Z}\subset\mathbb{R}^{K}\) (with \(K=D\)) under the new basis defined by the eigenvectors in the \(D\times K\) matrix \(\mathbf{V}\). The presence of zero eigenvalues indicates that the factors of variation in \(f(\mathbf{x})\) are manifested in a lower-dimensional subspace than the input dimensionality \(D\). Additionally, the eigenvector \(\mathbf{v}_{k}\) corresponding to the largest eigenvalue \(\gamma_{k}\) identifies the direction of maximum curvature of the quadratic function in Eq. 16, thereby pinpointing the direction in which \(f\) is most globally sensitive. Furthermore, the Gaussian kernel in Eq. 3 in the latent space \(\mathbb{Z}\) is given by a product of \(D\) independent contributions
\[\begin{split}\varphi(||\mathbf{x}-\mathbf{x}_{j}||)&=\exp\left\{-\frac{1}{2}(\mathbf{x}-\mathbf{x}_{j})^{T}\mathbf{M}(\mathbf{x}-\mathbf{x}_{j})\right\}\\ &=\exp\left\{-\frac{1}{2}\sum_{d=1}^{D}\gamma_{d}(z_{d}-z_{j,d})^{2}\right\}\\ &=\prod_{d=1}^{D}\exp\left\{-\frac{1}{2}\gamma_{d}(z_{d}-z_{j,d})^{2}\right\}\end{split} \tag{18}\]
highlighting the fact that the variability of the model \(f\) is axis-aligned within the latent space.
To identify which input variables \(x_{d}\) are more critical in the prediction task of our model, we can observe that the matrix \(\mathbf{V}\), which contains the eigenvectors of the matrix \(\mathbf{M}\), represents the Jacobian of the linear transformation that maps the input space to the latent space, as demonstrated in Eq. 17.
Considering the original input vector \(\mathbf{x}\) as generated from a linear combination of latent variables \(\mathbf{z}\) and the eigenvectors \(\mathbf{V}\)
\[\mathbf{x}=\mathbf{V}\mathbf{z} \tag{19}\]
they represent simply the following derivative
\[\frac{\partial\mathbf{x}}{\partial\mathbf{z}}=\mathbf{V} \tag{20}\]
showing that the \(k\)th eigenvector \(\mathbf{v}_{k}\) can be interpreted as the contribution of the \(k\)th latent variable \(z_{k}\) to the variation of \(\mathbf{x}\).
Each element of the matrix \(\mathbf{V}\) has to be taken in absolute value to obtain meaningful results, so we define the matrix \(\mathbf{\tilde{V}}\) whose components are given by \(\tilde{v}_{d,k}=|v_{d,k}|\). To obtain the feature importance ranking vector, we need to scale the eigenvectors \(\tilde{\mathbf{v}}_{k}\) by their corresponding eigenvalues \(\gamma_{k}\), as the eigenvectors are typically returned normalized to unit norm by numerical procedures. This scaling ensures that more importance is given to the directions with the most significant variation. The resulting \(D\)-dimensional feature importance ranking vector can be defined as follows
\[\text{Feature Importance}=\sum_{k=1}^{K}\gamma_{k}\tilde{\mathbf{v}}_{k} \tag{21}\]
A final normalization step is performed so the feature importance vector ranges between zero and one.
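A minimal sketch of this post-training analysis (with a hypothetical learned factor \(\mathbf{U}\) standing in for the trained parameters) is:

```python
import numpy as np

# Hypothetical learned triangular factor U (stands in for a trained model)
U = np.array([[1.0, 0.9],
              [0.0, 0.1]])
M = U.T @ U                                   # learned precision matrix

gamma, V = np.linalg.eigh(M)                  # eigenvalues (ascending) and eigenvectors
order = np.argsort(gamma)[::-1]
gamma, V = gamma[order], V[:, order]

# Active subspace: project the data onto the leading eigenvectors (Z = X V)
X = np.random.default_rng(0).normal(size=(100, 2))
K = 1                                         # keep directions with non-negligible gamma
Z = X @ V[:, :K]

# Feature importance (Eq. 21): eigenvalue-weighted absolute eigenvectors,
# then normalized so the vector ranges between zero and one
fi = (np.abs(V) * gamma).sum(axis=1)
fi = fi / fi.max()
print("explained variability:", gamma / gamma.sum())
print("feature importance:", fi)
```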
### Numerical Examples
In this section, we want to provide some simple examples to highlight graphically the behavior of the proposed model. We first start with two simple classification problems with \(N=100\) and \(D=2\). In the first problem there are two classes \(c_{1}\) and \(c_{2}\) that are normally distributed with means \(\boldsymbol{\mu}_{c_{1}}^{T}=[1,1]^{T}\) and \(\boldsymbol{\mu}_{c_{2}}^{T}=[2.8,2.8]^{T}\), respectively, and the same covariance matrix \(\boldsymbol{\Sigma}=\left[\begin{smallmatrix}0.81&0.72\\ 0.72&0.66\end{smallmatrix}\right]\). The scatter plot of the two classes is shown in Fig. 1, where the yellow and purple dotted points represent classes \(c_{1}\) and \(c_{2}\), respectively. In Fig. 1 (upper right figure), we display the fitted GRBF-NN model with \(M=2\) centers (highlighted by the red points) obtained with the unsupervised selection strategy by \(k\)-means clustering. Furthermore, we plot the eigenvector \(\mathbf{v}_{1}\) corresponding to the dominant eigenvalue \(\gamma_{1}\) of the matrix \(\mathbf{M}\) as white arrows with the origin at the two centers. This shows that the fitted model \(f\) obtains most of its variability along the direction of \(\mathbf{v}_{1}\), which is orthogonal in this case to the contour levels of \(f\). In Fig. 1 (lower right figure), we show the fitted model in the latent space obtained by projecting the dataset \(\mathbf{X}\) onto the new basis defined by the eigenvectors of \(\mathbf{M}\), defining the projected dataset \(\mathbf{Z}=\mathbf{X}\mathbf{V}\). We observe that all the variation of \(f\) is aligned with the first latent variable \(z_{1}\), which indicates that the fraction \(\frac{\gamma_{1}}{\sum_{k}^{K}\gamma_{k}}\) is approximately equal to 1. Fig. 1 (lower left figure) shows the feature importance estimated by our model using Eq. 21, which validates that the input feature \(x_{2}\) plays a more significant role in the discrimination power between the classes than \(x_{1}\).
Another example of a classification problem is shown in Fig. 2. In this case, we have two noisy interleaving half circles with \(N=100\) and \(D=2\), as seen in Fig. 2 (upper left figure). To achieve a stronger discriminative power from the model, we choose \(M=16\) centers. In contrast to the previous example, not all of the model \(f\) variability is concentrated along the direction identified by the eigenvector \(\mathbf{v}_{1}\) related to the dominant eigenvalue \(\gamma_{1}\). In this case, the fraction \(\frac{\gamma_{1}}{\sum_{k}^{K}\gamma_{k}}\) is approximately 0.8, meaning that the resulting feature importance in Eq. 21 includes the contribution of the eigenvector \(\mathbf{v}_{2}\). This is highlighted in the barplot in Fig. 2 (lower left figure), where we decomposed the feature importance showing the contribution of each term in Eq. 21. Since \(\mathbf{v}_{2}\) is orthogonal to \(\mathbf{v}_{1}\), it gives more importance to the feature \(x_{2}\), because \(\mathbf{v}_{1}\) is quasi-parallel to the input axis \(x_{1}\).
We present an example of regression, where the function to be approximated is \(y=\sin(ax_{1}+bx_{2})\), with \(a\) and \(b\) being real scalars. Fig. 3 shows the case where \(a\) and \(b\) are equal to 0.5, and the true function
is depicted in Fig. 3 (upper left figure). We then use our proposed model to obtain an approximation, as shown in Fig. 3 (upper right figure) along the direction given by the eigenvector \(\mathbf{v}_{1}\), with the centers represented by dotted red points. Furthermore, we perform a supervised dimensionality reduction from the original two-dimensional space to the one-dimensional subspace defined by the first eigenvector \(\mathbf{v}_{1}\). This subspace captures the 'active' part of the function where most of the variation is realized, as illustrated in Fig. 3 (lower right figure). Finally, we estimate the feature importance using Eq. 21 and find that \(x_{1}\) and \(x_{2}\) contribute equally, as expected. This result is depicted in Fig. 3 (lower left figure). In this final example, we altered the values of the scalars \(a\) and \(b\) to \(0.1\) and \(0.9\), respectively. This modification resulted in a change in the feature importance estimated by our model, as depicted in Fig. 4.
In summary, beyond solving a classical regression/classification task, the GRBF-NN model provides the user with valuable information about the model behavior: it allows visualization of the fitted model \(f\) in the active subspace, thereby revealing the underlying factors of variation of the data, and in parallel it allows discovering which input features are most important for the prediction task.
## 4 Numerical Experiments
This section aims to provide a comprehensive evaluation of the proposed model by assessing its predictive performance and feature selection quality. We consider two variants of the GRBF-NN model, one with unsupervised center selection (GRBF-NN\({}_{k}\)) as given in Eq. 11 and the other with supervised center selection (GRBF-NN\({}_{c}\)) as given in Eq. 13. We compare the performance of these models with other popular models such as the Multi-Layer Perceptron (MLP) and Support Vector Machines (SVMs) [18]. As the GRBF-NN model incorporates feature selection, it can be classified as an embedding method. To provide a comprehensive benchmark, we also include other widely used embedding methods such as Random Forest (RF) [12] and Gradient Boosting (GB) [26], which have shown strong performance for tabular data. In addition, we include state-of-the-art embedding deep learning methods such as Deep Feature Selection (DFS) [39] and the method proposed in [65], referred to as FIDL (Feature Importance for Deep Learning) in this comparison for simplicity.

Figure 1: The GRBF-NN behavior is graphically represented in four subfigures: the upper left figure shows the classification problem with purple and yellow dots representing the two classes. The upper right figure shows the fitted GRBF-NN in the input space, while the lower right figure shows the fitted GRBF-NN model in the active subspace. Contour levels show estimated class probabilities. The red dotted points represent the GRBF-NN centers. The white arrow highlights the direction of the dominant eigenvector \(\mathbf{v}_{1}\). Finally, the lower left subfigure shows the feature importance estimated from the GRBF-NN.
### Datasets
To test the predictive performance of our model we consider 20 different real-world problems as summarized in Tab. 1. We have a total of 6 binary classifications, 4 multiclass, 1 time series, and 9 regression problems. In the following, we provide a description of each of them:
* **Digits**[4]: The authors created a digit database by collecting 250 samples from 44 writers. The samples written by 30 writers are used for training, cross-validation, and writer-dependent testing, and the digits written by the other 14 are used for writer-independent testing. For the current experiment, we use the digits 3 and 8 for feature selection purposes, so that \(N=357\) and \(D=64\).
* **Iris**[24]: One of the most famous datasets in the pattern recognition literature, contains 3 classes of 50 instances each (\(N=150,D=4\)), where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are not linearly separable from each other.
* **Breast Cancer**[62]: Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe the characteristics of the cell nuclei present in the image. In this dataset, \(N=569\), \(D=30\), and two classes.
Figure 2: The GRBF-NN behavior is graphically represented in four subfigures: the upper left figure shows the classification problem with purple and yellow dots representing the two classes. The upper right figure shows the fitted GRBF-NN in the input space, while the lower right figure shows the fitted GRBF-NN model in the active subspace. Contour levels show estimated class probabilities. The red dotted points represent the GRBF-NN centers. The white arrow highlights the direction of the dominant eigenvector \(\mathbf{v}_{1}\). Finally, the lower left subfigure shows the feature importance estimated from the GRBF-NN.
* **Wine**[3]: The data are the results of a chemical analysis of wines grown in the same region in Italy by three different cultivators. There are thirteen different measurements taken for different constituents found in the three types of wine. In this dataset, \(N=173\), \(D=13\), and three classes.
* **Australian**[21]: This is the famous Australian Credit Approval dataset, originating from the StatLog project. It concerns credit card applications. All attribute names and values have been changed to meaningless symbols to protect the confidentiality of the data. In this dataset, \(N=600\), \(D=15\), and two classes.
* **Credit-g**[21]: This dataset classifies people described by a set of attributes as good or bad credit risks, there are \(D=20\) features, \(N=1000\) data points, and two classes in this dataset.
* **Glass**[23]: The Glass identification database. The study of the classification of types of glass was motivated by criminological investigation. There are \(D=9\) features and \(N=214\) data points in this multiclass dataset.
* **Blood**[67]: Data taken from the Blood Transfusion Service Center in Hsin-Chu City in Taiwan. The center passes its blood transfusion service bus to one university in Hsin-Chu City to gather blood donated about every three months. The target attribute is a binary variable representing whether the donor donated blood in March 2007. There are \(D=4\) features, \(N=748\) data points, and two classes in this dataset.
* **Heart Disease**[21]: This database contains 76 attributes, but all published experiments refer to using a subset of \(D=14\) of them and \(N=270\). The goal is to predict the presence of heart disease in the patient.
Figure 3: The GRBF-NN behavior is depicted in four subfigures: the upper left shows the regression problem \(y=t(\mathbf{x})=\sin(0.5x_{1}+0.5x_{2})\), while the upper right displays the fitted GRBF-NN in the input space. The dominant eigenvector \(\mathbf{v}_{1}\) is indicated by a white arrow, and the GRBF-NN centers are shown as red dotted points. The lower right shows the fitted GRBF-NN model projected in the one-dimensional active subspace. The function values at the input data and at the centers are represented by black and red dotted points, respectively. Finally, the lower left subfigure displays the feature importance estimated from the GRBF-NN. Function values are normalized.
* **Vowel**[19]: Speaker-independent recognition of the eleven steady-state vowels of British English using a specified training set of LPC-derived log area ratios. There are \(D=12\) features, \(N=990\) data points, and eleven classes in this dataset.
* **Delhi Weather**[36]: The Delhi weather dataset was transformed from a time series problem into a supervised learning problem by using past time steps as input variables and the subsequent time step as the output variable, representing the humidity. In this dataset \(N=1461\) and \(D=7\).
* **Boston**[49]: This dataset contains information collected by the U.S. Census Service concerning housing in the area of Boston, Mass., and has been used extensively throughout the literature to benchmark algorithms for regression. In this dataset \(N=506\) and \(D=14\).
* **Diabetes**[60]: Ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements were obtained for each of \(N=442\) diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline.
* **Prostatic Cancer**[61]: The study examined the correlation between the level of prostate-specific antigen (PSA) and a number of clinical measures, in \(N=97\) men who were about to receive a radical prostatectomy. The goal is to predict the log of PSA (lpsa) from \(D=4\) clinical measurements.
* **Liver**[43]: It is a regression problem where the first 5 variables are all blood tests that are thought to be sensitive to liver disorders that might arise from excessive alcohol consumption. Each line in the dataset constitutes the record of a single male individual. There are \(D=5\) features and \(N=345\) data points.
* **Plasma**[47]: A cross-sectional study has been designed to investigate the relationship between personal characteristics and dietary factors, and plasma concentrations of retinol, beta-carotene, and other carotenoids. Study subjects (\(N=315\)) were patients who had an elective surgical procedure during a three-year period to biopsy or remove a lesion of the lung, colon, breast, skin, ovary, or uterus that was found to be non-cancerous.

Figure 4: The GRBF-NN behavior is depicted in four subfigures: the upper left shows the regression problem \(y=t(\mathbf{x})=\sin(0.1x_{1}+0.9x_{2})\), while the upper right displays the fitted GRBF-NN in the input space. The dominant eigenvector \(\mathbf{v}_{1}\) is indicated by a white arrow, and the GRBF-NN centers are shown as red dotted points. The lower right shows the fitted GRBF-NN model projected in the one-dimensional active subspace. The function values at the input data and at the centers are represented by black and red dotted points, respectively. Finally, the lower left subfigure displays the feature importance estimated from the GRBF-NN. Function values are normalized.
* **Cloud**[21]: The data sets we propose to analyze are constituted of \(N=1024\) vectors, each vector including \(D=10\) parameters. Each image is divided into 16x16 super-pixels, and in each super-pixel we compute a set of parameters for the visible channel (mean, max, min, mean distribution, contrast, entropy, second angular momentum) and the IR channel (mean, max, min).
* **DTMB-5415**[22]: The DTMB-5415 datasets come from a real-world naval hydrodynamics problem. The 21 input variables represent the design variables responsible for the shape modification of the hull, while the output variable represents the corresponding total resistance coefficient of the hull, simulated through a potential flow solver. We propose two versions of the same problem: DTMB-5415\({}^{1}\), where all the 21 design variables are related to the output variable, and DTMB-5415\({}^{2}\), where 5 of the 21 design variables are not related to the output, so that we can also evaluate the models from a feature selection perspective.
* **Body Fat**[55]: Estimates of the percentage of body fat are determined by underwater weighing and various body circumference measurements for \(N=252\) men and \(D=14\) different input features.
Furthermore, we use synthetic datasets to provide a deeper comparison of the feature importance and feature selection results obtained by the methods since the ground truth of the feature importance related to the learning task is known. This allows for the evaluation of the quality of the feature selection and ranking provided by the methods, as the true feature importance can be compared to the estimates obtained by the models. The synthetic datasets considered are the following:
* **Binary classification**[31] (P1): given \(y=-1\), the ten input features are generated with \((x_{1},\ldots,x_{10})\sim\mathcal{N}(0,\mathbf{I}_{10})\). Given \(y=1\), \(x_{1}\) through \(x_{4}\) are standard normal conditioned on \(9\leq\sum_{j=1}^{4}x_{j}^{2}\leq 16\), and \((x_{5},\ldots,x_{10})\sim\mathcal{N}(0,\mathbf{I}_{6})\). The first four features are relevant for P1.

\begin{table}
\begin{tabular}{l c c c c} \hline \hline Name & \(N\) & \(D\) & Task & Reference \\ \hline Digits & 357 & 64 & Binary classification & [4] \\ Iris & 150 & 4 & Multiclass classification & [24] \\ Breast Cancer & 569 & 30 & Binary classification & [62] \\ Wine & 173 & 13 & Multiclass classification & [3] \\ Australian & 600 & 15 & Binary classification & [21] \\ Credit-g & 1000 & 20 & Binary classification & [21] \\ Glass & 214 & 9 & Multiclass classification & [23] \\ Blood & 748 & 4 & Binary classification & [67] \\ Heart Disease & 270 & 13 & Binary classification & [21] \\ Vowel & 990 & 12 & Multiclass classification & [19] \\ Delhi Weather & 1461 & 7 & Time series & [36] \\ Boston Housing & 506 & 14 & Regression & [49] \\ Diabetes & 214 & 9 & Regression & [60] \\ Prostatic Cancer & 97 & 4 & Regression & [61] \\ Liver & 345 & 5 & Regression & [43] \\ Plasma & 315 & 16 & Regression & [47] \\ Cloud & 108 & 5 & Regression & [21] \\ DTMB-5415\({}^{1}\) & 42 & 21 & Regression & [22] \\ DTMB-5415\({}^{2}\) & 42 & 21 & Regression & [22] \\ Body Fat & 252 & 14 & Regression & [55] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Datasets considered in the benchmark.
* 3-**dimensional XOR as 4-way classification**[14] (P2): Consider the 8 corners of the 3-dimensional hypercube \((v_{1},v_{2},v_{3})\in\{-1,1\}^{3}\), and group them by the tuples \((v_{1}v_{3},v_{2}v_{3})\), leaving 4 sets of vectors paired with their negations \(v^{(i)},-v^{(i)}\). Given a class \(i\), a point is generated from the mixture distribution \((1/2)\mathcal{N}(v^{(i)},0.5\mathbf{I}_{3})+(1/2)\mathcal{N}(-v^{(i)},0.5\mathbf{I}_{3})\). Each example additionally has 7 standard normal noise features for a total of \(D=10\) dimensions. The first three features are relevant for P2.
* **Nonlinear regression**[25] (P3): The 10-dimensional inputs \(x\) are independent features uniformly distributed on the interval \([0,1]\). The output \(y\) is created according to the formula \(y=10\sin(\pi x_{1}x_{2})+20(x_{3}-0.5)^{2}+10x_{4}+5x_{5}+\epsilon\) with \(\epsilon\sim\mathcal{N}(0,1)\). The first five features are relevant for P3 (a generation sketch is given after this list).
In all those three cases we varied the number of data points with \(N\in\{100,500,1000\}\).
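For instance, a minimal sketch generating P3 (our code, following the formula given above) is:

```python
import numpy as np

def make_p3(N, D=10, seed=0):
    # P3: y = 10 sin(pi x1 x2) + 20 (x3 - 0.5)^2 + 10 x4 + 5 x5 + eps,
    # with 5 relevant features and D - 5 pure-noise features.
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(N, D))
    eps = rng.normal(0.0, 1.0, size=N)
    y = (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
         + 20 * (X[:, 2] - 0.5) ** 2
         + 10 * X[:, 3] + 5 * X[:, 4] + eps)
    return X, y

for N in (100, 500, 1000):                    # sample sizes used in the benchmark
    X, y = make_p3(N)
    print(X.shape, y.shape)
```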
### Numerical Set-up
In this section, we present the numerical details of our experiment. We evaluate the performance of the GRBF-NN model for feature selection with unsupervised (GRBF-NN\({}_{k}\)) and supervised (GRBF-NN\({}_{c}\)) center selection, as defined in Eq.11 and Eq. 13, respectively. To compute the centers for GRBF-NN\({}_{k}\), we use the popular \(k\)-means clustering algorithm. Both GRBF-NN\({}_{c}\) and GRBF-NN\({}_{k}\) were optimized using Adam [37] for a maximum of 10000 epochs. We implemented the GRBF-NN in Pytorch [52].
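A minimal sketch of the unsupervised center selection (assuming the GRBFNN module from the earlier sketch; the data here are synthetic placeholders) is:

```python
import numpy as np
import torch
from sklearn.cluster import KMeans

# Fix the M centers to k-means centroids of the training inputs (GRBF-NN_k).
X = np.random.default_rng(0).uniform(0, 1, size=(500, 10))
M = 32
centers = KMeans(n_clusters=M, n_init=10, random_state=0).fit(X).cluster_centers_

model = GRBFNN(D=X.shape[1], M=M)             # hypothetical reuse of the earlier sketch
with torch.no_grad():
    model.c.copy_(torch.tensor(centers, dtype=torch.float32))
model.c.requires_grad_(False)                 # centers stay fixed for GRBF-NN_k
```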
We perform a grid search to approximately find the best set of hyperparameters of all the models considered in these numerical experiments.
For the GRBF-NN the grid search is composed as follows:
* Number of centers: For regression problems, the number of centers \(M\) can take the values \(M\in\{8,32,128\}\), whereas for classification \(M\in\{2,4,8,16,32\}\).
* Regularizers: \((\lambda_{\mathbf{w}},\,\lambda_{\mathbf{u}})\in\{0,10^{-3},10^{-2},10^{-1},1,10^{1},10^{2},10^{3}\}\).
* Adam learning rate: \(\alpha\in\{10^{-3},10^{-2}\}\).
For the SVM model, we used a Gaussian kernel, and the grid search is composed as follows:
* Gaussian kernel width: \(\sigma^{2}\in\{0,10^{-3},10^{-2},10^{-1},1,10^{1},10^{2},10^{3}\}\).
* Regularizer: \(C\in\{0,10^{-3},10^{-2},10^{-1},1,10^{1},10^{2},10^{3}\}\).
For the RF model, the grid search is composed as follows:
* Depth of the tree: \(d_{t}\in\{2,4,8,16,32,64,128\}\).
* Minimum number of samples required to be a leaf node: \(s_{t}\in\{1,5,10,20\}\).
* Number of decision trees: \(n_{t}\in\{10,20,50,100,200,400,800\}\).
For the GB model, the grid search is composed as follows:
* Learning rate: \(l_{b}\in\{10^{-3},10^{-2},10^{-1},1\}\).
* Number of boosting stages: \(n_{b}\in\{10,20,50,100,200,400,800\}\).
* Maximum depth of the individual regression estimators: \(d_{b}\in\{2,4,8,16,32,64,128\}\).
For the MLP the grid search is composed as follows:
* Regularizer (\(L2\)-norm): \(\lambda\in\{0,10^{-3},10^{-2},10^{-1},1,10^{1},10^{2},10^{3}\}\)
* Network architecture: A two-hidden-layer architecture with the following combinations of the number of neurons in the two layers is considered \(\{(D,\lceil D/2\rceil),(2D,D),(2D,\lceil D/2\rceil)\}\), with rectifier activation functions [27].
* Adam learning rate: \(\alpha\in\{10^{-3},10^{-2}\}\).
For the DFS we use the same grid search hyperparameters as in the MLP case. This is because the DFS is the same as an MLP but with an additional sparse one-to-one layer added between the input and the first hidden layer, where each input feature is weighted. We use the implementation available on the following link 2.
Footnote 2: [https://github.com/cyustcer/Deep-Feature-Selection](https://github.com/cyustcer/Deep-Feature-Selection)
For the FIDL, we use the author's implementation of the algorithm available at the following link 3 and the following grid search hyperparameters:
Footnote 3: [https://github.com/maksym33/FeatureImportanceDL](https://github.com/maksym33/FeatureImportanceDL)
* Network architecture: A two-hidden-layer architecture with the following combinations of the number of neurons in the two layers is considered \(\{(D,\lceil D/2\rceil),(2D,D),(2D,\lceil D/2\rceil)\}\).
* Number of important features: \(s=\lceil D/2\rceil\).
For FIDL, on the datasets that are also used in the original paper, we adopt the optimal set of hyperparameters found by the authors. In this method, the user has to choose the number of important features \(s\) in advance, before training the model. We fix this parameter to \(s=\lceil D/2\rceil\), as used in their paper for some datasets. We fix all the other hyperparameters to the default values provided by the authors.
We perform a 5-fold cross-validation to identify the best set of hyperparameters for the models. Once the best set of hyperparameters is determined, we conduct another 5-fold cross-validation using 20 different seeds to obtain statistically significant results. For the Delhi Weather dataset we used a 5-fold cross-validation on a rolling basis because it is a time series problem. For the FIDL model, we were able to run the cross-validation procedure with respect to its best set of hyperparameters varying only the random seed, due to the severe time and memory complexity of the model. For the regression problems we use the root mean squared error (RMSE), while for the classification problems we use the accuracy as the metric to evaluate the models. For the SVM, MLP, RF, and GB we used the Python package [54].
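A minimal sketch of this two-step protocol (with a Ridge regressor as a stand-in estimator and a one-dimensional grid, purely for illustration) is:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

# Synthetic placeholder data for the illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

def cv_rmse(alpha, seed):
    # Mean RMSE over one 5-fold cross-validation split
    scores = []
    for tr, te in KFold(5, shuffle=True, random_state=seed).split(X):
        pred = Ridge(alpha=alpha).fit(X[tr], y[tr]).predict(X[te])
        scores.append(np.sqrt(np.mean((y[te] - pred) ** 2)))
    return np.mean(scores)

grid = [1e-3, 1e-2, 1e-1, 1.0, 10.0]                 # hyperparameter grid
best = min(grid, key=lambda a: cv_rmse(a, seed=0))   # step 1: grid search via 5-fold CV
final = [cv_rmse(best, seed=s) for s in range(20)]   # step 2: 5-fold CV over 20 seeds
print(best, np.mean(final), np.std(final))
```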
### Numerical Results
#### 4.3.1 Evaluation of the Predictive Performance
Tab. 2 presents a summary of our numerical results, showing the mean from the cross-validation procedure for each model. Notably, the GRBF-NN demonstrates strong competitiveness compared to the other models. In twelve out of twenty datasets, the GRBF-NN shows the best performance. The SVM and RF are the best models on three datasets each, the MLP and the FIDL on two, the GB on one, and the DFS on none of them. Regarding the comparison between GRBF-NN\({}_{c}\) and GRBF-NN\({}_{k}\), the numerical results indicate that there is not a clear winner: both strategies for selecting the centers are equally competitive, and the user might consider trying both of them. In some datasets, the optimizer used for the training process of FIDL did not seem to converge to a decent stationary point, and the resulting performance in those datasets is not reported.
In the classification tasks, the GRBF-NN achieves the best accuracy on the Digits, Iris, Breast Cancer, and Wine datasets. Furthermore, in all those cases the best results are obtained using an unsupervised selection of the centers, namely the GRBF-NN\({}_{k}\) model. The SVM shows the best performance on the Credit-g and the Vowel datasets, while the RF model achieves the highest accuracy on the Blood and the Australian datasets, where in the latter the MLP achieves a comparable performance. The GB and FIDL achieve the best performance on the Glass and the Heart Disease datasets, respectively. The DFS model does not outperform the other models in any of the classification tasks.
In regression tasks, the GRBF-NN demonstrates even stronger competitiveness, with a lower RMSE than the other models in eight out of ten datasets. In contrast to the classification cases, here the supervised selection of the centers, namely the GRBF-NN\({}_{c}\), seems to be more effective than the GRBF-NN\({}_{k}\). The SVM, RF, and FIDL are each best performing in only one case: the Prostatic Cancer, Diabetes, and Cloud datasets, respectively. The GB and the DFS do not outperform in any of the regression tasks considered.
The strip plots shown in Fig. 5 and in Fig. 6 illustrate the data obtained from our experiment. Notably, the RF and GB exhibit signs of overfitting in some datasets, as they perform exceptionally well on the training data yet fail to maintain the same level of performance on the estimated test error/accuracy. This discrepancy is particularly noticeable in datasets such as Breast Cancer, Credit-g, Australian, Diabetes, Prostatic Cancer, Liver, and Boston Housing, where there is a significant disparity between the estimated training and test error/accuracy. We can notice also that the GRBF-NN\({}_{c}\) occasionally exhibits a high standard deviation in both training and test metrics. This is due to the presence of outliers, which could signify potential challenges that arose during the training process, such as the optimizer converging to a suboptimal stationary point of the error function described in Eq. 13. This pattern was observed in the Digits, Iris, Wine, and Australian datasets.
We aim to provide further insights into the behavior of the GRBF-NN model by examining the relationship between its two regularizers, \(\lambda_{\mathbf{w}}\) and \(\lambda_{\mathbf{u}}\), through graphical analysis in Fig. 9. For every dataset, we show the results of the hyperparameter search procedure for training and test datasets. We only show the best model between GRBF-NN\({}_{k}\) and the GRBF-NN\({}_{c}\) for each dataset.
For the regression problems, dark colors indicate lower error while for classification problems lighter color indicates higher accuracy. The red frame indicates the best set of regularizers. Interestingly, in many datasets, we observe that the regularizer of the precision matrix \(\lambda_{\mathbf{u}}\) impacts the performance of the GRBF-NN more than the regularizer of the weights \(\lambda_{\mathbf{w}}\). This phenomenon occurs in the Digits, Breast Cancer (see Fig. 7b), Credit-g, Glass, Diabetes, Prostatic Cancer (see Fig. 8b), Cloud, DTMB-5415\({}^{(2)}\) and in the Body Fat dataset. In these cases, we obtain the best combination of regularizers on the test data when \(\lambda_{\mathbf{w}}\) is set to 0. This suggests that the regularization term \(\lambda_{\mathbf{w}}\) has minimal influence on the learning task on those datasets. This can indicate that promoting the 'flatness' of the Gaussian basis function through penalizing the entries of the precision matrix \(\mathbf{M}\) may have a more pronounced regularization and generalization impact on the model than merely penalizing the amplitudes via \(\lambda_{\mathbf{w}}\). Therefore, in situations where conducting a large hyperparameter search is not feasible due to computational constraints, it might be beneficial to prioritize the hyperparameter search solely on \(\lambda_{\mathbf{u}}\).
In section 3.1, we mentioned that valuable insights into the behavior of the GRBF-NN can be obtained by examining the eigenvalues and projecting the problem onto the active subspace defined by the eigenvectors of \(\mathbf{M}\).
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{GRBF-NN\({}_{k}\)} & \multicolumn{2}{c}{GRBF-NN\({}_{c}\)} & \multicolumn{2}{c}{SVM} & \multicolumn{2}{c}{RF} & \multicolumn{2}{c}{GB} & \multicolumn{2}{c}{MLP} & \multicolumn{2}{c}{DFS} & \multicolumn{2}{c}{FIDL} \\ & Training & Test & Training & Test & Training & Test & Training & Test & Training & Test & Training & Test & Training & Test & Training & Test \\ \hline Digits & 1.000 & **0.994** & 0.995 & 0.960 & 1.000 & 0.992 & 1.000 & 0.990 & 1.000 & 0.983 & 1.000 & 0.993 & 1.000 & 0.990 & 0.992 \\ Iris & 0.982 & **0.971** & 0.943 & 0.941 & 0.974 & 0.958 & 0.969 & 0.969 & 0.969 & 0.951 & 0.984 & 0.936 & 0.945 & 0.936 & 0.936 & 0.932 & 0.957 \\ Breast Cancer & 0.987 & **0.978** & 0.988 & 0.976 & 0.977 & 1.000 & 0.971 & 0.980 & 0.966 & 1.000 & 0.945 & 0.936 & 0.976 & 0.929 & 0.974 \\ Wine & 0.996 & **0.985** & 0.991 & 0.979 & 0.992 & 0.983 & 1.000 & 0.971 & 1.000 & 0.980 & 1.000 & 0.976 & 1.000 & 0.984 & 1.000 & 0.961 \\ Australian & 0.991 & 0.861 & 0.853 & 0.847 & 0.884 & 0.858 & 0.924 & **0.868** & 0.998 & 0.865 & 1.000 & **0.868** & 0.818 & 0.821 & 0.883 & 0.851 \\ Credit-g & 0.987 & 0.760 & 0.982 & 0.758 & 0.841 & **0.785** & 0.874 & 0.744 & 1.000 & 0.700 & 0.900 & 0.906 & 0.764 & 0.778 & 0.757 & 0.793 & 0.747 \\ Glass & 0.992 & 0.657 & 0.960 & 0.686 & 0.857 & 0.865 & 0.978 & 0.714 & 1.000 & **0.780** & 1.000 & 0.748 & 0.947 & 0.710 & 0.699 & 0.830 \\ Blood & 0.791 & 0.781 & 0.794 & 0.782 & 0.829 & 0.784 & 0.801 & 0.792 & 0.818 & 0.792 & 0.810 & 0.855 & 0.771 & 0.789 & 0.777 & 0.762 & 0.762 \\ Heart Disease & 0.849 & 0.824 & 0.866 & 0.814 & 0.943 & 0.802 & 0.872 & 0.862 & 0.916 & 0.834 & 0.951 & 0.816 & 0.891 & 0.826 & 0.875 & **0.844** \\ Vowel & 0.995 & 0.959 & 0.969 & 0.960 & 0.959 & **0.992** & 1.000 & 0.968 & 1.000 & 0.967 & 1.000 & 0.967 & 0.965 & 0.953 & - & - \\ \hline Delhi Weather & 0.115 & 0.111 & 0.088 & **0.107** & 0.109 & 1.018 & 0.071 & 0.190 & 0.062 & 0.162 & 0.166 & 0.110 & 0.541 & 0.371 & - & - \\ Boston Housing & 0.224 & **0.344** & 0.217 & 0.361 & 0.181 & 0.361 & 0.211 & 0.359 & 0.135 & 0.364 & 0.042 & **0.344** & 0.280 & 0.388 & - & - \\ Diabetes & 0.103 & 0.718 & 0.609 & **0.708** & 0.630 & 0.711 & 0.655 & **0.706** & 0.662 & 0.745 & 0.400 & 0.775 & 0.641 & 0.711 & - & - \\ Prostatic Cancer & 0.581 & 0.646 & 0.581 & 0.646 & 0.570 & **0.625** & 0.956 & 0.662 & 0.610 & 0.715 & 0.349 & 0.718 & 0.440 & 0.697 & 0.408 & 0.644 \\ Liver & 0.834 & **0.896** & 0.853 & 0.902 & 0.825 & 0.913 & 0.857 & 0.911 & 0.770 & 0.923 & 0.781 & 0.931 & 0.846 & 0.934 & - & - \\ Plasma & 0.958 & 0.991 & 0.914 & **0.982** & 0.957 & 0.967 & 0.850 & 0.980 & 0.989 & 0.984 & 1.001 & 1.003 & 0.917 & 1.070 & - & - \\ Cloud & 0.940 & 0.451 & 0.322 & 0.365 & 0.344 & 0.373 & 0.378 & 0.417 & 0.418 & 0.521 & 0.191 & 0.433 & 0.615 & 0.138 & **0.206** \\ DTMB-5415\({}^{(1)}\) & 0.513 & **0.772** & 0.103 & 0.781 & 0.006 & 0.886 & 0.000 & 0.816 & 0.413 & 1.014 & 0.054 & 0.919 & 0.777 & 0.808 & 0.890 & 0.932 \\ DTMB-5415\({}^{(2)}\) & 0.031 & 0.112 & 0.020 & **0.071** & 0.068 & 0.220 & 0.062 & 0.28 & 0.342 & 0.902 & 0.060 & 0.803 & 0.048 & 0.300 & 0.073 & 0.077 \\ Body Fat & 0.090 & 0.132 & 0.086 & **0.127** & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean training and test performance for each model from the cross-validation procedure (accuracy for the classification datasets, RMSE for the regression datasets); best test results are in bold.
the eigenvectors of \(\mathbf{M}\). This enables us to visualize the function the GRBF-NN is attempting to model (for example, in \(2D\)) as demonstrated in Fig. 12, offering users an additional approach to comprehending the representation learned by our model.
The first row of the plots shows the reduced representation of the original problem in the active subspace, or latent space, \(\mathbb{Z}\subset\mathbb{R}^{K}\), once the training of the GRBF-NN has been completed. In particular, we project the dataset \(\mathbf{X}\) along the first two eigenvectors, relative to the largest eigenvalues of the matrix \(\mathbf{M}\), for visualization in \(2D\). We calculate the function value of the GRBF-NN simply by performing the inverse map of the latent variables \(\mathbf{z}\) into the input space.
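A minimal sketch of this projection, assuming the learned precision matrix \(\mathbf{M}\) is available as a symmetric positive-semidefinite NumPy array (all variable names here are ours):

```python
import numpy as np

def active_subspace(M, X, k=2):
    """Project data onto the top-k eigenvectors of the learned
    precision matrix M (assumed symmetric positive semidefinite)."""
    eigvals, eigvecs = np.linalg.eigh(M)      # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]         # sort descending
    gamma, V = eigvals[order], eigvecs[:, order]
    Z = X @ V[:, :k]                          # latent coordinates z
    decay = gamma / gamma.sum()               # fractions gamma_i / sum_k gamma_k
    return Z, decay

# Toy example: a precision matrix with one dominant direction.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
M = A @ A.T + np.diag([10.0, 0, 0, 0, 0])     # hypothetical learned M
X = rng.normal(size=(200, 5))
Z, decay = active_subspace(M, X)
print(Z.shape, np.round(decay, 3))
```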
The second row of figures shows the fractions \(\frac{\gamma_{i}}{\sum_{k=1}^{K}\gamma_{k}}\), sorted in descending order, illustrating the eigenvalue decay. It is worth recalling that the eigenvalue \(\gamma_{k}\) of the matrix \(\mathbf{M}\) represents the second derivative of the argument of the Gaussian basis function, after rotating it into the latent space, along the corresponding principal axis \(\mathbf{v}_{k}\). Therefore, eigenvalues of very low magnitude indicate a lack of variability of our model in those principal directions, suggesting that the true underlying factors of variation of the original problem may develop in a lower-dimensional space.

Figure 6: Strip plots of the numerical results reported in Tab. 2 for the last ten datasets (regression). Blue and red denote the training and test sets, respectively; RMSE is on the y-axis.
For example, for the Digits, Iris, and Breast Cancer datasets, the embedding shows that the function \(f\) provides remarkable discriminative power, with most of the variability captured in just one dimension, namely along the latent variable \(z_{1}\). This is confirmed by the corresponding eigenvalue decays, which show that the first eigenvalue \(\gamma_{1}\) is the only one significantly different from zero. In general, for almost linearly separable classification problems, it is likely that the GRBF-NN detects a one-dimensional active subspace, as is also the case for the Heart Disease dataset. In the case of the Wine dataset, instead (see Fig. 10), our model identifies an active subspace of dimension \(K=2\). This indicates that, in order to achieve high accuracy in discriminating among classes, the function \(f\) mainly varies along two dimensions. This is further supported by the fact that the first two eigenvalues account for almost the entire model variability.
Figure 8: Graphical interpretation of the sensitivity analysis with respect to the two regularizers \(\lambda_{\mathbf{w}}\) and \(\lambda_{\mathbf{u}}\) on the Prostate Cancer dataset (regression). The red frame highlights the best combination of hyperparameters. Darker color indicates lower error.
Figure 7: Graphical interpretation of the sensitivity analysis with respect to the two regularizers \(\lambda_{\mathbf{w}}\) and \(\lambda_{\mathbf{u}}\) on the Breast Cancer dataset (binary classification). The red frame highlights the best combination of hyperparameters. Lighter color indicates higher accuracy.
Figure 9: GRBF-NN’s behavior with respect to regularization: dark color for lower RMSE (regression) and lighter for higher accuracy (classification). The red frame indicates the best regularization combination.
In the regression cases, we can observe, for example, that the GRBF-NN model identifies an active subspace of dimension \(K=1\) on many datasets, such as the Prostate Cancer, Plasma, Cloud, DTMB-5415\({}^{(1)}\) and DTMB-5415\({}^{(2)}\) datasets. For the remaining regression problems, the form of \(f\) is more complex, with its variation occurring in more than two dimensions, as can be seen for the Liver dataset in Fig. 11.
#### 4.3.2 Evaluation of the Feature Importance Ranking
In addition to analyzing the predictive performance and performing a supervised dimensionality reduction in the active subspace for visualization, we can also obtain information about the importance of the input features \(\mathbf{x}\). This provides additional insights into the model behavior and enables the user to perform feature selection. For the purpose of evaluating the significance of the feature importance ranking, we consider only the methods that expose per-feature importances. This means that SVM and MLP will not be taken into account in this part of the benchmark, since they do not provide information on the importance of each input feature.
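The importances are derived from the learned matrix \(\mathbf{M}\); the exact formula is not reproduced here, but one natural variant (our assumption, not necessarily the paper's definition) weights each feature's squared loadings on the eigenvectors by the corresponding eigenvalues, which for a PSD \(\mathbf{M}\) reduces to its diagonal:

```python
import numpy as np

def feature_importance_from_M(M, normalize=True):
    """Hypothetical importance score: eigenvalue-weighted squared
    loadings of each input feature; equals diag(M) when M is PSD."""
    gamma, V = np.linalg.eigh(M)
    imp = (V ** 2) @ np.abs(gamma)     # sum_k gamma_k * v_{jk}^2 per feature j
    if normalize:
        imp = imp / imp.sum()
    return imp

M = np.diag([4.0, 1.0, 0.25, 0.0, 0.0])   # toy precision matrix
print(feature_importance_from_M(M))       # the first feature dominates
```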
Some of the datasets used to evaluate the predictive performance of the models can also be used to evaluate the quality of the feature importance ranking obtained once the model training has terminated, such as the Digits and DTMB-5415\({}^{(2)}\) datasets.
Figure 11: Graphical interpretation of the active subspace in two dimensions (a) and corresponding eigenvalues decay (b) for the Liver dataset (regression). Function values are normalized between zero and one.
Figure 10: Graphical interpretation of the active subspace in two dimensions (a) and corresponding eigenvalues decay (b) for the Wine dataset (multiclass classification). Function values are normalized between zero and one.
Figure 12: Graphical interpretation of the active subspace in two dimensions (contour plots) and the corresponding eigenvalue decay. Function values are normalized.
We train all the models on the whole dataset, using the best sets of hyperparameters obtained from the cross-validation procedure performed previously.
We can easily inspect the feature importance obtained on the Digits dataset, which is composed only of the digits '8' and '3'. The means of these two classes are visible in Fig. 13(a) and Fig. 13(b), where each feature corresponds to the greyscale intensity of a particular pixel. This makes the feature importance identified by the models easy to interpret, because the important features should highlight where the digits '8' and '3' differ the most. In Fig. 13 we can inspect the feature importance detected by the models. The GRBF-NN employs an unsupervised center selection method for this dataset, as evidenced by the superior performance in Tab. 2 compared to the model with supervised center selection. Meaningful feature importance is observable for the GRBF-NN, RF, and GB models. The GRBF-NN (Fig. 13(c)), similarly to the RF (Fig. 13(d)), enhances the pixels where the two classes differ, while the GB (Fig. 13(e)) provides a sparser representation, since almost all the importance is concentrated in a single pixel. From Fig. 13(f) and Fig. 13(g), it seems that the two feature importance learning methods for deep learning fail to provide explainable feature importances.
We provide a similar validation analysis for the DTMB-5415\({}^{(2)}\) dataset. Based on the results presented in Tab. 2, the GRBF-NN model achieved better performance on this dataset using the supervised selection of the centers compared to the unsupervised selection. In this case, we can estimate an approximate ground truth for the feature importance of the true function by computing its gradients at specific input points \(\mathbf{x}\). To do this, we employ a finite-difference process, evaluating gradients at 84 different points. The estimated ground truth is obtained by averaging the absolute values of the gradient vectors at these 84 points. In Fig. 14, we show the approximated ground truth in blue in each of the bar plots. It should be noted that the importance of the last five features is zero, as these are not related to the true function, as explained in section 4.1. The feature importances obtained from the models, shown in green, should be able to detect this. In this case, only the GRBF-NN and the GB are able to recognize that the last five features are not related to the true function \(y\), while the RF (Fig. 14(b)), DFS (Fig. 14(d)) and FIDL (Fig. 14(e)) fail to identify that. To summarize the results, Fig. 15 shows the bar plot of the mean squared error between the approximated ground-truth feature importance and the one estimated by each method, showing that, in this case, the GRBF-NN obtains the lowest error in detecting the underlying important variables of the regression task.
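The finite-difference ground truth described above is straightforward to reproduce. A small sketch on a toy function follows; the 84 evaluation points match the text, while the function, the sampling range, and the example estimate are our own illustrative choices:

```python
import numpy as np

def fd_importance(f, points, eps=1e-4):
    """Approximate ground-truth feature importance: mean absolute
    central-difference gradient of f over a set of input points."""
    points = np.atleast_2d(points)
    n, d = points.shape
    grads = np.zeros((n, d))
    for j in range(d):
        e = np.zeros(d); e[j] = eps
        grads[:, j] = (np.apply_along_axis(f, 1, points + e)
                       - np.apply_along_axis(f, 1, points - e)) / (2 * eps)
    return np.abs(grads).mean(axis=0)

# Toy true function: only the first two of five inputs matter.
f = lambda x: np.sin(x[0]) + 2.0 * x[1]
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(84, 5))       # 84 evaluation points, as in the text
gt = fd_importance(f, pts)
est = np.array([0.8, 2.1, 0.05, 0.0, 0.1])   # e.g., a model's importance estimate
print(gt.round(3), "MSE:", np.mean((gt - est) ** 2).round(4))
```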
We test the same models on the other synthetic datasets presented at the end of section 4.1, designed specifically to evaluate feature selection and feature importance ranking methods.
Figure 13: Graphical interpretation of the feature importance for the Digits dataset for all the models considered in this experiment. The feature importance should highlight the pixels where (a) and (b) differ.
Tab. 3 summarizes the numerical results obtained with the same cross-validation procedure as in the previous numerical experiments.
Problem P1 is a binary classification problem; the GB shows the best performance for \(N=100\) and, together with FIDL, for \(N=500\), while for \(N=1000\) the GRBF-NN\({}_{k}\) obtains the highest accuracy.
Figure 14: Graphical interpretation of the feature importance for the DTMB-5415\({}^{(2)}\) dataset for all the models considered in this experiment. Blue bars represent the approximated ground-truth feature importance; green bars represent the importance estimated by the models.
Figure 15: Bars represent the MSE (\(y\)-axis) between the approximated ground truth feature importance and the one estimated from the models (\(x\)-axis).
In Tab. 4, we show the feature importance related to problem P1. We discuss GRBF-NN\({}_{c}\) for \(N=100\), while for \(N=500\) and \(N=1000\) we show GRBF-NN\({}_{k}\); in all cases we rename the model GRBF-NN. For \(N=100\), the GRBF-NN provides meaningless feature importance, due also to its lack of predictive performance in this case, while the GB and FIDL are the only models to recognize that only the first four features are related to the output \(y\). For \(N=500\) and \(N=1000\), the feature importance from the GRBF-NN improves substantially, together with its predictive performance. The DFS, even though it provides competitive accuracy compared with the other methods, has some difficulty in separating the importance of the first four features from the remaining ones, especially for \(N=500\). As further support, we can examine the eigenvalue decay, which can help us detect whether the degrees of freedom of the variation of \(f\) approximately match the correct number of underlying factors of variation of the data. In Fig. 16, we show the eigenvalue decay for problem P1 for the values of \(N\) considered. It is possible to notice that for \(N=1000\) and \(N=500\) the GRBF-NN varies mainly along four components, which is also the number of independent important features in P1, while for \(N=100\) there is no clear identification of those factors within the latent space/active subspace, due to the low predictive performance of the model.
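A small helper one might use to read such a decay plot programmatically; the thresholding rule is our own choice, not the paper's:

```python
import numpy as np

def effective_dimension(gamma, tol=0.01):
    """Count eigenvalues whose normalized magnitude exceeds tol:
    a crude estimate of how many directions f actually varies along."""
    frac = np.abs(gamma) / np.abs(gamma).sum()
    return int((np.sort(frac)[::-1] > tol).sum())

# Toy decay with four significant eigenvalues, mimicking P1 (N=1000).
print(effective_dimension(np.array([5.2, 3.1, 1.4, 0.9, 1e-5, 1e-6])))  # -> 4
```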
Problem P2 is a difficult multiclass classification problem; FIDL shows the best performance for \(N=100\), while for \(N=500\) and \(N=1000\) the GRBF-NN\({}_{c}\) obtains the highest accuracy.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{GRBF-NN\({}_{k}\)} & \multicolumn{2}{c}{GRBF-NN\({}_{c}\)} & \multicolumn{2}{c}{RF} & \multicolumn{2}{c}{GB} & \multicolumn{2}{c}{DFS} & \multicolumn{2}{c}{FIDL} \\ & \(N\) & Training & Test & Training & Test & Training & Test & Training & Test & Training & Test & Training & Test \\ \hline & 100 & 0.980 & 0.609 & 1.000 & 0.652 & 0.992 & 0.672 & 1.000 & **0.836** & 1.000 & 0.762 & 0.992 & 0.820 \\ P1 & 500 & 0.958 & 0.898 & 0.981 & 0.892 & 1.000 & 0.882 & 1.000 & **0.914** & 1.000 & 0.905 & 0.942 & **0.914** \\ & 1000 & 0.962 & **0.931** & 0.973 & 0.925 & 1.000 & 0.901 & 1.000 & 0.921 & 0.991 & 0.897 & 0.934 & 0.918 \\ \hline & 100 & 0.235 & 0.310 & 0.250 & 0.249 & 1.000 & 0.290 & 1.000 & 0.273 & 0.973 & 0.338 & 0.762 & **0.340** \\ P2 & 500 & 0.710 & 0.428 & 0.706 & **0.508** & 1.000 & 0.486 & 1.000 & 0.465 & 0.898 & 0.497 & 0.568 & 0.496 \\ & 1000 & 0.643 & 0.538 & 0.645 & **0.551** & 0.871 & 0.546 & 1.000 & 0.492 & 0.633 & 0.537 & 0.497 & 0.455 \\ \hline & 100 & 0.499 & 0.570 & 0.497 & 0.570 & 0.236 & 0.615 & 0.000 & 0.520 & 0.157 & **0.447** & - & - \\ P3 & 500 & 0.232 & 0.283 & 0.184 & **0.255** & 0.163 & 0.440 & 0.105 & 0.324 & 0.190 & 0.289 & - & - \\ & 1000 & 0.258 & 0.284 & 0.198 & **0.221** & 0.142 & 0.387 & 0.094 & 0.270 & 0.196 & 0.240 & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 3: Numerical results summary for the three synthetic problems. Problem 1 (P1) and Problem 2 (P2) are binary and multiclass classification tasks, respectively, and the numbers represent accuracy values. Problem 3 (P3) is a regression task, and the numbers represent RMSE values. Bold numbers indicate the best performance on the test datasets.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & GRBF-NN & RF & GB & DFS & FIDL \\ \hline P1 (\(N=100\)) & & & & & \\ P1 (\(N=500\)) & & & & & \\ P1 (\(N=1000\)) & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Summary of the models' feature importance obtained on problem P1. Note that the first four features are relevant in P1.
In Tab. 5, we show the feature importance related to problem P2. We discuss GRBF-NN\({}_{k}\) for \(N=100\), while for \(N=500\) and \(N=1000\) we show GRBF-NN\({}_{c}\); both are renamed GRBF-NN. Similarly to the previous case, for \(N=100\) the GRBF-NN and RF have some difficulty detecting that the first three features are the most important, while FIDL provides the best feature ranking. As in problem P1, the feature importance from the GRBF-NN improves significantly for \(N=500\) and \(N=1000\), along with its predictive performance. The GRBF-NN and FIDL provide the most meaningful feature importance rankings in these cases, while the other methods fail to provide a clear separation between important and non-important variables. In Fig. 17, we show the eigenvalue decay for problem P2 for the values of \(N\) considered. For \(N=1000\), the GRBF-NN varies mainly along three components, which is also the number of independent important features in P2; this behavior is less visible, but still present, for \(N=100\) and \(N=500\).
Problem P3 is a nonlinear regression problem, with DFS showing the best performance for \(N=100\), while for \(N=500\) and \(N=1000\) the GRBF-NN\({}_{c}\) obtains the lowest RMSE. In Tab. 6, we analyze the feature importance related to problem P3. We discuss GRBF-NN\({}_{c}\) for \(N=100\), \(N=500\) and \(N=1000\), renamed GRBF-NN. For \(N=100\), it seems that all the models recognize that only the first five features are important in problem P3. Also for \(N=500\) and \(N=1000\), the GRBF-NN, RF, GB, and FIDL correctly ignore the contribution of the last five features, differently from DFS. Interestingly, for \(N=500\) and \(N=1000\), the feature importance from the GRBF-NN differs from that of all the other models, which recognize the feature \(x_{4}\) as the most important only for \(N=100\).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & GRBF-NN & RF & GB & DFS & FIDL \\ \hline P2 (\(N=100\)) & & & & & \\ P2 (\(N=500\)) & & & & & \\ P2 (\(N=1000\)) & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Summary of the models' feature importance obtained on problem P2. Note that the first three features are relevant in P2.
Figure 16: Eigenvalues decay for problem P1.
Fig. 18 shows the eigenvalue decay for problem P3 for the values of \(N\) considered. For \(N=100\), the model does not recognize that P3 varies along five features. For \(N=500\) and \(N=1000\), the first five eigenvalues are significantly different from zero, as expected.
To summarize, the GRBF-NN model shows the best performance on six of the nine synthetic settings with respect to the other models, while providing meaningful and interpretable feature importance.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & GRBF-NN & RF & GB & DFS & FIDL \\ \hline P3 (\(N=100\)) & & & & & \\ P3 (\(N=500\)) & & & & & \\ P3 (\(N=1000\)) & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 6: Summary of the models' feature importance obtained on problem P3. Note that the first five features are relevant in P3.
Figure 17: Eigenvalues decay for problem P2.
## 5 Conclusion and Future Work
In this paper, we proposed modifications to the classical RBF-NN model to enhance its interpretability and uncover the underlying factors of variation, known as the active subspace. Our approach involves incorporating a learnable precision matrix into the Gaussian kernel, allowing us to extract latent information about the prediction task from its eigenvectors and eigenvalues.
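For concreteness, a minimal sketch of such a layer is given below. The parameterization \(\mathbf{M}=\mathbf{U}^{T}\mathbf{U}\) (to keep \(\mathbf{M}\) positive semidefinite), the single precision matrix shared across centers, and all names are our assumptions, not necessarily the paper's exact formulation.

```python
import torch

class GaussianRBFLayer(torch.nn.Module):
    """Sketch of an RBF layer with a learnable precision matrix M."""
    def __init__(self, n_centers, dim):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(n_centers, dim))
        self.U = torch.nn.Parameter(torch.eye(dim))    # M = U^T U stays PSD
        self.w = torch.nn.Parameter(torch.zeros(n_centers))

    def forward(self, x):
        M = self.U.T @ self.U                           # shared precision matrix
        d = x[:, None, :] - self.centers[None]          # (B, C, D) differences
        q = torch.einsum("bcd,de,bce->bc", d, M, d)     # (x-c)^T M (x-c)
        return torch.exp(-q) @ self.w                   # weighted sum of bases

layer = GaussianRBFLayer(n_centers=10, dim=5)
print(layer(torch.randn(4, 5)).shape)   # torch.Size([4])
```

Regularizing the entries of \(\mathbf{U}\) (via \(\lambda_{\mathbf{u}}\)) and the amplitudes \(\mathbf{w}\) (via \(\lambda_{\mathbf{w}}\)) then recovers the two penalty terms discussed in the experiments.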
Our extensive numerical experiments covered regression, classification, and feature selection tasks, where we compared our proposed model with widely used methods such as SVM, RF, GB, and state-of-the-art deep-learning-based embedding methods. The results demonstrated that our GRBF-NN model achieves attractive prediction performance while providing meaningful feature importance rankings. One of the key observations from our experiments was the impact of the regularizer \(\lambda_{\mathbf{u}}\) on the performance of the GRBF-NN, which often outweighs the effect of the weight regularizer \(\lambda_{\mathbf{w}}\). This finding suggests that prioritizing the regularization of the precision matrix yields more significant improvements in the model's generalization performance.
By combining predictive power with interpretability, the GRBF-NN offers a valuable tool for understanding complex nonlinear relationships in the data. Moreover, the model enables supervised dimensionality reduction, facilitating visualization and comprehension of complex phenomena. Overall, our work contributes to bridging the gap between black-box neural network models and interpretable machine learning, enabling users to not only make accurate predictions but also gain meaningful insights from the model behavior and improve decision-making processes in real-world applications.
Looking ahead, we plan to apply our model to tackle the so-called curse of dimensionality in expensive engineering optimization problems. By leveraging the active subspace estimation, we aim to reduce the dimensionality of optimization problems without relying on direct gradient computations as in the classical ASM, which are undesirable in noisy-gradient scenarios.
|
2304.01669 | Re-thinking Model Inversion Attacks Against Deep Neural Networks | Model inversion (MI) attacks aim to infer and reconstruct private training
data by abusing access to a model. MI attacks have raised concerns about the
leaking of sensitive information (e.g. private face images used in training a
face recognition system). Recently, several algorithms for MI have been
proposed to improve the attack performance. In this work, we revisit MI, study
two fundamental issues pertaining to all state-of-the-art (SOTA) MI algorithms,
and propose solutions to these issues which lead to a significant boost in
attack performance for all SOTA MI. In particular, our contributions are
two-fold: 1) We analyze the optimization objective of SOTA MI algorithms, argue
that the objective is sub-optimal for achieving MI, and propose an improved
optimization objective that boosts attack performance significantly. 2) We
analyze "MI overfitting", show that it would prevent reconstructed images from
learning semantics of training data, and propose a novel "model augmentation"
idea to overcome this issue. Our proposed solutions are simple and improve all
SOTA MI attack accuracy significantly. E.g., in the standard CelebA benchmark,
our solutions improve accuracy by 11.8% and achieve for the first time over 90%
attack accuracy. Our findings demonstrate that there is a clear risk of leaking
sensitive information from deep learning models. We urge serious consideration
to be given to the privacy implications. Our code, demo, and models are
available at
https://ngoc-nguyen-0.github.io/re-thinking_model_inversion_attacks/ | Ngoc-Bao Nguyen, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, Ngai-Man Cheung | 2023-04-04T09:58:07Z | http://arxiv.org/abs/2304.01669v2 | # Re-thinking Model Inversion Attacks Against Deep Neural Networks
###### Abstract
Model inversion (MI) attacks aim to infer and reconstruct private training data by abusing access to a model. MI attacks have raised concerns about the leaking of sensitive information (e.g. private face images used in training a face recognition system). Recently, several algorithms for MI have been proposed to improve the attack performance. In this work, we revisit MI, study two fundamental issues **pertaining to all state-of-the-art (SOTA) MI algorithms**, and propose solutions to these issues which lead to a significant boost in attack performance for all SOTA MI. In particular, our contributions are two-fold: 1) We analyze the optimization objective of SOTA MI algorithms, argue that the objective is sub-optimal for achieving MI, and propose an improved optimization objective that boosts attack performance significantly. 2) We analyze "MI overfitting", show that it would prevent reconstructed images from learning semantics of training data, and propose a novel "model augmentation" idea to overcome this issue. Our proposed solutions are simple and improve all SOTA MI attack accuracy significantly. E.g., in the standard CelebA benchmark, our solutions improve accuracy by **11.8%** and achieve for the first time over 90% attack accuracy. **Our findings demonstrate that there is a clear risk of leaking sensitive information from deep learning models**. We urge serious consideration to be given to the privacy implications. Our code, demo, and models are available at [https://ngoc-nguyen-0.github.io/re-thinking_model_inversion_attacks/](https://ngoc-nguyen-0.github.io/re-thinking_model_inversion_attacks/).
## 1 Introduction
Privacy of deep neural networks (DNNs) has attracted considerable attention recently [2, 3, 31, 41, 42]. Today, DNNs are being applied in many domains involving private and sensitive datasets, e.g., healthcare and security. There is a growing concern about privacy attacks that aim to gain knowledge of the confidential datasets used in training DNNs. One important category of privacy attacks is Model Inversion (MI) [7, 10, 13, 14, 22, 47, 49, 52, 53] (Fig. 1). Given access to a model, MI attacks aim to infer and reconstruct features of the private dataset used in the training of the model. For example, a malicious user may attack a face recognition system to reconstruct sensitive face images used in training. Similar to previous work [7, 47, 52], we will use face recognition models as the running example.
**Related Work.** MI attacks were first introduced in [14], where simple linear regression is the target of attack. Recently, there has been a fair amount of interest in extending MI to complex DNNs. Most of these attacks [7, 47, 52] focus on the _whitebox_ setting, in which the attacker is assumed to have complete knowledge of the model subject to attack. As many platforms provide downloading of entire trained DNNs to users [7, 52], whitebox attacks are important. [52] proposes the Generative Model Inversion (GMI) attack, where generic public information is leveraged to learn a distributional prior via generative adversarial networks (GANs) [15, 45], and this prior is used to guide the reconstruction of private training samples. [7] proposes Knowledge-Enriched Distributional Model Inversion (KEDMI), where an inversion-specific GAN is trained by leveraging knowledge provided by the target model. [47] proposes Variational Model Inversion (VMI), where a probabilistic interpretation of MI leads to a variational objective for the attack. KEDMI and VMI achieve SOTA attack performance (see Supplementary E for further discussion of related work).
**In this paper**, we revisit SOTA MI, study two issues pertaining to all SOTA MI and propose solutions to these issues that are complementary and applicable to all SOTA MI (Fig. 1). In particular, despite the range of approaches proposed in recent works, common and central to all these approaches is an _inversion step_ which formulates reconstruction of training samples as an optimization. The optimization objective in the inversion step involves the _identity loss_, which is the _same_ for all SOTA MI and is formulated as the negative log-likelihood for the reconstructed samples under the model being attacked. While ideas have been proposed to advance other aspects of MI, _effective design of the identity loss has not been studied_.
To address this research gap, our work studies subtleties of identity loss in all SOTA MI, analyzes the issues and proposes improvements that boost the performance of all SOTA significantly. In summary, our contributions are as follows:
* We analyze existing identity loss, argue that it could be sub-optimal for MI, and propose an improved identity loss that aligns better with the goal of MI (Fig. 1 2).
* We formalize the concept of _MI overfitting_, analyze its impact on MI and propose a novel solution based on _model augmentation_. Our idea is inspired by the conventional issue of overfitting in model training and data augmentation as a solution to alleviate the issue (Fig. 1 3).
* We conduct extensive experiments to demonstrate that our solutions can improve SOTA MI algorithms (GMI [52], KEDMI [7], VMI [47]) significantly. Our solutions achieve for the first time over 90% attack accuracy under standard CelebA benchmark (Fig. 1 4).
Our work sounds alarm over the rising threats of MI attacks, and urges more attention on measures against the leaking of private information from DNNs.
## 2 General Framework of SOTA MI Attacks
**Problem Setup.** In MI, an attacker abuses access to a model \(M\) trained on a private dataset \(\mathcal{D}_{priv}\). The attacker can access \(M\), but \(\mathcal{D}_{priv}\) is not intended to be shared. The goal of MI is to infer information about private samples in \(\mathcal{D}_{priv}\). In existing work, for the desired class (label) \(y\)
Figure 1: _Overview and our contributions._**1** We consider the problem of the Model Inversion (MI) attack to reconstruct private training data based on model parameters. Our work makes two foundational contributions to MI attacks. 2 First, we analyse the optimization objective of existing SOTA MI algorithms and show that they are sub-optimal. Further, we propose an improved optimization objective that boosts MI attack performance significantly (Sec 3.1). 3 Second, we formalize the concept of “MI overfitting” showing that it prevents reconstructed images from learning identity semantics of training data. Further, we propose a novel “model augmentation” idea to overcome this issue (Sec 3.2). 4 Our proposed method significantly boosts MI attack accuracy. _E.g._ In the standard CelebA benchmark, our method boosts attack accuracy by _11.8%, achieving above 90% attack accuracy for the first time in contemporary MI literature_.
MI is formulated as the reconstruction of an input \(\mathbf{x}\) that is most likely classified into \(y\) by the model \(M\). For instance, if the problem involves inverting a facial recognition model, then, given the desired identity, MI is formulated as the reconstruction of facial images that are most likely to be recognized as that identity. The model subject to MI attacks is called the _target model_. Following previous works [7, 47, 52], we focus on _whitebox_ MI attacks, where the attacker is assumed to have complete access to the target model. For high-dimensional data such as facial images, this reconstruction problem is ill-posed. Consequently, various SOTA MI methods have recently been proposed to constrain the search space to the manifold of meaningful and relevant images using a GAN: using a GAN trained on some public dataset \(\mathcal{D}_{pub}\) [52], using an inversion-specific GAN [7], or defining variational inference in the latent space of a GAN [47].
Despite the differences among the various SOTA MI methods, common and central to all of them is an _inversion step_ (called _secret revelation_ in [52]), which performs the following optimization:
\[q^{*}(\mathbf{z})=\arg\min_{q(\mathbf{z})}\mathbb{E}_{\mathbf{z}\sim q( \mathbf{z})}\{L_{id}(\mathbf{z};y,M)+\lambda L_{prior}(\mathbf{z})\} \tag{1}\]
Here \(L_{id}(\mathbf{z};y,M)=-\log\mathbb{P}_{M}(y|G(\mathbf{z}))\) is referred to as the _identity loss_ in MI [52], which guides the reconstruction of \(\mathbf{x}=G(\mathbf{z})\) such that it is most likely to be recognized by model \(M\) as identity \(y\); \(L_{prior}\) is some prior loss, and \(q^{*}(\mathbf{z})\) is the optimal distribution of the latent code used to generate inverted samples via the GAN (\(\mathbf{x}=G(\mathbf{z})\), \(\mathbf{z}\sim q^{*}(\mathbf{z})\)). Importantly, all SOTA MI methods use the _same_ identity loss \(L_{id}(\mathbf{z};y,M)\), although they make different assumptions about \(q(\mathbf{z})\) and the prior loss \(L_{prior}\) (see Table 1 and the Supplementary for more details on each algorithm). While advances have been obtained by improving \(q(\mathbf{z})\) and \(L_{prior}\), _the design of a more effective \(L_{id}\) has gone unnoticed_ in all SOTA MI algorithms. Therefore, our work instead focuses on \(L_{id}\), analyzes its issues, and proposes an improvement to \(L_{id}\) that leads to a performance boost in all SOTA MI. To simplify notation, we denote \(L_{id}(\mathbf{z};y,M)\) by \(L_{id}(\mathbf{x};y)\) when appropriate, where \(\mathbf{x}=G(\mathbf{z})\) is the reconstructed image.
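To make the inversion step concrete, here is a minimal point-estimate sketch in PyTorch (closest in spirit to GMI). The Gaussian stand-in for \(L_{prior}\), the `latent_dim` attribute, and the hyperparameter values are our assumptions, not the papers' exact settings:

```python
import torch

def invert(G, M, y, steps=2400, lr=0.02, lam=1.0):
    """Optimize a latent code z so that G(z) is classified as the
    target identity y by the target model M (Eqn. 1, point estimate).
    G and M are assumed to be a pretrained generator / classifier."""
    z = torch.randn(1, G.latent_dim, requires_grad=True)  # latent_dim: assumed attr
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        logits = M(G(z))
        l_id = torch.nn.functional.cross_entropy(logits, torch.tensor([y]))
        l_prior = z.pow(2).mean()       # simple Gaussian prior as a stand-in
        loss = l_id + lam * l_prior     # hyperparameter values are illustrative
        opt.zero_grad(); loss.backward(); opt.step()
    return G(z).detach()
```

In GMI the prior term is the discriminator score \(-D(G(\mathbf{z}))\) rather than this Gaussian stand-in, and KEDMI/VMI optimize distribution parameters instead of a single \(\mathbf{z}\).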
## 3 A Closer Look at Model Inversion Attacks
### An Improved Formulation of MI Identity Loss
In this section, we discuss our first contribution and take a closer look at the optimization objective of _identity loss_, \(L_{id}(\mathbf{x};y)\). Existing SOTA MI methods, namely GMI [52], KEDMI [7] and VMI [47] formulate the identity loss as an optimization to minimize the negative log likelihood of an identity under model parameters (cross-entropy loss). Particularly, the \(L_{id}(\mathbf{x};y)\) introduced in Eqn. 1 for an inversion targeting class \(k\) can be re-written as follows:
\[L_{id}(\mathbf{x};y=k)=-\log\frac{\exp(\mathbf{p}^{T}\mathbf{w}_{k})}{\exp( \mathbf{p}^{T}\mathbf{w}_{k})+\sum_{j=1,j\neq k}^{N}\exp(\mathbf{p}^{T} \mathbf{w}_{j})} \tag{2}\]
where \(\mathbf{p}\) refers to the penultimate layer activations [6, 35] for sample \(\mathbf{x}\), and \(\mathbf{w}_{i}\) refers to the last-layer weights for the \(i^{th}\) class\({}^{1}\) in the target model \(M\).
Footnote 1: \(\mathbf{p}\) is concatenated with 1 at the end to include the bias, as \(\mathbf{w}_{i}\) includes the bias at the end.
**Existing identity loss (Eqn. 2) used in SOTA MI methods [7, 47, 52] is sub-optimal for MI** (Fig. 1 2). Although the optimization in Eqn. 2 accurately captures the essence of a classification problem (face recognition), we postulate that such a formulation is sub-optimal for MI. We provide our intuition through the lens of the penultimate layer activations \(\mathbf{p}\) (Fig. 1 2). In a classification setting, the main expectation for \(\mathbf{p}\) is to be sufficiently discriminative for class \(k\) (recognize between 'Peter', 'Simon' and 'David'). This objective can be achieved by maximizing \(\exp(\mathbf{p}^{T}\mathbf{w}_{k})\) and/or minimizing \(\sum_{j=1,j\neq k}^{N}\exp(\mathbf{p}^{T}\mathbf{w}_{j})\) in Eqn. 2. On the contrary, _the goal of MI is to reconstruct training data_. That is, in addition to \(\mathbf{p}\) being sufficiently discriminative for class \(k\), successful inversion also requires \(\mathbf{p}\) to be close to the training data representations for class \(k\), represented by \(\mathbf{w}_{k}\) (an inversion targeting 'Simon' needs to reconstruct a sample close to the private training data of 'Simon'; Fig. 1 2). Specifically, we argue that MI requires much more attention on maximizing \(\exp(\mathbf{p}^{T}\mathbf{w}_{k})\) than on minimizing \(\sum_{j=1,j\neq k}^{N}\exp(\mathbf{p}^{T}\mathbf{w}_{j})\) in Eqn. 2.
Motivated by this hypothesis, we conduct an analysis to investigate the proximity between private training data and reconstructed data in SOTA MI methods using penultimate layer representations [6, 34, 35, 39]. Particularly, our analysis using KEDMI [7] (SOTA) shows several instances where using Eqn. 2 for identity loss is unable to reconstruct data close to the private training data. We show this in Fig. 2 (top row). Consequently, our analysis motivates the search for an improved identity loss focusing on maximizing \(\exp(\mathbf{p}^{T}\mathbf{w}_{k})\) for MI.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Method** & **Latent distribution \(q(\mathbf{z})\)** & **Prior loss \(L_{prior}\)** \\ \hline GMI [52] & Point estimate \(\delta(\mathbf{z}-\mathbf{z}_{0})\) & \(-D(G(\mathbf{z}))\) \\ \hline KEDMI [7] & Gaussian \(\mathcal{N}(\boldsymbol{\mu},\mathbf{\Sigma})\) & \(-\log D(G(\mathbf{z}))\) \\ \hline VMI [47] & Gaussian \(\mathcal{N}(\boldsymbol{\mu},\mathbf{\Sigma})\) _or_ Normalizing Flow [26] & Distance _w.r.t._ GAN prior \(D_{\mathrm{KL}}(q(\mathbf{z})\,\|\,p_{\text{gan}}(\mathbf{z}))\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Categorizing SOTA MI attacks based on their difference in latent code distribution and prior loss. \(p_{\text{gan}}(\mathbf{z})\) is a GAN prior. \(G\) and \(D\) are generator and discriminator of a GAN.
**Logit Maximization as an improved MI identity loss.** In light of our analysis / observations above, we propose to directly maximize the logit, \(\textbf{p}^{T}\textbf{w}_{k}\), instead of maximizing the log likelihood of class \(k\) for MI. Our proposed identity loss objective is shown below:
\[L_{id}^{logit}(\textbf{x};y=k)=-\textbf{p}^{T}\textbf{w}_{k}+\lambda||\textbf{ p}-\textbf{p}_{reg}||_{2}^{2} \tag{3}\]
where \(\lambda(>0)\) is a hyper-parameter and \(\mathbf{p}_{reg}\) is used for regularizing \(\mathbf{p}\). In particular, if the regularization in Eqn. 3 were omitted, and hence \(||\mathbf{p}||\) unbounded, a crude way to minimize Eqn. 3 would be simply to maximize \(||\mathbf{p}||\); hence, we use \(\mathbf{p}_{reg}\) to regularize \(\mathbf{p}\). Given that the attacker has no access to private training data, we estimate \(\mathbf{p}_{reg}\) with a simple method using _public_ data (see Supplementary C.3). We remark that \(\mathbf{p}=M^{\texttt{pen}}(\mathbf{x})\), where \(\mathbf{x}=G(\mathbf{z})\) and the \(M^{\texttt{pen}}(\cdot)\) operator returns the penultimate layer representations for a given input.
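A sketch of \(L_{id}^{logit}\) in PyTorch; the `M_pen` callable (returning penultimate activations) and the tensor shapes are assumptions about the interface, while the loss itself follows Eqn. 3:

```python
import torch

def logit_identity_loss(M_pen, w, x, y, p_reg, lam=1.0):
    """L_id^logit (Eqn. 3): maximize the target logit p^T w_k while
    keeping the penultimate features p close to a reference p_reg.
    w is the last-layer weight matrix with the bias folded in,
    as described in the paper; y is a (B,) tensor of target classes."""
    p = M_pen(x)                          # (B, D) penultimate features
    logit_k = (p * w[y]).sum(dim=1)       # p^T w_k for each sample
    reg = ((p - p_reg) ** 2).sum(dim=1)   # ||p - p_reg||_2^2
    return (-logit_k + lam * reg).mean()
```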
Our analysis shows that our proposed identity loss, \(L_{id}^{logit}\) (Eqn. 3), can significantly improve the reconstruction of private training data compared to the existing identity loss used in SOTA MI algorithms [7, 47, 52]. This can be clearly observed using both the penultimate layer representations and the KNN distances in Fig. 2 (bottom row). Here, KNN Dist refers to the shortest Euclidean feature distance from a reconstructed image to the private training images of a given identity [7, 52]. Our proposed \(L_{id}^{logit}\) can easily be plugged into all existing SOTA MI algorithms by replacing \(L_{id}\) with \(L_{id}^{logit}\) in Eqn. 1 (in the inversion step), with minimal computational overhead.
### Overcoming MI Overfitting in SOTA methods
In this section, we discuss our second contribution. In particular, we formalize the concept of _MI overfitting_, observe its considerable impact even in SOTA MI methods [7, 47, 52], and propose a new, simple solution to overcome this issue (Fig. 1 3). To better discuss our MI overfitting concept, we first review the conventional concept of overfitting in machine learning: given a fixed training dataset and the goal of learning a model, overfitting is conventionally defined as instances in which, during model learning (the training stage), the model fits too closely to the training data and adapts to random variation and noise in the training data, failing to adequately learn the semantics of the training data [1, 38, 44, 50, 54]. As the model lacks the semantics of the training data, it can be observed to perform poorly on unseen data (Fig. 1 3).
Figure 2: Visualization of the penultimate layer representations (\(\mathcal{D}_{priv}\) = CelebA [33], \(\mathcal{D}_{pub}\) = CelebA [33], Target Model = IR152 [7], Evaluation Model = face.evoLve [8], Inversion iterations = 2400) for private training data and reconstructed data using KEDMI [7]. Following the exact evaluation protocol in [7], we use face.evoLve [8] to extract representations. We show results for 3 randomly chosen identities. We include the KNN distance (at different iterations) and the final attack accuracy following the protocol in [7]. For each identity, we also include randomly selected private training data and the closest reconstructed sample at iteration 2400. **1 The identity loss in SOTA MI methods [7, 47, 52] (Eqn. 2) is sub-optimal for MI (Top).** Using penultimate representations during inversion, we observe 2 instances (_e.g._, target identities 57 and 143) where KEDMI [7] (using Eqn. 2 for the identity loss) is unable to reconstruct data close to the private training data. Hence, private and reconstructed facial images are qualitatively different. **2 Our proposed identity loss, \(L_{id}^{logit}\) (Eqn. 3), can effectively guide the reconstruction of data close to the private training data (Bottom).** This can be clearly observed using both penultimate layer representations and KNN distances for all 3 target classes 57, 143 and 252. We show similar results using additional MI algorithms (GMI [52], VMI [47]) and target classifiers (face.evoLve, VGG16) in Supplementary Figures D.2, D.5 and D.8. Best viewed in color.
**Analysis.** In what follows, we discuss our analysis to demonstrate MI overfitting and understand its impact on SOTA methods. See Fig. 3 for the analysis setups and results. In particular, in Fig. 3 1, we show some reconstructed samples that achieve low identity loss under the target model \(M\), yet lack identity semantics. In Fig. 3 2, we show that, for a considerable percentage of reconstructed samples from the target model \(M\) with low identity loss under \(M\), the identity loss under another, unseen model \(M^{\prime}\) is large, as shown in the scatter plot and histograms, hinting that these samples might have suffered from MI overfitting and lack identity semantics. We note that the identity loss under \(M^{\prime}\) is obtained by feeding the reconstructed sample into \(M^{\prime}\) in a forward pass. We also note that the SOTA KEDMI [7] is used in this analysis, but the issue persists in [47, 52].
**Our proposed solution to MI overfitting.** We propose a novel solution based on _model augmentation_. Our idea is inspired by the conventional issue of overfitting in model training, and by data augmentation as a solution to alleviate it. In particular, for conventional overfitting, augmenting the training dataset can alleviate the issue [29]. Therefore, we hypothesize that by augmenting the target model we can alleviate MI overfitting.
Specifically, we propose to apply knowledge distillation (KD) [19], with target model \(M_{t}\) as the teacher, to train augmented models \(M_{aug}^{(i)}\). Importantly, as we do not have access to the private data, during KD, each \(M_{aug}^{(i)}\) is trained on the _public dataset_ to match its output to the output of \(M_{t}\). We select different network architectures for \(M_{aug}^{(i)}\) and they are different from \(M_{t}\) (Detailed discussion in the Supplementary B.1 and B.2). After performing KD, we apply \(M_{aug}^{(i)}\) together with the target model \(M_{t}\) in the inversion step and compute the identity loss (with model augmentation):
\[\begin{split} L_{id}^{aug}(\mathbf{x};y)&=\gamma_{t }\cdot L_{id}(\mathbf{x};y,M_{t})\\ &+\gamma_{aug}\cdot\sum_{i=1}^{N_{aug}}L_{id}(\mathbf{x};y,M_{ aug}^{(i)})\end{split} \tag{4}\]
Here, \(\gamma_{t}\) and \(\gamma_{aug}\) are two hyper-parameters. In particular, we use \(\gamma_{t}=\gamma_{aug}=\frac{1}{N_{aug}+1}\), where \(N_{aug}\) is the number of augmented models. \(L_{id}^{aug}\) in Eqn. 4 is used to replace \(L_{id}\) in the inversion step in Eqn. 1. Furthermore, our proposed \(L_{id}^{logit}\) in Eqn. 3 can be used in Eqn. 4 to combine the improvements. See details in Supplementary C.1.
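In code, Eqn. 4 with \(\gamma_{t}=\gamma_{aug}=\frac{1}{N_{aug}+1}\) is just an average over the target and augmented models; `id_loss` below can be the cross-entropy loss of Eqn. 2 or the logit loss of Eqn. 3, and the interface names are ours:

```python
def augmented_identity_loss(x, y, M_t, M_augs, id_loss):
    """L_id^aug (Eqn. 4) with uniform weights 1 / (N_aug + 1):
    average the chosen identity loss over the target model M_t
    and the distilled augmented models M_augs."""
    models = [M_t] + list(M_augs)
    return sum(id_loss(m, x, y) for m in models) / len(models)
```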
In Fig. 3 3, we analyze the performance of \(M_{aug}^{(i)}\). Similar to what we observe with the unseen model \(M^{\prime}\), we observe samples with large identity loss under \(M_{aug}^{(i)}\), suggesting that samples suffering from MI overfitting perform poorly under \(M_{aug}^{(i)}\), as these samples lack identity semantics.
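The augmented models themselves are obtained by distilling the target model on public data, as described above; a minimal KD sketch follows (the temperature and schedule are illustrative, not the paper's settings):

```python
import torch
import torch.nn.functional as F

def distill_augmented(M_t, M_aug, public_loader, epochs=10, lr=1e-3, T=4.0):
    """Train an augmented model on *public* data to match the target
    model's soft outputs (standard knowledge distillation [19])."""
    M_t.eval()
    opt = torch.optim.Adam(M_aug.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in public_loader:          # labels unused: no private data
            with torch.no_grad():
                t = F.softmax(M_t(x) / T, dim=1)
            s = F.log_softmax(M_aug(x) / T, dim=1)
            loss = F.kl_div(s, t, reduction="batchmean") * T * T
            opt.zero_grad(); loss.backward(); opt.step()
    return M_aug
```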
Figure 3: Qualitative / Quantitative studies to demonstrate MI overfitting in SOTA methods. We demonstrate this observation using KEDMI [7]. We use \(\mathcal{D}_{priv}\) = CelebA [33], \(\mathcal{D}_{pub}\) = CelebA [33] and \(M\) = IR152 [7]. 1 We show qualitative results to illustrate MI overfitting. We show 6 identities, top: private data, bottom: reconstructed data from \(M\). The reconstructed samples have fit too closely to \(M\) during inversion resulting in samples with lack of identity semantics. Particularly, we remark that these samples have very low identity loss under the target model \(M\). 2 **Quantitative results validating the prevalence of MI overfitting in SOTA MI methods.** We use an additional target classifier \(M^{\prime}\) = VGG16 released by [7, 52] to quantitatively verify the presence of MI overfitting using identity loss. For 1,500 reconstructed samples from \(M\), we visualize their identity loss w.r.t. \(M\) and \(M^{\prime}\) in the scatter plot and respective histograms. Particularly, we find that there are 26.7% of samples with low identity loss under the target model \(M\), but large identity loss under unseen VGG16 model \(M^{\prime}\), hinting that these samples might lack identity semantics. This result shows that MI overfitting is a considerable issue even in SOTA MI methods. Note that VGG16 is used here only for analysis and is not part of our solution, as private data is not available. 3 **Model Augmentation to alleviate MI overfitting during inversion.** We repeat the above analysis, with \(M^{\prime}\) = VGG16 replaced by \(M_{aug}\) = EfficientNet-B0. Importantly, \(M_{aug}\) is trained by _public data using knowledge distillation_[19]. We similarly observe samples with large identity loss under \(M_{aug}\).
## 4 Experiments
In this section, we evaluate the performance of the proposed method in recovering a representative input from the target model against the current SOTA methods: GMI [52], VMI [47], and KEDMI [7]. More specifically, since our proposed method identifies two major limitations in the current \(L_{id}(\mathbf{x};y)\) (used commonly in all SOTA MI approaches), we evaluate the improvement brought by our improved identity loss \(L_{id}^{logit}\) and model augmentation \(L_{id}^{aug}\) for all SOTA MI approaches.
### Experimental Setup
In order to have a fair comparison, when evaluating our method against each SOTA MI approach we follow exactly the same experimental setup as that approach. In what follows, we discuss the details of these setups.
**Dataset.** Following previous works, we evaluate the proposed method on different tasks: face recognition and digit classification are used for comparison with all three SOTA approaches, and image classification is used for comparison with GMI [52] and KEDMI [7]. For the face recognition task, we use the CelebA dataset [33], which includes celebrity images, and the FFHQ dataset [24], which contains images with larger variation in terms of background, ethnicity, and age. The MNIST handwritten digits dataset [30] is used for digit classification. We utilize the CIFAR-10 dataset [28] for image classification.
**Data Preparation Protocol.** Following previous SOTA approaches [7, 47, 52], we split each dataset into two disjoint parts: one part is used as the private dataset \(\mathcal{D}_{priv}\) for training the target model, and the other part is used as a public dataset \(\mathcal{D}_{pub}\) to extract prior information. Most importantly, _throughout all experiments, the public dataset \(\mathcal{D}_{pub}\) has no class intersection with the private dataset \(\mathcal{D}_{priv}\) used for training the target model_. This is essential to ensure that the adversary uses \(\mathcal{D}_{pub}\) only to gain prior knowledge about features that are generic to the task (i.e., face recognition), and does not have access to class-specific, private information used for training the target model.
**Models.** Following previous works, we implement several different models with varied complexities. As GMI [52] and KEDMI [7] use exactly the same model architectures in their experiments, for comparison with these two algorithms we use the same models. More specifically, for face recognition on CelebA and FFHQ, we use VGG16 [40], IR152 [17], and face.evoLve [9], a SOTA face recognition model. For digit classification on MNIST, we use a CNN with 3 convolutional layers and 2 pooling layers. Finally, for image classification, following [7], we use VGG16 [40]. For a fair comparison with VMI, we follow its design in [47] and use ResNet-34 for face recognition on CelebA and ResNet-10 for digit classification on MNIST. The details of the target models, augmented models, and datasets used in the experiments are summarized in Table 2. We remark that, when comparing our proposed method with each of the SOTA MI approaches, we use exactly the same target model and GAN for both the SOTA approach and ours.
**Evaluation Metrics.** To evaluate the performance of an MI attack, we need to assess whether the reconstructed image exposes private information about a target label/identity. In this work, following the literature, we conduct both qualitative evaluations, by visual inspection, and quantitative evaluations using different metrics, including:
* **Attack Accuracy (Attack Acc).** Following [7, 47, 52], we use an _evaluation model_ that predicts the label/identity of the reconstructed image. As in previous works, the evaluation model is different from the target model (different structure/initialization seed), but it is trained on the same private dataset (see Table 2). Intuitively, a highly accurate evaluation model can be viewed as a proxy for human inspection [52]. Therefore, if the evaluation model achieves high accuracy on reconstructed images, these images expose private information about the private dataset, i.e., attack accuracy is high.
* **K-Nearest Neighbors Distance (KNN Dist).** KNN Dist indicates the distance between the reconstructed image for a specific label/identity and the corresponding images in the private training dataset. More specifically, it measures the shortest feature distance from the reconstructed image to the real images in the private dataset, given a class/identity. It is measured as the \(l_{2}\) distance between two images in the feature space, i.e., the penultimate layer of the evaluation model (a minimal sketch of this metric is given after Table 2).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Method** & **Private Dataset** & **Public Dataset** & **Target model** & **Evaluation Model** & **Model Augmentation** \\ \hline \multirow{3}{*}{GMI [52] /} & \multirow{3}{*}{CelebA [33]} & CelebA / & VGG16 [40] / & \multirow{3}{*}{face.evoLve} & EfficientNet-B0 [43], \\ & & FFHQ [24] & & & \\ & & & face.evoLve [9] & & \\ \cline{1-1} \cline{2-6} & CIFAR-10 [28] & CIFAR-10 & VGG16 & ResNet-18 [17] & \\ \cline{1-1} \cline{2-6} & MNIST [30] & MNIST & CNN(Conv3) & CNN(Conv5) & CNN(Conv2), CNN(Conv4) \\ \hline \multirow{3}{*}{VMI [47]} & \multirow{3}{*}{CelebA} & \multirow{3}{*}{CelebA} & \multirow{3}{*}{ResNet-34 [17]} & EfficientNet-B0, \\ & & & & & \\ \cline{1-1} \cline{2-6} & & & & & \\ \cline{1-1} \cline{2-6} & MNIST & EMNIST [11] & ResNet-10 & ResNet-10 & CNN(Conv2), CNN(Conv4) \\ \hline \hline \end{tabular}
\end{table}
Table 2: We follow the exact experiment setups of [7] for GMI [52] and KEDMI [7]. For VMI [47], we follow the exact experiment setups of [47]. In total, we conduct 72 experiments spanning 18 setups to demonstrate the effectiveness of our proposed method.
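A minimal sketch of the KNN Dist computation; `feat` is assumed to be a callable returning the evaluation model's penultimate features:

```python
import torch

def knn_dist(feat, x_rec, X_priv):
    """Shortest l2 distance, in the evaluation model's penultimate
    feature space, from one reconstructed image to the private
    images of the target identity."""
    with torch.no_grad():
        f_rec = feat(x_rec)                  # (1, D) reconstructed features
        f_priv = feat(X_priv)                # (N, D) private features
        return torch.cdist(f_rec, f_priv).min().item()
```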
### Experimental Results
**Comparison with previous state-of-the-art.** We use GMI [52], KEDMI [7], and VMI [47] as SOTA MI baselines. We reproduce all baseline results using the official public implementations. We report results for GMI and KEDMI on the CelebA/CelebA experiments in Table 3, and VMI results on the CelebA/CelebA experiments in Table 4. For each baseline setup, we report results for 3 variants: _LOM_ (Logit Maximization, Sec. 3.1), _MA_ (Model Augmentation, Sec. 3.2), and _LOMMA_ (Logit Maximization + Model Augmentation). The details are as follows:
1. + LOM (Ours): We replace the existing identity loss \(L_{id}\) with our improved identity loss \(L_{id}^{logit}\) (Sec. 3.1).
2. + MA (Ours): We replace the existing identity loss \(L_{id}\) with our proposed \(L_{id}^{aug}\) (Sec. 3.2).
3. + LOMMA (Ours): We combine both \(L_{id}^{logit}\) (Sec. 3.1) and \(L_{id}^{aug}\) (Sec. 3.2) for model inversion.
As one can clearly observe from Table 3 and Table 4, our proposed methods yield significant improvement in MI attack accuracy in _all experiment setups_ showing the efficacy of our proposed methods. Further, by combining both our proposed methods, we significantly boost attack accuracy. The KNN results also clearly show that our proposed methods are able to reconstruct data close to the private training data compared to existing SOTA MI algorithms. Particularly, we improve the KEDMI baseline [7] attack accuracy by 12.4% under IR152 target classifier. We show private training data and reconstructed samples for KEDMI [7] under IR152 target model including all 3 variants in Fig. 4. We remark that in the standard CelebA benchmark, our method boosts attack accuracy significantly thereby achieving more than 90% attack accuracy (Table 3) for the first time in contemporary MI literature. We also include CIFAR-10, MNIST and additional results in Supplementary A.1.
**Cross-dataset.** Following [7], we conduct a series of experiments to study the effect of the distribution shift between public and private data on attack performance and KNN distance. We use FFHQ [24] as the public dataset; in particular, we use FFHQ as public data for the CelebA experiments. We train the GAN models and the three model augmentations using the public data. We remark that such setups closely replicate real-world MI attack scenarios. We report top-1 accuracy and KNN distance for the IR152, face.evoLve, and VGG16 target classifiers in Table 6. It is well known that baseline
Figure 4: Qualitative / Quantitative (Top1 Attack Acc., KNN Dist) results to demonstrate the efficacy of our proposed method. We use KEDMI [7] (SOTA), \(\mathcal{D}_{priv}\) = CelebA [33], \(\mathcal{D}_{pub}\) = CelebA [33] and \(M\) = IR152 [17]. As one can observe, our proposed method achieves better reconstruction of private data both visually and quantitatively (validated by KNN results) resulting in a significant boost in attack performance.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Attack Acc \(\uparrow\) & Imp. \(\uparrow\) & KNN Dist \(\downarrow\) \\ \hline \multicolumn{4}{c}{**CelebA/CelebA/IR152**} \\ \hline KEDMI & 80.53 \(\pm\) 3.86 & - & 1247.28 \\ + LOM (Ours) & 92.47 \(\pm\) 1.41 & 11.94 & 1168.55 \\ + MA (Ours) & 84.73 \(\pm\) 3.76 & 4.20 & 1220.23 \\ + LOMMA (Ours) & **92.93 \(\pm\) 1.15** & **12.40** & **1138.62** \\ \hline GMI & 30.60 \(\pm\) 6.54 & - & 1609.29 \\ + LOM (Ours) & 78.53 \(\pm\) 3.41 & 47.93 & 1289.62 \\ + MA (Ours) & 61.20 \(\pm\) 4.34 & 30.60 & 1389.99 \\ + LOMMA (Ours) & **82.40 \(\pm\) 4.37** & **51.80** & **1254.32** \\ \hline \multicolumn{4}{c}{**CelebA/CelebA/face.evoLve**} \\ \hline KEDMI & 81.40 \(\pm\) 3.25 & - & 1248.32 \\ + LOM (Ours) & 92.53 \(\pm\) 1.51 & 11.13 & 1183.76 \\ + MA (Ours) & 85.07 \(\pm\) 2.71 & 3.67 & 1222.02 \\ + LOMMA (Ours) & **93.20 \(\pm\) 0.85** & **11.80** & **1154.32** \\ \hline GMI & 27.07 \(\pm\) 6.72 & - & 1635.87 \\ + LOM (Ours) & 61.67 \(\pm\) 4.92 & 34.60 & 1405.35 \\ + MA (Ours) & 74.13 \(\pm\) 4.32 & 47.06 & 1352.25 \\ + LOMMA (Ours) & **82.33 \(\pm\) 3.51** & **55.26** & **1257.50** \\ \hline \multicolumn{4}{c}{**CelebA/CelebA/VGG16**} \\ \hline KEDMI & 74.00 \(\pm\) 3.10 & - & 1289.88 \\ + LOM (Ours) & 89.07 \(\pm\) 1.46 & 15.07 & 1218.46 \\ + MA (Ours) & 82.00 \(\pm\) 3.85 & 8.00 & 1248.33 \\ + LOMMA (Ours) & **90.27 \(\pm\) 1.36** & **16.27** & **1147.41** \\ \hline GMI & 19.07 \(\pm\) 4.47 & - & 1715.60 \\ + LOM (Ours) & 69.67 \(\pm\) 4.80 & 50.60 & 1363.81 \\ + MA (Ours) & 51.73 \(\pm\) 6.03 & 32.66 & 1467.68 \\ + LOMMA (Ours) & **77.60 \(\pm\) 4.64** & **58.53** & **1296.26** \\ \hline \hline \end{tabular}
\end{table}
Table 3: We report the results for KEDMI and GMI with the IR152, face.evoLve, and VGG16 target models. Following the exact experiment setups in [7], here \(\mathcal{D}_{priv}\) = CelebA, \(\mathcal{D}_{pub}\) = CelebA, and the evaluation model is face.evoLve. We report top-1 accuracies, the improvement over the SOTA MI baseline (Imp.), and the KNN distance. Top-5 attack accuracies are included in Supplementary A.2. The best results are in **bold**. By alleviating both of these major problems in MI algorithms, we achieve new SOTA MI performance (face.evoLve: 81.40% \(\rightarrow\) **93.20%**).
\(\mathcal{D}_{pub}\) = CelebA, a validation model = face.evoLve, target model = BiDO-HSIC. We report top 1 attack accuracies (Attack Acc.), and KNN distance (KNN Dist). |
2304.00952 | Optimizing data-flow in Binary Neural Networks | Binary Neural Networks (BNNs) can significantly accelerate the inference time
of a neural network by replacing its expensive floating-point arithmetic with
bitwise operations. Most existing solutions, however, do not fully optimize
data flow through the BNN layers, and intermediate conversions from 1 to 16/32
bits often further hinder efficiency. We propose a novel training scheme that
can increase data flow and parallelism in the BNN pipeline; specifically, we
introduce a clipping block that decreases the data-width from 32 bits to 8.
Furthermore, we reduce the internal accumulator size of a binary layer, usually
kept using 32-bit to prevent data overflow without losing accuracy.
Additionally, we provide an optimization of the Batch Normalization layer that
both reduces latency and simplifies deployment. Finally, we present an
optimized implementation of the Binary Direct Convolution for ARM instruction
sets. Our experiments show a consistent improvement of the inference speed (up
to 1.91 and 2.73x compared to two state-of-the-art BNNs frameworks) with no
drop in accuracy for at least one full-precision model. | L. Vorabbi, D. Maltoni, S. Santi | 2023-04-03T13:16:33Z | http://arxiv.org/abs/2304.00952v1 | # Optimizing data-flow in Binary Neural Networks
###### Abstract
Binary Neural Networks (BNNs) can significantly accelerate the inference time of a neural network by replacing its expensive floating-point arithmetic with bit-wise operations. Most existing solutions, however, do not fully optimize data flow through the BNN layers, and intermediate conversions from 1 to 16/32 bits often further hinder efficiency. We propose a novel training scheme that can increase data flow and parallelism in the BNN pipeline; specifically, we introduce a clipping block that decreases the data-width from 32 bits to 8. Furthermore, we reduce the internal accumulator size of a binary layer, usually kept at 32 bits to prevent data overflow, without losing accuracy. Additionally, we provide an optimization of the Batch Normalization layer that both reduces latency and simplifies deployment. Finally, we present an optimized implementation of the Binary Direct Convolution for ARM instruction sets. Our experiments show a consistent improvement of the inference speed (up to \(\mathbf{1.91}\) and \(\mathbf{2.73\times}\) compared to two state-of-the-art BNN frameworks) with no drop in accuracy for at least one full-precision model.
Lorenzo Vorabbi\({}^{*+}\), Davide Maltoni\({}^{+}\), Stefano Santi\({}^{*}\)
\({}^{*}\)Datalogic Labs
\({}^{+}\)University of Bologna, Department of Computer Science and Engineering (DISI)
## 1 Introduction
In the last decade deep neural networks (DNNs) have come to demonstrate high accuracy on many datasets like ImageNet, outperforming legacy methods and sometimes even human experts ([21], [9]). These improvements have been achieved by increasing the depth and complexity of the network, leading to intensive usage of computational resources and memory bandwidth. Large DNN models run smoothly on expensive GPU-based machines but cannot be easily deployed to edge devices (i.e., small mobile or IoT systems), which are typically more resource-constrained. Various techniques have been introduced to mitigate this problem, including network quantization, network pruning and efficient architecture design.
Recent work on quantization (e.g. [5, 11, 14, 17]) has shown that a DNN model can even be quantized to 1 bit (also known as binarization), thus achieving a remarkable speedup compared to the full-precision network. The memory requirement of such a binarized DNN (BNN) is drastically reduced compared to a DNN of the same structure,
since a significant proportion of weights and activations can be represented by a single bit, usually \(\{-1,+1\}\). In addition, high-precision multiply-and-accumulate operations can be replaced by faster XNOR and popcount operations.
However, the aggressive quantization can make BNNs less accurate than their full-precision counterparts. Some researchers showed that the performance loss often arises from the gradient mismatch problem caused by the non-differentiable binary activation function [14]. This non-differentiability of the quantization functions prevents gradient back-propagation through the quantization layer. Therefore, previous works used the straight-through estimator (STE) to approximate the gradient on non-differentiable layers [11].
Furthermore, to prevent the binarization of weights and activations from leading to feature maps of lower quality and capacity, a combination of binary and floating-point layers is usually adopted. Unfortunately, each time a binary layer is connected to a floating-point one, the efficiency of the pipeline is compromised by input/output layer data type conversion. In addition, the internal parallelism of a binary layer depends on the encoding of the accumulator, which is often maintained at 32 bits to prevent overflow. In this paper we present several optimizations that allow training a BNN with an inter-layer data width of 8 bits. Most prior work on BNNs emphasizes overall network accuracy; in contrast, our aim is to preserve initial accuracy while improving efficiency. Our contributions (graphically highlighted in Figs. 1i and 1ii) can be summarized as follows:
* a novel training scheme is proposed to improve the data-flow in the BNN pipeline (Section 3.1); specifically, we introduce a clipping block to shrink the data width from 32 to 8 bits while simultaneously reducing the internal accumulator size.
* we provide (Section 3.2) an optimization of the Batch Normalization layer when executed after a binary operation that decreases latency and simplifies deployment.
* we optimize the Binary Direct Convolution method for ARM instruction sets (Section 3.3).
To prove the effectiveness of the proposed optimizations, in Section 4 we provide experimental evaluations that show the speed-up relative to state-of-the-art BNN engines like LCE [1] and DaBNN [24].
## 2 Related Work
BNNs were first introduced in [5], whose authors established an end-to-end gradient back-propagation framework for training the binary weights and activations. They achieved good success on small classification datasets including CIFAR10 and MNIST, but encountered a severe accuracy drop on ImageNet.
Many subsequent studies focused on enhancing BNN accuracy. The authors of [19] proposed XNOR-Net, subsequently improved in [4], where real-valued scaling factors are used to multiply the binary weight kernels, and this methodology then became a
representative binarization approach to bridge the gap between BNNs and their real-valued counterparts. The Bi-Real Net [14] added shortcuts to propagate values along the feature maps, which further boosted the top-1 accuracy on ImageNet; nevertheless, the model still relies on 32-bit floating point to execute the batch normalization and addition operators (as shown in Fig. 1ii).
One of the major weaknesses of BNNs is the gradient approximation by the STE binarization function [5]. In fact, STE computes the derivative of sign as if the binary operation were a linear function, as reported in the following formula:
\[\begin{split} A(x)=\max(-1,\min(1,x))\\ STE(x)=A^{\prime}(x)=[-1\leq x\leq 1]\end{split} \tag{1}\]
This implementation of STE cancels the gradients when the inputs get too large [11]. STE provides a coarse approximation of the gradient that inevitably affects the testing accuracy of the BNN. To address this issue, other recent studies tried to improve the performance of BNNs by adopting a proper optimization method for the quantization. Inspired by STE, many works update the parameters approximately by introducing auxiliary loss functions; PCNN [8] proposes a projection convolutional network with a discrete back propagation via projection. IR-Net [18] introduces a new parametrized binarization function to minimize both quantization error and information loss of parameters by balanced and standardized weights in forward propagation. RBNN [13] proposes a training-aware approximation of the sign function for gradient backpropagation. Similarly, AdamBNN [15] analyzes the influence of Adam and weight decay when training BNNs, showing that the regularization effect of second-order momentum in Adam is crucial to revitalize dead weights.
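For concreteness, a minimal PyTorch sketch of sign binarization with the STE gradient of Eq. (1) is given below; the class and function names are ours, not from [5] or [11], and production BNN engines implement this with fused, bit-packed kernels rather than this floating-point form.

```
import torch

class SignSTE(torch.autograd.Function):
    """Binarize activations to {-1, +1}; backward uses the STE of Eq. (1)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Differentiate as if sign were the hard-tanh A(x) = max(-1, min(1, x)):
        # pass the gradient through only where |x| <= 1, cancel it elsewhere.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

binarize = SignSTE.apply  # usage: y = binarize(pre_activations)
```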
To close the accuracy gap with real-valued networks other works propose to add a distribution loss or special regularization to the overall loss function. Real-to-Bin [17] makes use of an additional loss function by matching the spatial attention maps computed at the output of the binary and real-valued convolutions. ReActNet [16] adopts a distributional loss to further enforce the binary network to learn output distributions similar to those of a real-valued network.
Recently, some works proposed new network topology structures to increase BNN performance. High-Capacity Expert Binary Networks [3] addresses the information bottleneck in binary networks by introducing an efficient width expansion mechanism which keeps the binary operations within the same budget. RepBNN [20] proposes a new replaceable convolution module which enhances feature maps by replicating input or output along channel dimension without extra cost with respect to the number of parameters and convolutional computation.
8-bit quantization of weights and activations in a neural network is a well-known topic, as reported in [12]. In BNNs, 8-bit quantization is not widespread, and the Batch Normalization (BN) layer is usually executed in floating-point arithmetic by off-the-shelf inference engines [24, 1]. In contrast, we show that the quantization of the BN layer and the reduced accumulator width inside the binary operator can lead to a substantial speed-up of the binary layer (binary operation + BN).
In contrast with previous works [19, 4, 14, 16], where binary convolution is used with scaling factors, we directly binarize input activations and weights, and then quantize the BN layer, avoiding floating-point arithmetic. When the scaling factor is
applied only to the weights and the BN layer is inserted immediately after the binary operation, the scaling-factor multiplication can be absorbed into the BN multiplication, whereas in the previously cited works this operation is executed in floating point. Furthermore, the learnable biases (_ReAct Sign_) adopted in ReActNet [16] during input binarization further increase the usage of floating-point computation.
Besides many efforts to develop more efficient and accurate architectures, a few works have provided benchmarks on real devices such as ARM processors. Based on the analysis provided in [1], the fastest inference engines for binary neural networks, with proven benchmarks (Section 4 of [1]), are LCE and DaBNN.
## 3 Data-Flow Optimizations
As illustrated in Figs. 1i and 1ii (a), the most commonly used BNN architectures (e.g., VGG and ResNet) have four essential blocks in each convolution/fully-connected (CONV/FC) layer: sign (binarization), XNOR, popcount and Batch Normalization (BN). Since the weights, inputs and outputs are all binary, the traditional multiply-and-accumulate operation is replaced by XNOR and bit counting (i.e., popcount). XNOR and popcount are usually fused to improve efficiency. The use of Batch Normalization after each binarized layer is very important in BNNs because it makes the optimization landscape significantly smoother; this smoothness induces a more predictive and stable behavior of the gradients, allowing for faster training. Figures 1i and 1ii (b and c) point out our proposed BNN optimizations during training and inference. Before discussing them in detail, we show the data-flow bottlenecks that affect existing solutions and then describe how to reduce them.
In Figs. 2i and 2ii we report an example of binary convolutional layer outputs for a VGG and a ResNet model. The ranges of activation values after popcount (green
Figure 1: a) Standard BNN blocks used in [19] and [14]. b) BNN block with output convolution clipping used during training. c) Optimized BNN block adopted during inference. The popcount operation is performed using saturation arithmetic in order to keep the data width at 8 bits at inference time. BN is replaced by a comparison in case i, while in ii BN is 8-bit quantized.
histograms) exceed the interval \([-128;+127]\)1, so adopting an 8-bit encoding would lead to overflow. To prevent such a data loss, most of the existing BNN frameworks (including [1, 24]) encode such data in 32-bit floating point. On the other hand, the ranges of values after BN (red histograms in Fig. 2) are more limited.
Footnote 1: We actually consider the symmetric quantization interval \([-127;+127]\) because this choice enables a substantial optimization opportunity, as reported in Appendix B of [12].
In this paper, we propose to accumulate the popcount output with 8-bit integers (using saturation arithmetic) through a two-stage training procedure, which is designed to preserve model accuracy. In the next subsection we show how to apply this technique to VGG and ResNet models.
### Two-stage Clipping
Our training procedure selectively executes or skips a clipping operation at each binary layer (row \(b\) of Figs. 1i and 1ii, blue blocks). A two-stage training method is introduced to avoid accuracy loss when clipping is enabled: during a first warm-up stage, the model is trained without any range constraints, while in the second stage (details are reported in Algorithm 1) the network is trained with the clipping block enabled. Thanks to the high accuracy reached at the end of the first training stage, in the second training stage the model better tolerates the 8-bit clipping; we experimentally found that this approach preserves the accuracy of a model that does not contain clipping.
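The paper's Algorithm 1 is not reproduced in this extraction, so the following is only our hedged paraphrase of the two-stage procedure in PyTorch-like Python; `set_clipping` is an assumed hook on the model that enables or disables the clipping blocks of Fig. 1, and the loss and optimizer are placeholders.

```
import torch

def train_two_stage(model, loader, warmup_epochs, clip_epochs, lr=1e-3):
    """Two-stage training sketch (our paraphrase of Sec. 3.1, not the
    paper's exact Algorithm 1)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    def run(num_epochs):
        for _ in range(num_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()

    model.set_clipping(False)  # stage 1: warm-up without range constraints
    run(warmup_epochs)
    model.set_clipping(True)   # stage 2: clip popcount outputs to [-127, +127]
    run(clip_epochs)
    return model
```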
### Batch Normalization Optimization
The BN layers after the clipping are also optimized and 8-bit quantized to further increase the data-flow of the inference pipeline. The Batch Normalization layer scales and shifts the output of the CONV/FC layer as follows:
\[BN\left(F_{out}^{l}\right)=\gamma\frac{F_{out}^{l}-\mu}{\sigma}+\beta \tag{2}\]
Figure 2: Example of output distributions after binary convolution. i refers to a VGG-style network, while ii refers to a ResNet architecture. Green shows the distribution before the BN layer and red afterward.
where \(\gamma\), \(\mu\), \(\sigma\) and \(\beta\) are learned parameters and \(F_{out}^{l}\) is the output feature of layer \(l\), which is the input of the BN function.
The BN optimization depends on the network model: VGG or ResNet. In both cases we show that it is possible to keep the inter-layer data type at 8 bits with appropriate changes to the binary layer structure.
* **VGG style block**. When the BN layer is inserted in a pipeline similar to Fig. 1i, where the following block is still binary, the BN operation can be simplified by replacing the multiplication and division in Eq. 2 with a simple comparison against a threshold \(\tau\). The simplification of Eq. 2 leads to: \[sign\left(BN\left(F_{out}^{l}\right)\right)=\begin{cases}+1&if\ \ BN\left(F_{out}^{l}\right)\geq 0\\ -1&otherwise\end{cases}\] \[\gamma\frac{F_{out}^{l}-\mu}{\sigma}+\beta\geq 0\Rightarrow\tau\doteq \mu-\beta\frac{\sigma}{\gamma}\] (3) \[sign\left(BN\left(F_{out}^{l}\right)\right)=\begin{cases}+1\ if\ F_{out}^{l} \geq\tau\ else\ -1\ \left(when\ \frac{\gamma}{\sigma}\geq 0\right)\\ -1\ if\ F_{out}^{l}\leq\tau\ else\ +1\ \left(when\ \frac{\gamma}{\sigma}<0\right) \end{cases}\] The threshold \(\tau\) of Eq. 3 can be computed offline and easily quantized to 8 bits, so that it can be compared directly with the 8-bit output features of layer \(l\). Therefore, when multiple BNN modules are stacked, Batch Normalization can be replaced by a threshold comparison according to Eq. 3 (a sketch of this folding is given after this list). Even if BN can be replaced with a threshold comparison, the 8-bit data flow is still important because it allows accumulating the binary XNOR and popcount operations directly on 8 bits using saturation arithmetic instead of the standard 32 bits.
* **ResNet style block**. When a BNN block is placed in a ResNet-style pipeline, followed by an addition operator (Fig. 1ii), the BN layer can be executed with both scaling and bias factors quantized to 8 bits. As reported in Fig. 3, the internal data representation of a quantized BN layer is expanded to 16 bits to preserve accuracy during quantization, but the input/output data type still remains within 8 bits. The iterative quantization procedure we adopted is symmetric and keeps the zero-point representation unaltered, as reported in Algorithm 2. The procedure iterates over the BN floating-point layers and, for each one: computes the quantization scale; quantizes; freezes the weights; and retrains the remaining layers.
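As a sanity check of Eq. 3, here is a minimal NumPy sketch of the offline BN-to-threshold folding for the VGG-style case; the function names are ours, `f_out` denotes the 8-bit popcount output of layer \(l\), and the BN parameters are per-channel arrays assumed to broadcast against it.

```
import numpy as np

def bn_to_threshold(gamma, beta, mu, sigma):
    """Fold BN parameters into a per-channel threshold tau (Eq. 3),
    computed offline and rounded to 8-bit integers."""
    tau = mu - beta * sigma / gamma
    tau_q = np.clip(np.round(tau), -127, 127).astype(np.int8)
    flip = (gamma / sigma) < 0  # comparison direction per Eq. 3
    return tau_q, flip

def sign_after_bn(f_out, tau_q, flip):
    """Replace sign(BN(f_out)) by an integer comparison with tau."""
    pos = np.where(flip, f_out > tau_q, f_out >= tau_q)
    return np.where(pos, 1, -1).astype(np.int8)
```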
### Binary Direct Convolution optimization on ARM
The GEMM (GEneral Matrix Multiplication) is a widely adopted method to efficiently implement convolutions. However, as reported in [25], the GEMM approach increases the memory footprint of the model, making it harder to port a model to an embedded device. Furthermore, GEMM routines are not always optimized for convolutions on ARM devices, in particular ARMv8 (the 64-bit ARM architecture) and its relevant operations such as _vcnt_ and _addv_.
_vcnt_ takes an N-byte vector as input and outputs an N-byte vector containing the number of \(1s\) present in each input byte. _addv_ takes an N-byte vector as input and outputs the sum of the N bytes as one single value.
Figure 3: **a)**: 8-bit symmetric quantization procedure that reserves fractional/integer bits based on the range of input 32-bit floating point values. **b)**: implementation of the BN layer with 8-bit quantization using an internal 16-bit representation to preserve accuracy.
Inspired by [25] and [24] we propose a hybrid direct binary convolution (see Fig. 4) that uses both the _addv_ instruction and the common _add_ operations. The binary convolution is usually composed of three different steps: binarization/bit-packing, padding and convolution. [24] executes these steps sequentially. In contrast, we devise a more cache-friendly approach that collapses the previous steps into one operation executed with tiling. We also devise a different kernel memory layout that better fits ARMv8 SIMD processing instructions, as illustrated in Fig. 4.
The implementation details of our binary convolution are reported in Fig. 5. The operation _Extract sign bit_ executes the binarization, bit-packing and padding. Then,
the (bit-wise) XNOR output is consumed by the popcount operation (_vcnt 8-bit wise_,
Figure 4: The \(7\times 7\) input image with \(3\) different channels (denoted by color) is convolved with two separate kernels to obtain a \(5\times 5\) output with two output channels. To better exploit the SIMD 128-bit registers a different memory layout for kernel is devised: \([out_{channels},\ H_{filter},\ W_{filter},\ in_{channels}]\).
Figure 5: The \(3\times 3\times 128\) input patch is convolved (XNOR + popcount) with one kernel through the Extract sign bit, XNOR and then popcount operations. Popcount is performed using _vcnt_, summing in pairs the _vcnt_ output and the last step uses the _addv_ operation. TL (top left), TM (top middle), TR (top right) and ML (middle left) indicate the position of elements inside the \(3\times 3\) patch.
_add_ and _addv_). On the ARM architecture, the latter can be implemented with _vcnt_ and a sequence of additions (_addv_ instructions). We decided to implement several pairwise additions and only a final _addv_ instruction (which is more expensive). The entire convolution process does not produce intermediate outputs but instead processes the input data as a whole. It is worth noting that the clipping operation can be obtained for free on ARM devices by exploiting their saturation arithmetic; all the addition operations (_add_ and _addv_) can be limited to the fixed range \([-127;+127]\) by simply adding the postfix \(q\) to the operations and executing \(\max\) to avoid the \(-128\) value.
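The NEON kernel itself is not reproduced in this extraction; as an illustration only, the following scalar NumPy reference mimics the arithmetic of one output pixel, with a per-byte popcount standing in for _vcnt_ and the accumulation saturated to \([-127,+127]\) as the `q`-suffixed _add_/_addv_ instructions would do.

```
import numpy as np

def xnor_popcount_sat(acts_packed, weights_packed):
    """Scalar reference (not the vectorized NEON code) for one output pixel.
    Inputs are sequences of uint8 words, each packing 8 binary values."""
    acc = 0
    for a, w in zip(acts_packed, weights_packed):
        xnor = np.uint8(~(np.uint8(a) ^ np.uint8(w)))  # XNOR of 8 packed bits
        ones = bin(int(xnor)).count("1")               # popcount of one byte
        # each matching bit contributes +1, each mismatch -1: 2*ones - 8
        acc = max(-127, min(127, acc + 2 * ones - 8))  # saturating add
    return np.int8(acc)
```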
## 4 Experimental Results
In this section, we first evaluate the efficiency of our approach compared to state-of-the-art BNN frameworks such as LCE and DaBNN; the comparison is carried out on real hardware devices, namely Raspberry Pi Model 3B and 4B with a 64-bit OS. Then, we present various accuracy benchmarks of the proposed two-stage training procedure, focusing on CIFAR-10, SVHN and ImageNet and on two different architectures: VGG and ResNet-18.
### Efficiency Analysis
To validate the efficiency of our method we focused on the convolution macro-block of Fig. 1 (extending the results reported in [1] also to Raspberry Pi 3) and compared the efficiency of the proposed approach with LCE and DaBNN, which, to the best of our knowledge, are the fastest inference engines for binary neural networks.
Our assessment was performed on ARMv8 platforms, Raspberry Pi 3B and 4B. Differently from our predecessors, we implemented the convolution operation using ARM NEON _intrinsics_ instead of inline assembly. Intrinsics allow producing code that is easier to maintain and automatically fits both ARMv7 and ARMv8 platforms without losing appreciable performance compared to pure assembly code. In Fig. 6 we compare implementations on the target devices RPi 3B and 4B. Our solution shows a clear performance improvement for single binarized convolutions for all kernels and, including all the optimizations introduced in Section 3, accelerates binary convolution up to \(1.91\) and \(2.73\times\) compared to LCE and DaBNN, with average improvements of \(1.46\) and \(1.61\times\) respectively.
### Accuracy Analysis
We evaluated two VGG-style networks (VGG-11 and VGG-Small) and a ResNet-18 for CIFAR-10 and SVHN. VGG-11 [22] and VGG-Small [23] are both high-capacity networks for classification. Pre-trained binary models (BinaryResNetE18 and BinaryDenseNet28) were adopted to evaluate the accuracy on ImageNet.
**Results on CIFAR10 and SVHN**. For CIFAR10 the RGB images are scaled to the interval \([-1.0;+1.0]\) and the following data augmentation was used: zero padding of 4 pixels for each side, a random \(32\times 32\) crop and a random horizontal flip. No augmentation is used at test time. The models have been trained for 140 epochs.
On SVHN the input images are scaled to the interval \([-1.0;+1.0]\) and the following data augmentation procedure is used: random rotation (\(\pm 8\) degrees), zoom (\([0.95,1.05]\)), random shift (\([0;10]\)) and random shear (\([0;0.15]\)). The models have been trained for 70 epochs.
All the networks have been trained using the same training procedure without adopting additional distillation losses to further improve accuracy of BNN models.
The accuracy achieved by the models is reported in Table 1, showing that the clip operation does not substantially affect the overall accuracy and that the two-stage clipping preserves the original accuracy. Figs. 7 and 8 show the training and validation curves on CIFAR10 and SVHN; we can note that only a limited number of epochs is necessary during the second training stage to recover accuracy.
Figure 6: Latency evaluation of our method compared to DaBNN and LCE on Raspberry Pi 3B (i) and 4B (ii) devices.
Figure 7: Training loss and testing accuracy curves for VGG11 and VGGSmall on CIFAR10 and SVHN of the first and second training stages.
Figure 8: Training loss and testing accuracy curves for ResNet-18 on CIFAR10 and SVHN of the first and second training stages.
**Results on ImageNet**. Tests were performed using pre-trained binary versions of ResNet18 and DenseNet28 [2] taken from the Plumerai model zoo (_literature_ collection)2. Each BNN module (refer to Fig. 1) has been modified according to Fig. 1ii. Residual blocks seem to be more robust to clipping compared to VGG-style blocks (results are in Table 2).
Footnote 2: [https://docs.larq.dev/zoo/api/literature/](https://docs.larq.dev/zoo/api/literature/)
## 5 Conclusion
This paper introduced several optimizations of the BNN data-flow that together achieve a speed-up of up to \(\mathbf{1.91}\) and \(\mathbf{2.73\times}\) compared to state-of-the-art BNN frameworks, without any accuracy loss for at least one full-precision model. In the future, we intend to investigate: (i) the application of similar optimization techniques to ternary networks, which naturally reach higher accuracies; (ii) the simplification of the training procedure, possibly collapsing it to a single stage to further reduce training time and complexity.
\begin{table}
\begin{tabular}{c c c c c} \hline Method & Topology & Bit-width & CIFAR10 \% & SVHN \% \\ \hline \hline BNN [5] & VGGSmall [23] & 32 FP & 93.8 & 96.5 \\ Main/Subs. Net. & VGG11 [22] & 32 FP & 83.8 & - \\ ResNet-18 [18] & ResNet-18 & 32 FP & 93.0 & 97.3 \\ \hline BNN & VGGSmall & 1-bit & 89.9 & 96.5 \\ XNOR-Net [19] & VGGSmall & 1-bit & 82.0 & 96.5 \\ Bop [10] & VGGSmall & 1-bit & 91.3 & - \\ BNN-DL [6] & VGGSmall & 1-bit & 89.9 & 97.2 \\ IR-Net [7] & VGGSmall & 1-bit & 90.4 & - \\ Main/Subs. Net. & VGG11 & 1-bit & 82.0 & - \\ Bi-Real Net [14] & ResNet-18 & 1-bit & 89.3 & 94.7 \\ ReActNet [16] & ResNet-18 & 1-bit & 91.5 & 95.7 \\ \hline
**ours** & VGGSmall & 1-bit & 88.8 & 96.1 \\
**ours** & VGG11 & 1-bit & **83.7** & 95.5 \\
**ours** & ResNet-18 & 1-bit & 90.3 & 95.3 \\ \end{tabular}
\end{table}
Table 1: Accuracy comparison (top1) of our method with SOTA on CIFAR10 and SVHN.
\begin{table}
\begin{tabular}{c c c c c} \hline Method & Topology & Bit-width & top1 \% & top5 \% \\ \hline XNOR-Net [19] & ResNet-18 & 1-bit & 51.2 & 73.2 \\ Bi-Real Net [14] & ResNet-18 & 1-bit & 56.4 & 79.5 \\ BinaryResNetE18 [2] & ResNet-18 & 1-bit & 58.1 & 80.6 \\ BinaryDenseNet28 [2] & DenseNet-28 & 1-bit & 60.7 & 82.4 \\ \hline
**ours** & ResNet-18 & 1-bit & 58.1 & 80.6 \\
**ours** & DenseNet-28 & 1-bit & 60.7 & 82.4 \\ \end{tabular}
\end{table}
Table 2: Accuracy comparison of our method with SOTA on ImageNet. |
2306.06582 | Fast, Distribution-free Predictive Inference for Neural Networks with
Coverage Guarantees | This paper introduces a novel, computationally-efficient algorithm for
predictive inference (PI) that requires no distributional assumptions on the
data and can be computed faster than existing bootstrap-type methods for neural
networks. Specifically, if there are $n$ training samples, bootstrap methods
require training a model on each of the $n$ subsamples of size $n-1$; for large
models like neural networks, this process can be computationally prohibitive.
In contrast, our proposed method trains one neural network on the full dataset
with $(\epsilon, \delta)$-differential privacy (DP) and then approximates each
leave-one-out model efficiently using a linear approximation around the
differentially-private neural network estimate. With exchangeable data, we
prove that our approach has a rigorous coverage guarantee that depends on the
preset privacy parameters and the stability of the neural network, regardless
of the data distribution. Simulations and experiments on real data demonstrate
that our method satisfies the coverage guarantees with substantially reduced
computation compared to bootstrap methods. | Yue Gao, Garvesh Raskutti, Rebecca Willet | 2023-06-11T04:03:58Z | http://arxiv.org/abs/2306.06582v1 | # Fast, Distribution-free Predictive Inference for Neural Networks with Coverage Guarantees
###### Abstract
This paper introduces a novel, computationally-efficient algorithm for predictive inference (PI) that requires no distributional assumptions on the data and can be computed faster than existing bootstrap-type methods for neural networks. Specifically, if there are \(n\) training samples, bootstrap methods require training a model on each of the \(n\) subsamples of size \(n-1\); for large models like neural networks, this process can be computationally prohibitive. In contrast, our proposed method trains one neural network on the full dataset with \((\epsilon,\delta)\)-differential privacy (DP) and then approximates each leave-one-out model efficiently using a linear approximation around the differentially-private neural network estimate. With exchangeable data, we prove that our approach has a rigorous coverage guarantee that depends on the preset privacy parameters and the stability of the neural network, regardless of the data distribution. Simulations and experiments on real data demonstrate that our method satisfies the coverage guarantees with substantially reduced computation compared to bootstrap methods.
## 1 Introduction
To assess the accuracy of parameter estimates or predictions without specific distributional knowledge of the data, the idea of re-sampling or sub-sampling the available data to construct prediction intervals is long-established, and there is a rich history in the statistics literature on the jackknife and bootstrap methods; see Stine (1985), Efron (1979), Quenouille (1949), Efron and Gong (1983). Among these re-sampling methods, leave-one-out methods (generally referred to as "cross-validation" or "jackknife") are widely used to assess or calibrate predictive accuracy, and can be found in a large line of literature (Stone, 1974, Geisser, 1975).
While a large body of past work has demonstrated with extensive evidence that jackknife-type methods have reliable empirical performance, the theoretical properties of these methods were studied relatively little until recently; see Steinberger and Leeb (2018), Bousquet and Elisseeff (2002). One of the most important of these theoretical results is Foygel Barber et al. (2019), which introduces a crucial modification to the traditional jackknife method that permits rigorous coverage guarantees of at least \(1-2\alpha\) regardless of the distribution of the data points, for any algorithm that treats the training points symmetrically. We will revisit this work and give more details in Section 2.1.
Although theoretically jackknife+ has been proven to have coverage guarantees without distributional assumptions, in practice, this method is computationally costly, since we need to train \(n\) (which is the training sample size) leave-one-out models from scratch to find the predictive interval. Especially for large and complicated models like neural networks, this computational cost is prohibitive. The goal of this paper is to provide a _fast_ algorithm that provides similar theoretical coverage guarantees to those in jackknife+.
To achieve this goal, we develop a new procedure, called Differentially Private Lazy Predictive Inference (DP-Lazy PI), which combines two ideas: lazy training of neural networks and differentially private stochastic gradient descent (DP-SGD). To accelerate the procedure, we introduce a lazy
training scheme inspired by Chizat et al. (2020), Gao et al. (2022) to train the leave-one-out models. The intuition is that with data exchangeability, the leave-one-out models should be quite close or similar to each other and there is no need to train each one from scratch ( _i.e._, random initialization). Instead, we first train a model on the full data and use this model as a good initialization. By using DP-SGD as the initializer for our full model, we are able to provide coverage guarantees since the privacy mechanism prevents information leakage across leave-one-out estimators. In particular, we prove that our DP-Lazy PI procedure has a coverage of at least \(1-2\alpha-3\sqrt{2\eta+2\epsilon+\delta}\) where \(\eta\) represents an out-of-sample stability parameter and \((\epsilon,\delta)\) are the differential privacy parameters. Empirically, we show through simulations and real-data experiments that our method has significant advantage over the existing jackknife+ method in run-time while still maintaining good coverage.
## 2 Preliminaries
We first define key notation. For any values \(v_{i}\) indexed by \(i\in[n]\), define
\[Q_{\alpha}^{+}\left(\{v_{i}\}_{i\in[n]}\right)=\lceil(1-\alpha)(n+1)\rceil^{ \text{th}}\text{ smallest value of }v_{1},\ldots,v_{n}, \tag{1}\]
_i.e._, the \(1-\alpha^{\text{th}}\) quantile of the empirical distribution of these values; Similarly,
\[Q_{\alpha}^{-}\left(\{v_{i}\}_{i\in[n]}\right)=\lfloor(\alpha)(n+1)\rfloor^{ \text{th}}\text{ smallest value of }v_{1},\ldots,v_{n}=-Q_{\alpha}^{+}\left(\{-v_{i}\}_{i\in[n]}\right) \tag{2}\]
is the \(\alpha\) quantile of the empirical distribution.
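In code, Eqs. (1) and (2) amount to order statistics of the empirical distribution; a minimal NumPy sketch follows. The function names are ours, and returning \(+\infty\) when the requested order statistic exceeds \(n\) is the usual conformal convention, which is an assumption on our part rather than something stated above.

```
import numpy as np

def q_plus(values, alpha):
    """Q_alpha^+ of Eq. (1): the ceil((1 - alpha)(n + 1))-th smallest value."""
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    k = int(np.ceil((1 - alpha) * (n + 1)))
    return np.inf if k > n else v[k - 1]

def q_minus(values, alpha):
    """Q_alpha^- of Eq. (2), via the identity Q^- = -Q^+ on negated values."""
    return -q_plus(-np.asarray(values, dtype=float), alpha)
```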
### Distribution-free Prediction Intervals
Suppose we have training data \((X_{i},Y_{i})\in\mathcal{X}\times\mathcal{Y}\) for \(i=1,\ldots,n\), and a new test point \((X_{n+1},Y_{n+1})\), \(X_{i}\in\mathbb{R}^{p}\), \(Y_{i}\in\mathbb{R}\), drawn independently from the same distribution. We fit a regression model to the training data, _i.e._, a function \(\hat{f}:\mathcal{X}\mapsto\mathcal{Y}\) where \(\hat{f}(x)\) predicts \(Y_{n+1}\) given a new feature vector \(X_{n+1}=x\), and then provide a prediction interval centered around \(\hat{f}(X_{n+1})\) for the test point. Specifically, given some target coverage level \(1-\alpha\), we aim to construct a prediction interval \(\hat{C}_{n,\alpha}\) such that \(\mathbb{P}\left[Y_{n+1}\in\widehat{C}_{n,\alpha}(X_{n+1})\right]\geq 1-\alpha.\) We call the probability \(\mathbb{P}\left[Y_{n+1}\in\widehat{C}_{n,\alpha}(X_{n+1})\right]\) the coverage, and \(\text{len}[\widehat{C}_{n,\alpha}(X_{n+1})]\), the distance between the left and right endpoints of \(\widehat{C}_{n,\alpha}(X_{n+1})\), the interval width.
**Naive Prediction Interval.** A naive way to construct a predictive interval at the new test point \((X_{n+1},Y_{n+1})\) is to center the interval at \(\hat{f}(X_{n+1})\) and estimate the margin (_i.e._, half of the interval width) from the training residuals \(|Y_{i}-\hat{f}(X_{i})|,\ i=1,\ldots,n\). Therefore, a naive prediction interval can be constructed as:
\[\widehat{C}_{n,\alpha}^{\text{naive}}(x)\coloneqq\left[\hat{f}(x)\pm Q_{ \alpha}^{+}\left(\{|Y_{i}-\hat{f}(X_{i})|\}_{i\in[n]}\right)\right].\]
where \(Q_{\alpha}^{+}\left(\{|Y_{i}-\hat{f}(X_{i})|\}_{i\in[n]}\right)\) is defined as in (1). Due to overfitting, the training errors are typically smaller than the test errors, so this naive interval \(\widehat{C}_{n,\alpha}^{\text{naive}}(x)\) may undercover, _i.e._, the probability that \(Y_{n+1}\) falls outside the interval can be larger than \(\alpha\), as discussed in Foygel Barber et al. (2019).
**Jackknife Prediction Interval.** To avoid undercoverage due to model overfitting, the _jackknife_ method estimates the margin of error from leave-one-out residuals instead of training residuals (Steinberger and Leeb, 2016, 2018). The idea is straightforward: for each \(j=1,\ldots,n\), we fit a regression function \(\hat{f}_{-j}\) using all the training data except the \(j\)-th training sample. Based on these leave-one-out models \(\hat{f}_{-1},\ldots,\hat{f}_{-n}\), we can therefore compute the leave-one-out residuals: \(|Y_{1}-\hat{f}_{-1}(X_{1})|,\ldots,|Y_{n}-\hat{f}_{-n}(X_{n})|\).
With the leave-one-out residuals as well as the regression function \(\hat{f}\) fitted on the full training data, the jackknife prediction interval is constructed as:
\[\widehat{C}_{n,\alpha}^{\text{jackknife}}(x)\coloneqq\left[\hat{f}(x)\pm Q_{ \alpha}^{+}\left(\{|Y_{i}-\hat{f}_{-i}(X_{i})|\}_{i\in[n]}\right)\right].\]
**Jackknife+ Prediction Interval.** The jackknife+ is a modification of the jackknife, and both methods use the leave-one-out residuals when constructing the prediction intervals. The difference is that the jackknife interval is centered at the predicted value \(\hat{f}(X_{n+1})\) (where \(\hat{f}\) is fitted on the full training data), whereas jackknife+ instead uses the leave-one-out predictions \(\hat{f}_{-i}(X_{n+1}),i=1,\ldots,n\) to build the prediction interval:
\[\widehat{C}_{n,\alpha}^{\text{jackknife+}}(x)\] \[\quad\coloneqq\left[Q_{\alpha}^{-}\left(\left\{\hat{f}_{-i}(x)-|Y _{i}-\hat{f}_{-i}(X_{i})|\right\}_{i\in[n]}\right),\ \ Q_{\alpha}^{+}\left(\left\{\hat{f}_{-i}(x)+|Y_{i}-\hat{f}_{-i}(X_{i})| \right\}_{i\in[n]}\right)\right].\]
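Given the leave-one-out predictions and residuals, the jackknife+ interval at a single test point is a two-liner on top of the `q_plus`/`q_minus` helpers sketched above (again, the names are ours):

```
import numpy as np

def jackknife_plus_interval(loo_preds, loo_residuals, alpha):
    """loo_preds[i] = f_{-i}(x_test); loo_residuals[i] = |Y_i - f_{-i}(X_i)|.
    Both arguments are NumPy arrays of length n."""
    lo = q_minus(loo_preds - loo_residuals, alpha)
    hi = q_plus(loo_preds + loo_residuals, alpha)
    return lo, hi
```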
Foygel Barber et al. (2019) provides a non-asymptotic coverage guarantee for jackknife+ without assumptions beyond the training and test data being exchangeable. Theoretically, a distribution-free coverage guarantee of at least \(1-2\alpha\) is provided, and it is observed that the method can achieve \(1-\alpha\) coverage in empirical studies.
**Summary of coverage guarantees and computational costs.** The coverage guarantees and the computational cost of the above methods are summarized in Table 1. Concretely, computational cost refers to two stages, model training and evaluation; we measure cost in terms of the number of models trained during the training phase and the number of calls to the trained function in the evaluation stage.
Clearly the cost of model training is the largest cost, especially when training a complicated model such as a neural network. Our method attempts to only train a single model (like the naive model) while still providing a coverage guarantee (like jackknife+).
## 3 Method and Algorithm
Our method involves using the lazy estimation framework (Chizat et al., 2020) with an initialization computed via differentially private stochastic gradient descent (DP-SGD, Abadi et al. (2016)).
### Lazy Estimation
To calculate the predictive interval quickly and reduce the computational cost, we use a lazy estimation scheme when estimating the leave-one-out (LOO) models. The key idea of our approach is that, given an initialization parameterized by \(\theta_{0}\), for each \(i\) we approximate \(\hat{f}_{-i}\) by linearizing around the initial base model and training the linearized model using all but the \(i^{\text{th}}\) training sample. Thus, instead of training \(n\) neural networks explicitly, we train a single neural network followed by \(n\) linear models, leading to a substantial computational saving.
For any dataset \(\mathcal{D}=\{(X_{i},Y_{i})\}\) and a large model \(f(x;\ \theta)\), where \(\theta\) denotes the model parameters, we define a lazy estimation operator \(\text{LAZY}_{\lambda}(\theta;\mathcal{D}):\Theta\times\mathcal{D}\mapsto\Theta \subset\mathbb{R}^{M}\) with ridge regression parameter \(\lambda>0\). Given an initialization \(\theta_{0}\), we estimate the LOO models by taking a linearization around \(\theta_{0}\) and minimizing the penalized training loss on data \(\mathcal{D}_{\setminus j}=\{(X_{i},Y_{i})\}_{i\in[n]\setminus\{j\}}\),
\[\text{LAZY}_{\lambda}(\theta_{0};\mathcal{D}_{\setminus j})=\theta_{0}+ \operatorname*{argmin}_{\Delta\in\mathbb{R}^{M}}\left\{\sum_{i\in[n]\setminus \{j\}}\mathcal{L}\left(Y_{i},f(X_{i},\theta_{0})+\Delta^{\top}\nabla_{\theta} f(X_{i};\theta)|_{\theta=\theta_{0}}\right)+\lambda\|\Delta\|^{2}\right\}. \tag{3}\]
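For the squared loss, the inner minimization in Eq. (3) is a ridge regression on the Jacobian features and has a closed form; the sketch below assumes this special case (the paper allows a generic loss \(\mathcal{L}\)) and uses names of our own choosing.

```
import numpy as np

def lazy_update(theta0, f0, jac, y, lam, leave_out=None):
    """LAZY_lambda of Eq. (3) for squared loss.
    f0[i] = f(X_i; theta0); jac[i] = grad_theta f(X_i; theta) at theta0,
    so jac is an (n x M) array."""
    keep = np.ones(len(y), dtype=bool)
    if leave_out is not None:
        keep[leave_out] = False              # drop the j-th training sample
    J, r = jac[keep], (y - f0)[keep]         # residuals of the base model
    M = J.shape[1]
    delta = np.linalg.solve(J.T @ J + lam * np.eye(M), J.T @ r)
    return theta0 + delta
```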
\begin{table}
\begin{tabular}{l|c c|c c} \hline & \multicolumn{2}{c|}{Coverage Guarantee} & \multicolumn{2}{c}{Computation} \\ \cline{2-5} & Distribution-free Theory & Empirical & \# models trained & \# calls to trained function \\ \hline
**Naïve** & No guarantee & \(<1-\alpha\) & 1 & \(n+n_{\text{test}}\) \\ \hline
**jackknife** & No guarantee & \(\approx 1-\alpha\) & \(n\) & \(n+n_{\text{test}}\) \\ \hline
**jackknife+** & \(\geq 1-2\alpha\) & \(\approx 1-\alpha\) & \(n\) & \(n\cdot n_{\text{test}}\) \\ \hline \end{tabular}
\end{table}
Table 1: Coverage guarantees and computation costs of existing methods for estimating prediction interval. Here \(n\) is the training sample size; \(n_{\text{test}}\) is the number of test points for which we seek prediction intervals.
### Differential Privacy (DP) initialization
One of the important questions with lazy estimation is how to choose the initialization \(\theta_{0}\). For this we use the concept of _differential privacy_, which allows us to achieve coverage guarantees with a large reduction in computational costs.
**Definition 1** (Differential Privacy (Dwork and Lei, 2009)).: _A randomized mechanism \(\mathcal{M}:\mathcal{D}\mapsto\mathcal{R}\) with domain \(\mathcal{D}\) and range \(\mathcal{R}\) satisfies \((\epsilon,\delta)\)-differential privacy if for any two adjacent datasets \(d,d^{\prime}\subseteq\mathcal{D}\) and for any subset of outputs \(S\subseteq\mathcal{R}\) it holds that_
\[P\left(\mathcal{M}(d)\in S\right)\leq e^{\epsilon}P\left(\mathcal{M}(d^{ \prime})\in S\right)+\delta. \tag{4}\]
In essence, this differential privacy (DP) condition ensures that the output distribution does not change much when the input data undergoes a small change, which makes it hard to distinguish between the input databases on the basis of the output. Abadi et al. (2016) proposed Algorithm 1, which performs stochastic gradient descent with noisy gradients, and showed that it achieves \((\epsilon,\delta)\)-differential privacy, where noise is added to the gradients at the scale of \(O(\frac{\sqrt{\log(1/\delta)}}{n\epsilon})\). Algorithms to ensure DP are always designed based on the sensitivity of the original algorithm. More precisely, given an algorithm \(\mathcal{A}\) and a norm function \(\|\cdot\|\) over the range of \(\mathcal{A}\), the sensitivity of \(\mathcal{A}\) is defined as
\[s(\mathcal{A})=\max_{d(D,D^{\prime})=1,\text{card}(D)=n}\|\mathcal{A}(D)- \mathcal{A}(D^{\prime})\| \tag{5}\]
The Laplacian mechanism to construct a differentially private algorithm is as follows (see (Koufogiannis et al., 2015, Holohan et al., 2018)): for an algorithm (or function) \(\mathcal{A}:\mathcal{D}\mapsto\Theta\subset\mathbb{R}^{M}\), the random function \(\mathcal{A}^{\text{DP}}(d)=\mathcal{A}(d)+\xi,\ \forall d\in\mathcal{D}\) satisfies \((\epsilon,0)\)-differential privacy, where the elements \(\xi_{i}\) follow a Laplacian distribution \(\text{Lap}(0,s(\mathcal{A})/\epsilon)\): \(p(\xi)\propto e^{-\epsilon|\xi|/s(\mathcal{A})}\). We can set \(\epsilon\) and \(\delta\) as small constants (_e.g._, \(\epsilon=0.01\) and \(\delta=10^{-3}\)), and adjust the noise levels in different DP mechanisms.
```
Parameters: noise scale \(\sigma\); learning rate \(\eta_{t}\); group size \(l\); gradient norm bound \(C\); number of iterations \(T\).
Data: \(\mathcal{D}=\{Z_{1},\ldots,Z_{n}\}=\{(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\}\)
Initialize \(\theta_{0}\) randomly
for \(t\in[T]\) do
    Take a random sample \(L_{t}\) with sampling probability \(l/n\)
    For each \(i\in L_{t}\), compute the gradient \(g_{t}(Z_{i})\leftarrow\nabla_{\theta_{t}}\mathcal{L}(\theta_{t};Z_{i})\)
    Clip gradient: \(\bar{g}_{t}(Z_{i})=g_{t}(Z_{i})/\max(1,\frac{\|g_{t}(Z_{i})\|_{2}}{C})\)
    Add noise: \(\tilde{g}_{t}\leftarrow\frac{1}{|L_{t}|}\big{(}\sum_{i\in L_{t}}\bar{g}_{t}(Z_{i})+\mathcal{N}(0,\sigma^{2}C^{2}I)\big{)}\)
    Descent: \(\theta_{t+1}\leftarrow\theta_{t}-\eta_{t}\tilde{g}_{t}\)
Output \(\theta_{T}\) with the overall privacy cost \((\epsilon,\delta)\) computed with a privacy accounting method.
```
**Algorithm 1** DP-SGD (Abadi et al., 2016)
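A minimal NumPy rendering of Algorithm 1 is sketched below. Two caveats are our own simplifications: it draws a fixed-size lot rather than sampling each point independently with probability \(l/n\), and privacy accounting is omitted entirely (in practice one would rely on a DP library such as Opacus). `grad_fn` is an assumed per-sample gradient oracle and `rng` is a `numpy.random.Generator`.

```
import numpy as np

def dp_sgd(theta, grad_fn, data, sigma, C, lot_size, lr, T, rng):
    """Simplified DP-SGD; grad_fn(theta, z) -> per-sample gradient array."""
    n = len(data)
    for _ in range(T):
        lot_idx = rng.choice(n, size=lot_size, replace=False)
        clipped = []
        for i in lot_idx:
            g = grad_fn(theta, data[i])
            g = g / max(1.0, np.linalg.norm(g) / C)   # clip to L2 norm C
            clipped.append(g)
        noise = rng.normal(0.0, sigma * C, size=theta.shape)  # N(0, s^2 C^2 I)
        g_tilde = (np.sum(clipped, axis=0) + noise) / lot_size
        theta = theta - lr * g_tilde
    return theta
```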
### Our method: Lazy Estimation with Differential Privacy
First, we use a noisy SGD method (Abadi et al., 2016) with \((\epsilon,\delta)\) differential privacy to get a full-model parameter estimate \(\widehat{\theta}_{n}^{\text{DP}}(\epsilon,\delta)\), which for convenience will be denoted as \(\widehat{\theta}_{n}^{\text{DP}}\). Figure 1 provides an overview of our procedure.
Based on this \(\widehat{\theta}_{n}^{\text{DP}}\), we estimate the LOO model parameters by using lazy training to take a linearization around \(\widehat{\theta}_{n}^{\text{DP}}\) and minimize the penalized training loss on data \(\mathcal{D}_{\setminus j}=\{(X_{i},Y_{i})\}_{i\in[n]\setminus[j]}\),
\[\widetilde{\theta}_{n,\setminus j}=\text{LAZY}_{\lambda}(\widehat{\theta}_{n}^ {\text{DP}};\ \mathcal{D}_{\setminus j}). \tag{6}\]
Here \(\text{LAZY}_{\lambda}(\cdot;\cdot)\) is defined in (3). Based on the LOO model parameters, we can calculate the LOO residuals by
\[R_{j}:=|Y_{j}-f(X_{j};\ \widetilde{\theta}_{n,\setminus j})|.\]
For a test point \(X_{n+1}\), the predictive interval of our method is:
\[\widehat{C}_{\alpha,n}^{\nu}(X_{n+1})\coloneqq\left[Q_{\alpha}^{-}\left\{f(X_ {n+1};\ \widetilde{\theta}_{n,\setminus j})-R_{j}\right\}-\nu,\ Q_{\alpha}^{+}\left\{f(X_ {n+1};\ \widetilde{\theta}_{n,\setminus j})+R_{j}\right\}+\nu\right] \tag{7}\]
where \(\nu\) is a relaxation term that is close to \(0\). Our full method is presented in Algorithm 2.
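Putting the pieces together, Eq. (7) at a single test point reduces to the following sketch, reusing the `q_plus`/`q_minus` helpers from Section 2 (the names are ours; in the experiments the relaxation is set to \(\nu=0\)):

```
import numpy as np

def dp_lazy_pi_interval(test_preds, loo_residuals, alpha, nu=0.0):
    """test_preds[j]    = f(X_{n+1}; theta tilde_{n, minus j}), lazy LOO models;
    loo_residuals[j] = R_j = |Y_j - f(X_j; theta tilde_{n, minus j})|."""
    lo = q_minus(test_preds - loo_residuals, alpha) - nu
    hi = q_plus(test_preds + loo_residuals, alpha) + nu
    return lo, hi
```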
## 4 Coverage Guarantee
Recall that \(\widehat{\theta}_{n}^{\text{DP}}\) is the NN parameter estimate from the \((\epsilon,\delta)\)-DP algorithm using the full training data \(\mathcal{D}=\{Z_{1},\ldots,Z_{n}\}=\{(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\}\). For any \(j\in[n]\), define \(\widehat{\theta}_{\setminus j}^{\text{DP}}\) as the NN parameter estimate from an \((\epsilon,\delta)\)-DP algorithm \(\mathcal{A}^{\text{DP}}\) trained with the \(j\)-th sample deleted, _i.e._, using data \(\mathcal{D}_{\setminus j}=\mathcal{D}\backslash\{Z_{j}\}\). Based on \(\widehat{\theta}_{\setminus j}^{\text{DP}}\), we define another lazy estimate:
\[\widetilde{\theta}_{\setminus j,\setminus j}=\widehat{\theta}_{\setminus j }^{\text{DP}}+\arg\min_{\Delta\theta}\left\{\sum_{i\in[n]\backslash\{j\}} \mathcal{L}\left(Y_{i},f(X_{i},\widehat{\theta}_{\setminus j}^{\text{DP}})+ \Delta\theta^{\top}\nabla_{\theta}f(X_{i};\theta)|_{\theta=\widehat{\theta}_ {\setminus j}^{\text{DP}}}\right)+\lambda\|\Delta\theta\|^{2}\right\}. \tag{8}\]
**Theorem 1**.: _For any \(\nu>0\) and \(\eta>0\), such that_
\[\mathbb{P}\left(|f(X_{n+1};\ \widetilde{\theta}_{n,\setminus j})-f(X_{n+1};\ \widetilde{\theta}_{\setminus j,\setminus j})|>\nu/2\right)\leq\eta, \tag{9}\]
_the coverage of the DP-Lazy prediction interval \(\widehat{C}_{\alpha,n}^{\nu}(x)\) defined in Equation (7) is larger than \(1-2\alpha-3\sqrt{2\eta+2\epsilon+\delta}\), where \((\epsilon,\delta)\) are the DP parameters, i.e.,_
\[\mathbb{P}\left[Y_{n+1}\in\widehat{C}_{\alpha,n}^{\nu}(X_{n+1})\right]\geq 1 -2\alpha-3\sqrt{2\eta+2\epsilon+\delta}. \tag{10}\]
**Remark 1**.: _In Theorem 1, essentially (9) requires an out-of-sample stability in the DP algorithm, which appears elsewhere in the literature Foygel Barber et al. (2019) and is referred to as "hypothesis stability" in Bousquet and Elisseeff (2002). We want to emphasize that this out-of-sample stability only requires that, for a test data point that is independent of the training data, the predicted value does not change much if we remove one point in the training data set, which can hold even for algorithms that suffer from strong overfitting in contrast to the in-sample stability. As an illustrative example, the K-nearest-neighbor algorithm is shown to satisfy the out-of-sample stability with \(\nu=0\) and \(\eta=K/n\) (see Example 5.5 in Foygel Barber et al. (2019))._
_For neural networks, notions of stability are also assumed in the literature (Verma and Zhang, 2019; Forti et al., 1994). In the Supplementary Material, we also give an example showing that if \(x\) is multivariate Gaussian, for a two-layer neural network (one hidden layer) with activation functions like ReLU, sigmoid or tanh and \(s(\mathcal{A})=O(1/n)\) (defined in Equation (5)), (9) holds true with \(\nu=O(1/n)\) up to some log terms and \(\eta=O(e^{-cn})\) for some constant \(c\)._
Figure 1: The outline of the DP-Lazy PI method for calculating prediction intervals
### Proof Overview of Theorem 1
In this section, we provide the key ideas and the skeleton of the proof of the coverage guarantee; the complete proof is deferred to the Appendix. Recall \(R_{j}:=|Y_{j}-f(X_{j};\ \widetilde{\theta}_{n,\setminus j})|\) and define \(R_{j,\setminus j}:=|Y_{j}-f(X_{j};\ \widetilde{\theta}_{\setminus j,\setminus j})|\).
First, for any \(\alpha<\alpha^{\prime}<1\) we show that a jackknife+ type interval \(\widetilde{C}_{\alpha^{\prime},n}(X_{n+1})\) defined as
\[\widetilde{C}_{\alpha^{\prime},n}(x)\coloneqq\left[Q_{\alpha^{\prime}}^{-}\left\{f(x;\ \widetilde{\theta}_{\setminus j,\setminus j})-R_{j,\setminus j}\right\},\ Q_{\alpha^{\prime}}^{+}\left\{f(x;\ \widetilde{\theta}_{\setminus j,\setminus j})+R_{j,\setminus j}\right\}\right] \tag{11}\]
is contained within our rapidly calculated prediction interval \(\widehat{C}_{\alpha,n}^{\nu}(X_{n+1})\) with high probability, which depends on the stability parameter \(\eta\) and the DP parameters \((\epsilon,\delta)\):
\[\mathbb{P}\left[\widetilde{C}_{\alpha^{\prime},n}(X_{n+1})\nsubseteq\widehat{C}_{\alpha,n}^{\nu}(X_{n+1})\right] \tag{12}\] \[\quad=\mathbb{P}\Bigg{[}\left\{Q_{\alpha}^{+}\left\{f(X_{n+1};\ \widetilde{\theta}_{n,\setminus j})+R_{j}+\nu\right\}<Q_{\alpha^{\prime}}^{+}\left\{f(X_{n+1};\ \widetilde{\theta}_{\setminus j,\setminus j})+R_{j,\setminus j}\right\}\right\}\] (13) \[\qquad\qquad\cup\left\{Q_{\alpha}^{-}\left\{f(X_{n+1};\ \widetilde{\theta}_{n,\setminus j})-R_{j}-\nu\right\}>Q_{\alpha^{\prime}}^{-}\left\{f(X_{n+1};\ \widetilde{\theta}_{\setminus j,\setminus j})-R_{j,\setminus j}\right\}\right\}\Bigg{]}\] \[\leq\mathbb{P}\Bigg{[}\Bigg{\{}\sum_{j=1}^{n}\mathbb{1}\Big{(}\Big{|}f(X_{n+1};\ \widetilde{\theta}_{\setminus j,\setminus j})-f(X_{n+1};\ \widetilde{\theta}_{n,\setminus j})\Big{|}\] (14) \[\qquad\qquad+\Big{|}f(X_{j};\ \widetilde{\theta}_{\setminus j,\setminus j})-f(X_{j};\ \widetilde{\theta}_{n,\setminus j})\Big{|}>\nu\Big{)}\geq(\alpha^{\prime}-\alpha)(n+1)\Bigg{\}}\Bigg{]}\] \[\leq\frac{1}{(\alpha^{\prime}-\alpha)}\Bigg{\{}\mathbb{P}\left[\Big{|}f(X_{n+1};\ \widetilde{\theta}_{\setminus j,\setminus j})-f(X_{n+1};\ \widetilde{\theta}_{n,\setminus j})\Big{|}>\nu/2\right]\] \[\qquad\qquad+\mathbb{P}\left[\Big{|}f(X_{j};\ \widetilde{\theta}_{\setminus j,\setminus j})-f(X_{j};\ \widetilde{\theta}_{n,\setminus j})\Big{|}>\nu/2\right]\Bigg{\}}\] (15) \[\leq\frac{1}{(\alpha^{\prime}-\alpha)}\left\{2\mathbb{P}\left[\Big{|}f(X_{n+1};\ \widetilde{\theta}_{\setminus j,\setminus j})-f(X_{n+1};\ \widetilde{\theta}_{n,\setminus j})\Big{|}>\nu/2\right]+2\epsilon+\delta\right\}\] (16) \[\leq\frac{2\eta+2\epsilon+\delta}{\alpha^{\prime}-\alpha}. \tag{17}\]
The probabilities are taken with respect to all the training data \(\mathcal{D}\) as well as the test data \((X_{n+1},Y_{n+1})\). (13) holds by the definitions of the prediction intervals of interest and is equivalent to (14); by data exchangeability and Markov's inequality, (15) holds; by the property of differential privacy stated in the Appendix, the in-sample stability term in (15) is relaxed to an out-of-sample stability condition in (16), which is bounded by the condition in Equation (9).
By the jackknife+ coverage guarantee in Foygel Barber et al. (2019) that
\[\mathbb{P}(Y_{n+1}\notin\widetilde{C}_{\alpha^{\prime},n}(X_{n+1}))\leq 2 \alpha^{\prime}, \tag{18}\]
we can bound the miscoverage rate for the prediction interval \(\widehat{C}_{\alpha,n}^{\nu}(X_{n+1})\):
\[\mathbb{P}(Y_{n+1}\notin \widehat{C}_{\alpha,n}^{\nu}(X_{n+1})) \tag{19}\] \[\leq \mathbb{P}(Y_{n+1}\notin\widetilde{C}_{\alpha^{\prime},n}(X_{n+1} ))+\mathbb{P}\left[\widetilde{C}_{\alpha^{\prime},n}(X_{n+1})\nsubseteq\widehat {C}_{\alpha,n}^{\nu}(X_{n+1})\right]\] (20) \[\leq 2\alpha^{\prime}+\frac{2\eta+2\epsilon+\delta}{\alpha^{\prime}-\alpha} \tag{21}\]
for all \(\alpha^{\prime}>\alpha\). Taking \(\alpha^{\prime}=\alpha+\sqrt{2\eta+2\epsilon+\delta}\), we therefore obtain the coverage of \(\widehat{C}_{\alpha,n}^{\nu}(X_{n+1})\):
\[\mathbb{P}(Y_{n+1}\in \widehat{C}_{\alpha,n}^{\nu}(X_{n+1}))=1-\mathbb{P}(Y_{n+1}\notin \widehat{C}_{\alpha,n}^{\nu}(X_{n+1}))\] \[\geq 1-2\alpha-3\sqrt{2\eta+2\epsilon+\delta}.\]
## 5 Experiments
We compare the following methods for estimating prediction intervals: (1) jackknife+: defined in Section 2.1, with the base algorithm being neural networks with random initialization; (2) DP-Lazy PI (labeled _lazy_dp_ in plots): our proposed method with the same NN architecture used in jackknife+, with the relaxation term \(\nu\) set to zero and differential privacy parameters set as \(\epsilon=0.01,\ \delta=10^{-3}\); (3) Lazy PI without DP (labeled _lazy_finetune_ in plots): removes the privacy mechanism compared to DP-Lazy PI, _i.e._, it uses \(\widehat{\theta}_{n}\) instead of \(\widehat{\theta}_{n}^{\mathrm{DP}}\) as \(\theta_{0}\) in the lazy estimation step (3).
For evaluation, we consider the following three performance aspects on the test data set \(\mathcal{D}_{\mathrm{test}}\) with size \(n_{\mathrm{test}}\) for a prediction interval \(\widehat{C}_{n}(\cdot)\): (1) Coverage: \(\frac{1}{n_{\mathrm{test}}}\sum_{(x,y)\in\mathcal{D}_{\mathrm{test}}}\mathbbm{1}\left[y\in\widehat{C}_{n}(x)\right]\); (2) Compute time; (3) Average interval width: \(\frac{1}{n_{\mathrm{test}}}\sum_{(x,y)\in\mathcal{D}_{\mathrm{test}}}\mathrm{len}[\widehat{C}_{n}(x)]\). Since we set the level \(\alpha=0.1\), the target coverage is \(1-2\alpha=80\%\). We want to emphasize that a higher coverage of the prediction intervals is not always desirable, since it may suggest that the prediction intervals are overly wide and conservative.
### Simulation
To generate the data, \(\{X_{i},\ i\in[N]\}\) are drawn from a Gaussian distribution, \(X_{i}\in\mathbb{R}^{p}\sim\mathcal{N}(0,5\cdot\mathbb{I}_{p}),\ i\in[N]\), where \(N=5000\). The responses \(Y_{i},\ i\in[N]\) are generated by:
\[Y_{i}=\sqrt{\mathrm{ReLU}(X_{i}^{\top}\beta)}+\epsilon_{i},\ \text{where}\ \epsilon_{i}\sim\mathcal{N}(0,0.5),\ \beta\sim\mathrm{Beta}(1.0,\,2.5). \tag{22}\]
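The generative model of Eq. (22) can be reproduced in a few lines of NumPy; note two assumptions on our part: we read \(\mathcal{N}(0,0.5)\) and \(\mathcal{N}(0,5\cdot\mathbb{I}_{p})\) as specifying variances, and the feature dimension `p` (which is varied in the study) is fixed arbitrarily here.

```
import numpy as np

rng = np.random.default_rng(0)
N, p = 5000, 50                                 # p is our assumption; varied in the study
X = rng.normal(0.0, np.sqrt(5.0), size=(N, p))  # X_i ~ N(0, 5 * I_p)
beta = rng.beta(1.0, 2.5, size=p)               # beta ~ Beta(1.0, 2.5)
relu = lambda t: np.maximum(t, 0.0)
Y = np.sqrt(relu(X @ beta)) + rng.normal(0.0, np.sqrt(0.5), size=N)
```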
We consider a neural network with two hidden layers, each containing 64 nodes. The data are randomly split into a training set of size \(n=100\) and an evaluation set of size \(n_{\mathrm{test}}=4900\).
We construct prediction intervals using jackknife+, lazy finetune and DP-Lazy PI, respectively, on the training data with \(\alpha=0.1\). When training the neural networks, the batch size is 10 and the maximum number of epochs is 10. The penalty parameter is taken as \(\lambda=10\) in the lazy-type methods.
As shown in Figure 2, the coverage of our method is closer to the target 80% coverage, and the lazy methods reduce the average compute time significantly compared to jackknife+. As the feature dimension increases, the coverage is still guaranteed; we can also see in Figure 2 (b) that DP-Lazy PI achieves a narrower interval width than lazy finetune (which adds no noise in the full-model training). This might come from interpolation (near-zero training error) in the high-dimensional case, which makes the LOO estimates in lazy finetune stay at the initial base model in the lazy procedure.
### Real Data
**Data sets.** (1) _The BlogFeedback1_ data set (Spiliopoulou et al., 2014, Foygel Barber et al., 2019) contains information on 52397 blog posts with \(p=280\) covariates. The response is the number of comments left on the blog post in the following 24 hours, which we transformed as \(Y=\log(1+\#\text{comments})\). (2) _The Medical Expenditure Panel Survey 2016 data set2_ contains 33005 records on individuals' utilization of medical services such as visits to the doctor, hospital stays, etc., with feature dimension \(p=107\) and relevant features such as age, race/ethnicity, family income, occupation type, etc. Our goal is to predict the health care system utilization of each individual, which is a composite score reflecting the number of visits to a doctor's office, hospital visits, days in nursing home care, etc.
Footnote 1: [https://archive.ics.uci.edu/ml/datasets/BlogFeedback](https://archive.ics.uci.edu/ml/datasets/BlogFeedback)
Footnote 2: [https://meps.ahrq.gov/mepsweb/data_stats/download_data_files_detail.jsp?cboPufNumber=HC-192](https://meps.ahrq.gov/mepsweb/data_stats/download_data_files_detail.jsp?cboPufNumber=HC-192)
**Experiment Settings.** The training size used to construct the prediction interval is set to \(n=100\). We consider a 3-layer neural network with hidden layers \((64,64)\) as the base NN architecture. To train the model parameters in jackknife+, we initialize the NN randomly for each LOO model. The penalty level is set as \(\lambda=10\), and the DP parameters are \(\epsilon=0.01,\ \delta=10^{-3}\). The batch size is 10 and the maximum number of epochs is 10. We repeat the trials 15 times with different random seeds for the train-test split and the random initialization of NN parameters in jackknife+. The figures show the average coverage, computing time and interval width across these random trials.
**Results.** As shown in Figure 3 (a) and (b), on the real data sets DP-Lazy PI decreases the computing time while enjoying a narrower interval width, and achieves the target coverage of 80%. Compared to the data generated in the simulation, real data have a much more complicated structure; the resulting model mis-specification makes the width of the jackknife+ estimator greater, and the implicit regularization of the lazy approach reduces this width. The implementation code can be found in [https://github.com/Vioyue6/DP_LAZY_PI](https://github.com/Vioyue6/DP_LAZY_PI).
Figure 2: The average coverage, average computing time and width of the prediction intervals with \(\alpha=0.1\) over 15 trials, with error bars showing \(\pm\) one standard error. The dotted line in the coverage panel indicates the target coverage level of 80%.
## 6 Discussion and Limitations
In this paper, we describe a new method, Differentially Private Lazy Predictive Inference (DP-Lazy PI), which provides a fast method for distribution-free inference for neural networks. Our method involves using differentially private stochastic gradient descent (DP-SGD) to produce an initial estimate, computing leave-one-out approximations around it using lazy training, and then recombining the leave-one-out estimators to form the predictive interval. Importantly, we are able to provide coverage guarantees that closely match those of the jackknife+ procedure at a fraction of the computational cost, since only a single neural network model needs to be trained.
The two real-data examples also suggest that, when there is significant model misspecification, the implicit regularization provided by lazy training gives our DP-Lazy PI method narrower prediction intervals than jackknife+.
An alternative method we evaluated in our experiments was lazy finetune, which removes the privacy mechanism prior to lazy training. The simulation results show very similar, and at times slightly better, performance compared to our DP-Lazy PI approach; however, a limitation is that we lack theoretical guarantees for this approach. It remains an open question whether such guarantees can be provided: in the zero-training-error regime where the network perfectly interpolates, lazy training does not apply, since we start from a zero-gradient initialization and the linearized models become useless. In higher dimensions, however, the implicit regularization through early stopping of neural network training may be the reason lazy training works well. A further limitation of this work is that our method requires the base algorithm (_e.g._, the neural network estimation) to be stable and satisfy condition (9).
Figure 3: The coverage, average computing time, and width of the prediction intervals on the real data sets with \(\alpha=0.1\). The dotted lines in the left panels of (a) and (b) indicate the target 80% coverage.
## Acknowledgements
R. Willett gratefully acknowledges the support of AFOSR grant FA9550-18-1-0166 and NSF grants DMS-2023109 and DMS-1925101. G. Raskutti acknowledges the support of NIH grant R01 GM131381-03.
|
2305.07482 | Applications of information geometry to spiking neural network behavior | The space of possible behaviors complex biological systems may exhibit is
unimaginably vast, and these systems often appear to be stochastic, whether due
to variable noisy environmental inputs or intrinsically generated chaos. The
brain is a prominent example of a biological system with complex behaviors. The
number of possible patterns of spikes emitted by a local brain circuit is
combinatorially large, though the brain may not make use of all of them.
Understanding which of these possible patterns are actually used by the brain,
and how those sets of patterns change as properties of neural circuitry change
is a major goal in neuroscience. Recently, tools from information geometry have
been used to study embeddings of probabilistic models onto a hierarchy of model
manifolds that encode how model behaviors change as a function of their
parameters, giving a quantitative notion of "distances" between model
behaviors. We apply this method to a network model of excitatory and inhibitory
neural populations to understand how the competition between membrane and
synaptic response timescales shapes the network's information geometry. The
hyperbolic embedding allows us to identify the statistical parameters to which
the model behavior is most sensitive, and demonstrate how the ranking of these
coordinates changes with the balance of excitation and inhibition in the
network. | Jacob T. Crosser, Braden A. W. Brinkman | 2023-05-12T13:50:41Z | http://arxiv.org/abs/2305.07482v1 | # Applications of information geometry to spiking neural network behavior
###### Abstract
The space of possible behaviors complex biological systems may exhibit is unimaginably vast, and these systems often appear to be stochastic, whether due to variable noisy environmental inputs or intrinsically generated chaos. The brain is a prominent example of a biological system with complex behaviors. The number of possible patterns of spikes emitted by a local brain circuit is combinatorially large, though the brain may not make use of all of them. Understanding which of these possible patterns are actually used by the brain, and how those sets of patterns change as properties of neural circuitry change is a major goal in neuroscience. Recently, tools from information geometry have been used to study embeddings of probabilistic models onto a hierarchy of model manifolds that encode how model behaviors change as a function of their parameters, giving a quantitative notion of "distances" between model behaviors. We apply this method to a network model of excitatory and inhibitory neural populations to understand how the competition between membrane and synaptic response timescales shapes the network's information geometry. The hyperbolic embedding allows us to identify the statistical parameters to which the model behavior is most sensitive, and demonstrate how the ranking of these coordinates changes with the balance of excitation and inhibition in the network.
## I Introduction
A major obstacle to understanding the computational underpinnings of the brain is the high dimensionality of its inputs--environmental stimuli such as light and sound--and its outputs--the activity of neurons and the organismal behaviors they enact [1]. The behavioral space of a neural circuit with \(N\) neurons is unmanageably large: the number of possible spike train patterns such a network can in principle produce over a trial of time length \(T\) divided into time bins of size \(\Delta t\) is of the order \(\sim\)\(2^{NT/\Delta t}\), assuming at most one spike per time bin. As \(\Delta t\to 0\), this output space becomes infinite-dimensional. However, the behavior of a neural population does not occupy this entire space, as activity is correlated across time and neurons, and the actual behavior of any given neural circuit constitutes just a subset of all possible observations. Perhaps surprisingly, analysis of experimental data has repeatedly found that under many conditions collective neural activity is low dimensional, often comprising less than \(\sim\)\(10^{2}\) dimensions of this infinite space [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13].
Theoretical and computational work in neuroscience has largely focused on investigating the role that synaptic connections between neurons play in shaping the possible activity patterns of a network [14; 15; 16; 17; 18; 19; 20], which can be represented by manifolds (hyper-surfaces) in the behavioral output space of a neural circuit. These manifolds are complicated by the fact that many distinct neural circuits give rise to essentially identical patterns of activity [21; 22; 23; 24], meaning many different configurations are mapped to nearby points on these manifolds. Understanding how network activity and function changes as network properties or states change is a fundamental problem in neuroscience, and learning how to manipulate this activity most efficiently could lead to new and more effective treatments of neurological disorders.
Taming the possible behavioral repertoires of neural circuits by brute-force simulation of network activity is computationally expensive and impractical for circuits larger than a few neurons. The tools of information geometry offer a possible means of representing network activity in an abstract way, one that is easier to apply to larger networks and that helps us understand how to most effectively move a network through its parameter space to achieve desired output behaviors [25; 26; 27; 28; 29; 30; 31; 32]. Note that this use of information geometry is a means of understanding the structure of complex models themselves, in contrast to applications of information theory in neuroscience as a modeling tool for understanding sensory coding [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45].
Many models in complex biology generate a hierarchy of "hyperribbons" in their behavioral output space. These hyperribbons are manifolds with a few long directions, representing "stiff" parameter combinations that separate disparate activity states, and many thin directions, representing "sloppy" combinations that describe networks with very similar behavior [46; 47; 48; 49; 50; 51; 52; 53]. These model manifolds come equipped with a natural metric that quantifies differences in behavior as distances. By identifying these model manifolds, we can determine the combinations of parameters that predict the bulk of the behavioral space of the network. This opens a path toward a better understanding of how to manipulate network properties to tune a circuit between different regimes of behavior.
In this work, we apply tools of information geometry to models of neural circuitry, and investigate how the balance of single-neuron properties and the properties of the synaptic connections between neurons shapes the hierarchy of possible behaviors of the networks. Specifically, we study how changing the membrane and synaptic time constants of the networks shapes the manifold hierarchy. We also investigate how adjusting the balance of excitation and inhibition in the network changes the rankings of the different hierarchical modes of the behavioral space. Previous work applying ideas from information geometry to neuroscience has primarily studied abstracted representations of spiking networks [25; 29], networks of rate models [30; 31; 28; 32], or neural field and pool models [26; 27]. By contrast, the work presented in this paper studies a class of leaky integrate-and-fire neurons--a commonly used modeling framework--with explicit consideration of some biophysical properties of individual neurons, to make closer contact with the biological reality of neural systems. This is done by leveraging the specific properties of recently developed tools in information geometry [52; 53]. We organize the paper as follows: in Sec. II we introduce the class of stochastic spiking models we will be working with and the reduction to a population-based formalism. Then, in Sec. III, we give a self-contained explanation of the "isKL" embedding method introduced by [53], and how it applies to our population model. We detail the results of the application of the isKL method in Sec. IV, and finally discuss the interpretation and significance of our results and methodology in Sec. V.
## II Models
### Nonlinear Hawkes process
To model the spiking dynamics of individual neurons, we consider a nonlinear Hawkes process [54; 17]
\[\frac{dV_{i}}{dt}=-\tau_{m}^{-1}(V_{i}-\varepsilon_{i})+I_{i}+\tau_{s}^{-1}\left(\mu_{\text{ext}}-J_{\text{self}}\dot{n}_{i}(t)+\sum_{j=1}^{n}w_{ij}\dot{n}_{j}(t)\right) \tag{1a}\]
\[\dot{n}_{i}(t)dt\sim\text{Poiss}[\phi(V_{i}(t))dt], \tag{1b}\]
where \(V_{i}\) is the membrane potential of neuron \(i\), \(\varepsilon_{i}\) is the leak reversal potential, \(w_{ij}\) is the strength of a synaptic connection from neuron \(j\) to neuron \(i\), and \(-J_{\text{self}}\) is an inhibitory self-coupling that implements post-spike refractory dynamics. The two currents \(\mu_{\text{ext}}\) and \(I_{i}\) represent an average current received from an external network and an experimentally injected current that differs by neuron, respectively. The process \(\dot{n}_{i}(t)\) is the spike train of neuron \(i\), and \(\phi(V_{i}(t))dt\) is the instantaneous firing rate nonlinearity that determines a Poisson event rate conditioned on the membrane potential of a given neuron. For the specific models studied here, \(\phi(x)=\frac{1}{2}(x+\sqrt{x^{2}+1/2})\). Finally, \(\tau_{m}\) and \(\tau_{s}\) are the modulated parameters corresponding to the membrane and synaptic timescales, respectively. Eqn. 1a of this model assigns leaky integration dynamics to the membrane potential of each individual neuron, while Eqn. 1b assigns conditionally Poisson spiking dynamics to each neuron. Taken together, this model can be thought of as a soft-threshold leaky integrate-and-fire system.
Foreshadowing the coming analysis, we note that analytically calculating the statistical properties of the models in Eqn. 1 is generally intractable, and to make headway we will implement a Gaussian-process approximation of the network dynamics around the mean-field activity.
We can obtain a mean-field approximation of the steady-state solution for the membrane potential dynamics in Eqn. 1a by marginalizing out the spiking dynamics and assuming the distribution is sharply peaked around the most probable path of \(V_{i}(t)\). Assuming the network achieves a steady state at long times, this procedure gives us a set of transcendental equations that can be solved numerically:
\[V_{i}^{\text{mf}}=\varepsilon_{i}+\tau_{m}I_{i}+\frac{\tau_{m}}{\tau_{s}}\left(\mu_{\text{ext}}-J_{\text{self}}\phi(V_{i}^{\text{mf}})+\sum_{j}w_{ij}\phi(V_{j}^{\text{mf}})\right), \tag{2}\]
where the solutions \(V_{i}^{\text{mf}}\) of these equations are the mean-field predictions of the steady-state membrane potentials, and \(\phi(V_{i}^{\text{mf}})\) are the corresponding mean-field predictions of the firing rates. We find the solutions to these transcendental equations using a forward-Euler integration scheme.
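As an illustration, a minimal forward-Euler relaxation toward the fixed point of Eqn. 2 might look like the following sketch; the step size, tolerance, and iteration cap are illustrative choices, not necessarily those used in our experiments.

```python
import numpy as np

def phi(x):
    """Firing-rate nonlinearity used in the spiking model."""
    return 0.5 * (x + np.sqrt(x**2 + 0.5))

def mean_field_potentials(w, I, eps, mu_ext, J_self, tau_m, tau_s,
                          dt=0.1, tol=1e-10, max_steps=100_000):
    """Relax toward the fixed point of the transcendental system (Eqn. 2)."""
    V = np.zeros(w.shape[0])
    for _ in range(max_steps):
        # Right-hand side of Eqn. 2 evaluated at the current iterate
        rhs = eps + tau_m * I + (tau_m / tau_s) * (
            mu_ext - J_self * phi(V) + w @ phi(V))
        if np.max(np.abs(rhs - V)) < tol:
            return V
        V += dt * (rhs - V)
    raise RuntimeError("mean-field iteration did not converge")
```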
Following the prescription of Ref. [55; 56], the time-dependent distribution of model behaviors described in Eqn. 1 can be written in the form of a path integral,
\[P[\mathbf{V}(t),\dot{\mathbf{n}}(t)]=\int\mathfrak{D}[\tilde{\mathbf{V}},\tilde{\mathbf{n}}]e^{-S[\tilde{\mathbf{V}},\mathbf{V},\tilde{\mathbf{n}},\dot{\mathbf{n}}]}, \tag{3}\]
with an action \(S[\tilde{\mathbf{V}},\mathbf{V},\tilde{\mathbf{n}},\dot{\mathbf{n}}]\) given by Eqn. 4 (see Appendix A). The Gaussian process approximation of the membrane dynamics in Eqn. 1a is obtained by marginalizing out the spiking dynamics from the action in Eqn. 4 and taking a saddle-point approximation of the action around the mean-field solution in Eqn. 2 (see Appendices A & B for details). The resulting action corresponds to the Gaussian stochastic process given by [17]
\[d\mathbf{V}=\mathbf{A}\left(\mathbf{V}^{\text{mf}}-\mathbf{V}\right)dt+\mathbf{\Sigma}\,d\mathbf{W}_{t} \tag{5a}\]
\[A_{ij}=\delta_{ij}\left(\tau_{m}^{-1}+\tau_{s}^{-1}J_{\text{self}}\phi^{\prime}(V_{j}^{\text{mf}})\right)-\tau_{s}^{-1}w_{ij}\phi^{\prime}(V_{j}^{\text{mf}}) \tag{5b}\]
\[\left(\mathbf{\Sigma}\mathbf{\Sigma}^{T}\right)_{ij}=\tau_{s}^{-2}\sum_{k}\left(-\delta_{ik}J_{\text{self}}+w_{ik}\right)\left(-\delta_{jk}J_{\text{self}}+w_{jk}\right)\phi(V_{k}^{\text{mf}}) \tag{5c}\]
where \(d\mathbf{W}_{t}\) is a standard Wiener process and we use the Ito convention. We note that Eqn. 5a is an Ornstein-Uhlenbeck (OU) process, albeit one in which the drift and diffusion matrices are dependent on the mean-field values of the membrane potential.
In principle, the network modeled in Eqn. 1 and approximated in Eqn. 5 could be of arbitrary size. To make our information geometric analysis tractable, however, we reduce the model to a three-population model, comprising excitatory and inhibitory populations and a single neuron targeted with an injected current; this is depicted diagrammatically in Fig. 1. We start by considering the connectivity matrix to be random, with each entry a Bernoulli variable with probability \(p\), scaled by a connection-type-dependent value \(w_{IJ}\). To produce the more tractable reduced model, we take a population-averaging approach to the approximated process in Eqn. 5. We now use an uppercase subscript to denote a population-averaged variable. For example
\[V_{I}\equiv\frac{1}{N_{I}}\sum_{i\in I}V_{i}(t),\]
where we will use uppercase indices \(I,J,K\in\{0,1,2\}\) to denote the different populations, with \(I=0\) the single test neuron, \(I=1\) the excitatory population, and \(I=2\) the inhibitory population. The dynamics of the population-averaged membrane potentials under the Gaussian approximation now follow a lower-dimensional version of Eqn. 5a with drift and diffusion matrices given by
\[A_{IJ}=\delta_{IJ}\left(\tau_{m}^{-1}+\tau_{s}^{-1}J_{\text{self}}\phi^{\prime}(V_{I}^{\text{mf}})\right)-\tau_{s}^{-1}pw_{IJ}N_{J}\phi^{\prime}(V_{J}^{\text{mf}}) \tag{6a}\]
Figure 1: **Network model** (A) A graphical representation of the network architecture being studied. (B) An example raster plot generated from an extended network of spiking neurons modeled by Eqn. 1.
\[\left(\mathbf{\Sigma}\mathbf{\Sigma}^{T}\right)_{IJ}=\tau_{s}^{-2}\sum_{K=0,1,2}\left(-\delta_{IK}\frac{J_{\text{self}}}{N_{K}}+pw_{IK}\right)\left(-\delta_{JK}\frac{J_{\text{self}}}{N_{K}}+pw_{JK}\right)N_{K}\phi(V_{K}^{\text{mf}})\approx\tau_{s}^{-2}\sum_{K=1,2}p^{2}w_{IK}w_{JK}N_{K}\phi(V_{K}^{\text{mf}}) \tag{6b}\]
The approximation in the last line above comes from the fact that \(N_{1},N_{2}\gg 1\). The population-averaged mean-field equations are now
\[V_{I}^{\text{mf}}=\varepsilon_{I}+\tau_{m}I_{I}+\frac{\tau_{m}}{\tau_{s}}\mu_{\text{ext}}+\frac{\tau_{m}}{\tau_{s}}\left(-J_{\text{self}}\phi(V_{I}^{\text{mf}})+\sum_{J=0,1,2}pw_{IJ}N_{J}\phi(V_{J}^{\text{mf}})\right) \tag{6c}\]
We formally derive the Gaussian-process approximation of the full-network (Eqn. 5) and the population-averaged approximation (Eqn. 6) in Appendix A. We also note that the statistics of the model in Eqn. 6 are equivalent to those derived by first taking a population average of the membrane potential dynamics and then applying the Gaussian approximation framework. This second derivation is provided in Appendix B.
The spiking model we study centers around a balanced network, specifically a network that is not finely tuned. This notion of fine-tuning arises from a standard derivation of balance equations for the model system (see Appendix D). In short, we can look at the average external input \(\kappa_{I}\) into population \(I\). For our model, we can approximate \(\kappa_{I}\) to leading order as:
\[\tau_{s}^{-1}\kappa_{I}\approx\sqrt{N}\left(\frac{1}{\sqrt{N}}\left(I_{I}+\tau_{s}^{-1}\mu_{\text{ext}}\right)+\tau_{s}^{-1}\left\{pw_{I1}\frac{N_{1}}{\sqrt{N}}\phi(V_{1})+pw_{I2}\frac{N_{2}}{\sqrt{N}}\phi(V_{2})\right\}\right)\]
where \(V_{I}\) is the population-averaged membrane potential for population \(I\). For the model to be in a balanced state, the variance of the synaptic input should be \(\mathcal{O}(N^{0})\), which in turn implies that the synaptic weights should scale as \(w_{IJ}\sim 1/\sqrt{N}\). Additionally, we assume that \(N_{I}\propto N\) and \(\langle I_{I}\rangle,\ \mu_{\text{ext}}\propto\sqrt{N}\). The balanced state of the model also requires that all \(\kappa_{I}\) be \(\mathcal{O}(1)\). For this to be true as \(N\to\infty\), the terms in the parentheses must vanish. This gives us a linear system that uniquely defines \((\phi(V_{1}),\phi(V_{2}))\):
\[-\begin{bmatrix}I_{1}+\tau_{s}^{-1}\mu_{\rm ext}\\ I_{2}+\tau_{s}^{-1}\mu_{\rm ext}\end{bmatrix}=\frac{1}{\tau_{s}}\begin{bmatrix} pw_{11}N_{1}&pw_{12}N_{2}\\ pw_{21}N_{1}&pw_{22}N_{2}\end{bmatrix}\begin{bmatrix}\phi(V_{1})\\ \phi(V_{2})\end{bmatrix} \tag{7}\]
From this set of equations, we identify two cases. First, if the matrix on the right-hand side of Eqn. 7 is singular and neither of its columns is trivially the zero vector, the columns must be scalar multiples of each other. We refer to this as a "fine-tuned" spiking model. If the left-hand side of Eqn. 7 is also a multiple of the columns, the system admits an infinite set of solutions \((\phi(V_{1}),\phi(V_{2}))\); otherwise, it admits no solution. Such a network is thus finely tuned to specific inputs. In contrast, we have "un-tuned" spiking models, in which the matrix on the right-hand side of Eqn. 7 is invertible and the system admits a unique solution \((\phi(V_{1}),\phi(V_{2}))\). This in effect places constraints on the values of \(\{w_{IJ}\}\), which we refer to as the balance equations, Eqs. D2 & D3 (see Appendix D for a derivation). Moving forward, we consider only spiking models derived from a balanced, un-tuned network. We also introduce a linear non-spiking model that will serve as a baseline comparison.
### Linear non-spiking model
Although the Gaussian process approximation of the spiking network has a Gaussian steady-state distribution of the membrane potentials, the parameters of this distribution vary nonlinearly with the self-consistent mean-field solutions. To demonstrate that the behaviors we observe are consequences of the mean-field treatment of the spiking network, and not just generic behavior of Gaussian processes, we also construct a simpler model of networked, linear non-spiking (or "graded potential") neurons. We assume each neuron receives a large number of synaptic inputs that sum to be approximately Gaussian, with non-zero mean \(\mu_{\text{ext}}\), creating a stochastic system with dynamics described by:
\[\frac{dV_{i}}{dt}=-\tau_{m}^{-1}(V_{i}-\varepsilon_{I})+I_{i}+\tau_{s}^{-1}\mu_{\text{ext}}-\tau_{s}^{-1}J_{\text{self}}\phi(V_{i})+\tau_{s}^{-1}\sum_{j}w_{ij}\phi(V_{j})+\xi_{i}(t) \tag{8}\]
Here, the transfer function \(\phi(\cdot)\) is simply the identity function (i.e. \(\phi(x)=x\)). The processes \(\xi_{i}(t)\) are zero-mean Gaussian noise synaptic input from neurons external to the network being examined, and thus they scale with \(\tau_{s}^{-1}\). We define the covariance of the noise processes \(\{\xi_{i}(t)\}\) as follows.
\[\langle\xi_{i}(t)\xi_{j}(t^{\prime})\rangle=\tau_{s}^{-2}\delta_{ij}\mu_{\rm ext }\delta(t-t^{\prime})\]
After population-averaging, the non-spiking model becomes another OU process:
\[d\mathbf{V}=\mathbf{A}\left(\mathbf{A}^{-1}\left(\tau_{s}^{-1}\mu_{\text{ext}}+\tau_{m}^{-1}\varepsilon_{I}+\mathbf{I}\right)-\mathbf{V}\right)dt+\mathbf{\Sigma}\,d\mathbf{W}_{t}=\mathbf{A}\left(\mu-\mathbf{V}\right)dt+\mathbf{\Sigma}\,d\mathbf{W}_{t}. \tag{9}\]
The drift and diffusion matrices are defined as follows
\[A_{IJ}=\delta_{IJ}\tau_{m}^{-1}-\tau_{s}^{-1}w_{IJ}^{*},\qquad w_{IJ}^{*}=-\delta_{IJ}J_{\text{self}}+pw_{IJ}^{\text{mod}}N_{J},\qquad\left(\mathbf{\Sigma}\mathbf{\Sigma}^{T}\right)_{IJ}=\tau_{s}^{-2}\delta_{IJ}\frac{\mu_{\text{ext}}}{N_{I}}\]
Here and in the following sections, \(\mathbf{w}^{*}\) denotes the effective connectivity matrix for the linear non-spiking models. The values of \(\mathbf{w}^{\rm mod}\) are modulated depending on
the desired excitation-inhibition conditions, which will be discussed in Sec. II.4.
The linear form of the population-averaged non-spiking model permits more analytic study than the corresponding spiking models. Ornstein-Uhlenbeck processes like those in Eqns. 6 and 9 admit a Gaussian steady-state distribution if all eigenvalues of the drift matrix have positive real part [57]. From the form of the drift matrix for the linear model (Eqn. 9), there is a correspondence between the eigenvalues of the drift matrix \(\mathbf{A}\) and those of the connectivity matrix \(\mathbf{w}^{*}\):
\[\lambda_{i,\mathbf{A}}=\tau_{m}^{-1}-\tau_{s}^{-1}\lambda_{i,\mathbf{w}^{*}}.\]
From the stationarity condition on the eigenvalues of \(\mathbf{A}\) and this correspondence between eigenvalues of \(\mathbf{A}\) and \(\mathbf{w}^{*}\), we can derive a stability boundary for the \((\tau_{m}^{-1},\tau_{s}^{-1})\) inverse timescale-space
\[\tau_{m}^{-1}>\tau_{s}^{-1}\lambda_{\mathbf{w}^{*}},\ \forall\ \lambda_{ \mathbf{w}^{*}}. \tag{10}\]
The loss of stability observed in OU processes often corresponds to a non-stationary regime in which the random variables may grow without bound. As the firing rate nonlinearity \(\phi(x)\) used in the spiking model is quasi-linear in the \(x>0\) regime, we expect the stability of the spiking models to be similar to the non-spiking models when they have the same E/I-dependent connectivity given by \(w_{IJ}\).
### Stationary distributions
As mentioned above, the stationary distributions admitted by Ornstein-Uhlenbeck processes are Gaussian when they exist [57]. Consider a general \(N\)-dimensional OU process:
\[d\mathbf{X}=\mathbf{A}\left(\mu-\mathbf{X}\right)dt+\mathbf{\Sigma}d\mathbf{W }_{t}\]
The stationary distribution, when it exists, is described by the multivariate normal probability density [57]
\[p\left(\mathbf{X}\right)=\frac{1}{(2\pi)^{N/2}\sqrt{\det(\mathbf{C})}}e^{- \frac{1}{2}(\mathbf{X}-\mu)^{T}\mathbf{C}^{-1}(\mathbf{X}-\mu)},\]
where the stationary covariance \(\mathbf{C}\) is given by the solution to the matrix equation [57]
\[\mathbf{\Sigma}\mathbf{\Sigma}^{T}=\mathbf{A}\mathbf{C}+\mathbf{C}\mathbf{A}^ {T}.\]
In practice, the stationary covariance matrix can be found by linearizing the matrix equation and solving the resulting linear system numerically.
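For example, because \(\mathbf{\Sigma}\mathbf{\Sigma}^{T}=\mathbf{A}\mathbf{C}+\mathbf{C}\mathbf{A}^{T}\) is a continuous-time Lyapunov equation, an off-the-shelf solver applies directly; a minimal sketch (the helper name is ours):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def stationary_covariance(A, Sigma):
    """Solve A C + C A^T = Sigma Sigma^T for the stationary covariance C."""
    # The stationary distribution only exists when all eigenvalues of A
    # have positive real part
    assert np.all(np.linalg.eigvals(A).real > 0)
    return solve_continuous_lyapunov(A, Sigma @ Sigma.T)
```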
### Network architectures
Now, we turn back to our network models. We consider a population of excitatory and inhibitory neurons in which a single excitatory target neuron is injected with an external driving current. The full network contains \(N=1000\) sparsely connected neurons. We condensed the full network model into a representative 3-node network by population-averaging, as depicted in Fig. 1A and described in Eqn. 6, representing the excitatory target neuron, the excitatory population, and the inhibitory population. Table 1 contains descriptions and numerical values for the parameters used in the present study.
In addition, we would like to adjust the relative recurrent excitation and inhibition in the networks. To accomplish this, the base connection weights given in Table 1 are scaled by a ratio \(r>0\) depending on the desired activity regime:
\[\mathbf{w}_{Xe}^{\mathrm{mod}}=r_{e}(r)w_{Xe,\mathrm{base}}=\begin{cases}rw_ {Xe,\mathrm{base}}&\text{if }r\geq 1\\ w_{Xe,\mathrm{base}}&\text{otherwise}\end{cases} \tag{11a}\] \[\mathbf{w}_{Xi}^{\mathrm{mod}}=r_{i}(r)w_{Xi,\mathrm{base}}= \begin{cases}w_{Xi,\mathrm{base}}&\text{if }r\geq 1\\ \frac{1}{r}w_{Xi,\mathrm{base}}&\text{otherwise}\end{cases}. \tag{11b}\]
The ratio serves to boost the recurrent excitatory weights in the excitatory regime (\(r>1\)) and the recurrent inhibitory weights in the inhibitory regime (\(r<1\)), through the functions \(r_{e}(r)\) and \(r_{i}(r)\), respectively.
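A direct transcription of the scaling functions in Eqns. 11a and 11b, as a minimal sketch:

```python
def r_e(r):
    """Excitatory scaling (Eqn. 11a): boost recurrent excitation when r >= 1."""
    return r if r >= 1 else 1.0

def r_i(r):
    """Inhibitory scaling (Eqn. 11b): boost recurrent inhibition when r < 1."""
    return 1.0 if r >= 1 else 1.0 / r

# Example: r = 0.5 leaves excitatory weights fixed and doubles inhibitory ones
assert r_e(0.5) == 1.0 and r_i(0.5) == 2.0
```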
The connection matrices \(\mathbf{w}\) and \(\mathbf{w}^{*}\) of the population-averaged spiking and non-spiking models, respectively, are now constructed from the full-network parameters and the scaling of excitation and inhibition. All matrices \(\mathbf{w}\) and \(\mathbf{w}^{*}\) use the same indexing, with \(I=0\) denoting the target neuron "population," \(I=1\) denoting the remaining excitatory neurons, and \(I=2\) denoting all inhibitory neurons. The connection strengths used in the linear non-spiking model in Eqn. 9 are then given by
\[\mathbf{w}^{*}= -\begin{bmatrix}J_{\mathrm{self}}&0&0\\ 0&J_{\mathrm{self}}&0\\ 0&0&J_{\mathrm{self}}\end{bmatrix}+\frac{1}{\sqrt{pN}}\begin{bmatrix}pw_{ee}&p \left(N_{e}-1\right)w_{ee}&pN_{i}w_{ei}\\ pw_{ee}&p\left(N_{e}-1\right)w_{ee}&pN_{i}w_{ei}\\ pw_{ie}&p\left(N_{e}-1\right)w_{ie}&pN_{i}w_{ii}\end{bmatrix}\] \[=-\begin{bmatrix}J_{\mathrm{self}}&0&0\\ 0&J_{\mathrm{self}}&0\\ 0&0&J_{\mathrm{self}}\end{bmatrix}+\frac{1}{\sqrt{pN}}\begin{bmatrix}pr_{e}(r )w_{ee,\mathrm{base}}&pr_{e}(r)\left(N_{e}-1\right)w_{ee,\mathrm{base}}&pr_{i} (r)N_{i}w_{ei,\mathrm{base}}\\ pr_{e}(r)w_{ee,\mathrm{base}}&pr_{e}(r)\left(N_{e}-1\right)w_{ee,\mathrm{base }}&pr_{i}(r)N_{i}w_{ei,\mathrm{base}}\\ pr_{e}(r)w_{ie,\mathrm{base}}&pr_{e}(r)\left(N_{e}-1\right)w_{ie,\mathrm{ base}}&pr_{i}(r)N_{i}w_{ii,\mathrm{base}}\end{bmatrix}. \tag{12}\]
The \(1/\sqrt{pN}\) scaling of the connection weights arises from the balance conditions mentioned at the end of Sec. II.1 and derived in Appendix D. The connection matrix used by the spiking models, described generally in Eqn. 6, is given by
\[\mathbf{w} =\frac{1}{\sqrt{pN}}\mathbf{w}^{\mathrm{mod}}\] \[=\frac{1}{\sqrt{pN}}\begin{bmatrix}r_{e}(r)w_{ee,\mathrm{base}}&r _{e}(r)w_{ee,\mathrm{base}}&r_{i}(r)w_{ei,\mathrm{base}}\\ r_{e}(r)w_{ee,\mathrm{base}}&r_{e}(r)w_{ee,\mathrm{base}}&r_{i}(r)w_{ei, \mathrm{base}}\\ r_{e}(r)w_{ie,\mathrm{base}}&r_{e}(r)w_{ie,\mathrm{base}}&r_{i}(r)w_{ii, \mathrm{base}}\end{bmatrix}. \tag{13}\]
Finally, we would like a measure of the balance of excitation and inhibition ("E/I") within a class of models. As each model type corresponds to many particular models with different values of the inverse timescales (\(\tau_{m}^{-1},\tau_{s}^{-1}\)), we require a proxy measure for the E/I ratio to describe the whole class. In line with the method for adjusting the relative strength of recurrent excitation and inhibition introduced above, we assign a ratio of connection weights into the bulk excitatory population for a given model and a given modulation \(r\). For the non-spiking models, we give the log-ratio \(R\) of these weights
\[R=\log_{10}\left|\frac{\mathbf{w}_{2,1}^{*}+\mathbf{w}_{2,2}^{*}}{\mathbf{w}_{ 2,3}^{*}}\right|\]
To make an accurate comparison to the non-spiking models, the E/I values for the spiking models are reported using this same measure (for a given value of modulation parameter \(r\)).
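In code, with the 0-based population indexing used above (row 1 is the bulk excitatory population; the 1-based indices \(w^{*}_{2,1},w^{*}_{2,2},w^{*}_{2,3}\) in the formula above correspond to columns 0 and 1 for excitatory inputs and column 2 for inhibitory input), \(R\) might be computed as in this sketch:

```python
import numpy as np

def ei_log_ratio(w_star):
    """E/I log-ratio R of weights into the bulk excitatory population."""
    exc = w_star[1, 0] + w_star[1, 1]   # input from target + excitatory pool
    inh = w_star[1, 2]                  # input from inhibitory pool
    return np.log10(abs(exc / inh))
```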
We note here that the same balanced-network calculations (see Appendix D) that gave rise to the definitions of "fine-tuned" and "un-tuned" formally define a notion of balance. A balanced spiking network based on the model architecture used here must satisfy constraints on the weights of \(\mathbf{w}\) (Eqn. 13), either Eqns. D2 or Eqns. D3. The base connection weights for the unadjusted network--i.e. \(r=1\) in Eqns. 11--were chosen to meet these balance criteria, and the functions \(r_{e}(r)\) and \(r_{i}(r)\) serve to tilt the excitation-inhibition balance with respect to this measure.
\begin{table}
\begin{tabular}{|l||l|l|} \hline Parameter & Description & Value \\ \hline \hline \(N\) & Total number of neurons & \(1000\) \\ \hline \(N_{e}\) & Number of excitatory neurons & \(0.8N\) \\ \hline \(N_{i}\) & Number of inhibitory neurons & \(0.2N\) \\ \hline \(p\) & Probability of a directional synaptic connection \(w_{ij}\) between any two neurons & \(0.1\) \\ \hline \(-J_{\mathrm{self}}\) & Self connection for a neuron of type \(E/I\), designed to capture post-spike refractory dynamics & \(-5\) \\ \hline \(\varepsilon_{I}\) & Leak reversal potential for neuron \(i\) & \(0\) \\ \hline \(I_{I}\) & Injected current impinging on neuron population \(I\) & \(\begin{cases}0.02&\text{if target}\\ 0&\text{otherwise}\end{cases}\) \\ \hline \(\mu_{\mathrm{ext}}\) & Mean input from network-external neurons & \(0.1\) \\ \hline \(w_{ee,\mathrm{base}}\) & Total expected synaptic input weight from exc. neurons onto exc. neurons & \(285\) \\ \hline \(w_{ie,\mathrm{base}}\) & Total expected synaptic input weight from exc. neurons onto inh. neurons & \(300\) \\ \hline \(w_{ei,\mathrm{base}}\) & Total expected synaptic input weight from inh. neurons onto exc. neurons & \(-902.5\) \\ \hline \(w_{ii,\mathrm{base}}\) & Total expected synaptic input weight from inh. neurons onto inh. neurons & \(-950\) \\ \hline \(\phi(x)\) & Firing rate transfer function & \(\begin{cases}x&\text{if non-spiking}\\ \frac{1}{2}\left(x+\sqrt{x^{2}+\frac{1}{2}}\right)&\text{if spiking}\end{cases}\) \\ \hline \(\tau_{m}\) & Membrane timescale & variable \\ \hline \(\tau_{s}\) & Synaptic timescale & variable \\ \hline \end{tabular}
\end{table}
Table 1: **Model parameters** Descriptions and numerical values for the parameters for the non-spiking and spiking model types.
### Timescale sampling
To embed and visualize the model manifolds of interest, we must sample points on the manifold characterized by different values of the two modulated parameters. We do this by sampling a portion of the inverse-timescale parameter space that satisfies the stability condition given by Eq. (10) and where both inverse-timescales are positive. We apply a curvilinear grid to this region, uniformly sampling the radial and angular components. The radial distance components \(d\) of the grid are taken over a fixed range:
\[d\in[0.0025,\,0.03]\text{ ms}^{-1}\]
To apply both the stability boundary and positivity constraints, the lower bound of the angular component \(\alpha\) of the sample grid is set to a fixed value while the upper bound is set either by the stability boundary described by Eqn. 10 or to a fixed value, whichever is more stringent:
\[\tan(\alpha)\in\left[0.1,\,\min\left(\frac{1}{\max\{\lambda_{\mathbf{w}^{*}} \}},500\right)\right]\]
The conditions for this maximal sampling are summarized in Table 2. It is important to note that the stability boundary is determined by the eigenvalues of the connectivity matrix \(\mathbf{w}^{*}\); thus both the stability boundary and the sampling region change as the induced E/I balance is adjusted, through its effect on \(\mathbf{w}^{*}\). The spiking models use the connection matrix from the equivalent non-spiking model to set the sampling range. The maximal sampling scheme is depicted diagrammatically in Fig. 2.
After the maximal sampling of parameter space for each model type for each E/I condition, sample points from the inverse-timescale space are subject to further exclusionary criteria. For both the spiking- and non-spiking-type models, sample points are excluded if they cause either the drift matrix \(\mathbf{A}\) or the covariance matrix \(\mathbf{C}\) to become singular. The singularities in these matrices have been observed to occur numerically near the theoretical stability boundary (Eqn. 10). In addition, sample points for the spiking models are excluded if the Euler integration used to find the mean-field solutions to Eqn. 6c does not converge. The integration is determined to be numerically non-convergent if the rate of change of the system either exceeds a predetermined value during integration or does not hit a convergence threshold before reaching the maximum number of steps.
All model manifolds studied here were generated from between 210,000 and 211,000 sampled parameter pairs.
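A sketch of this sampling scheme is given below. The convention that the angle \(\alpha\) is measured from the \(\tau_{m}^{-1}\) axis (so that \(\tan\alpha=\tau_{s}^{-1}/\tau_{m}^{-1}\), matching the boundary in Eqn. 10) is our reading of the construction, and the uniform grid in \(\alpha\) is illustrative.

```python
import numpy as np

def sample_timescale_grid(lam_w_max, n_d=301, n_alpha=701):
    """Curvilinear grid over the (tau_m^-1, tau_s^-1) plane (Table 2).

    lam_w_max: largest eigenvalue of the connectivity matrix w*, which sets
    the stability boundary tau_m^-1 > tau_s^-1 * lam_w_max (Eqn. 10).
    """
    d = np.linspace(0.0025, 0.03, n_d)   # radial distances (ms^-1)
    # Stability caps tan(alpha) at 1/lam_w_max; for lam_w_max <= 0 the
    # boundary never binds and the fixed cap of 500 applies
    tan_hi = min(1.0 / lam_w_max, 500.0) if lam_w_max > 0 else 500.0
    alpha = np.linspace(np.arctan(0.1), np.arctan(tan_hi), n_alpha)
    D, A = np.meshgrid(d, alpha)
    inv_tau_m = (D * np.cos(A)).ravel()
    inv_tau_s = (D * np.sin(A)).ravel()
    return inv_tau_m, inv_tau_s
```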
## III isKL embedding
In this section, we recapitulate the methods developed by Teoh and colleagues [53]. This framework revolves around using the symmetric Kullback-Leibler divergence \(D_{sKL}\) as a measure of separation between probabilistic models of the same form with different parameters:
\[D_{sKL}(\theta,\theta^{\prime})=D_{KL}(\theta:\theta^{\prime})+D_{KL}(\theta^{\prime}:\theta)=\mathbb{E}_{\theta}\left[\ln\frac{p(x|\theta)}{p(x|\theta^{\prime})}\right]-\mathbb{E}_{\theta^{\prime}}\left[\ln\frac{p(x|\theta)}{p(x|\theta^{\prime})}\right].\]
Teoh _et al._ apply this measure to exponential family models, which have the general form
\[p(x|\theta)=\exp\left[\sum_{i=1}^{n}t_{i}(x)\eta_{i}\left(\theta\right)+k(x)- A\left(\eta\left(\theta\right)\right)\right],\]
where \(\{\eta_{i}\left(\theta\right)\}\) are the \(n\) natural parameters of the model and \(\{t_{i}(x)\}\) are the corresponding sufficient statistics.
Figure 2: **Maximal sampling of the inverse timescale plane:** We sample pairs of inverse-timescale values from the depicted region of the \(\tau_{m}^{-1}\)-\(\tau_{s}^{-1}\) plane. Sampled points are distributed evenly on a curvilinear grid between a predefined lower bound and the stability boundary for the specific connection matrix being used (A). If sampling to the stability boundary would produce samples with negative timescales, the space is instead sampled up to a predefined value in the angular direction (B).
The \(D_{sKL}\) for exponential family models can be analytically decomposed into a _finite_ number of component functions
\[D_{sKL}[\theta,\theta^{\prime}]=\sum_{i=1}^{n}\left\{\left[\mathcal{T}_{i}^{+}( \theta)-\mathcal{T}_{i}^{+}(\theta^{\prime})\right]^{2}-\left[\mathcal{T}_{i}^ {-}(\theta)-\mathcal{T}_{i}^{-}(\theta^{\prime})\right]^{2}\right\}.\]
These component functions form a set of \(n\) space-like (\(\mathcal{T}_{i}^{+}\)) and \(n\) time-like (\(\mathcal{T}_{i}^{-}\)) coordinates by which the model manifold may be embedded in a Minkowski-like behavioral space [53]. These coordinate functions are given in terms of the natural parameters and sufficient statistics [53] by
\[\mathcal{T}_{i}^{\pm}=\frac{1}{2}\left[\eta_{i}\left(\theta\right)\pm\langle t _{i}(x)\rangle_{\theta}\right].\]
Alternatively, we may use an isometric embedding given by shifting and rotating the manifold [53]:
\[T_{i}^{\pm}(\theta)=\frac{1}{2}\left\{\lambda_{i}\left[\eta_{i}(\theta)- \overline{\eta_{i}}\right]\pm\frac{1}{\lambda_{i}}\left[\langle t_{i}\rangle_ {\theta}-\overline{\langle t_{i}\rangle}\right]\right\}. \tag{14}\]
We use \(T^{\pm}\) to distinguish the isometric embedding coordinates from the unscaled coordinates \(\mathcal{T}^{\pm}\). Here, an over-bar denotes a mean over sampled parameters and \(\lambda_{i}=\left[\operatorname{var}\left(\langle t_{i}\rangle\right)/ \operatorname{var}\left(\eta_{i}\right)\right]^{1/4}\). These coordinates can be understood as an alternative definition of the exponential family. We can straightforwardly express the log-likelihood function for an exponential family in terms of the isKL coordinates:
\[\ln p(x|\theta)=\ln k(\mathbf{x})+\sum_{i}\mathcal{T}_{i}^{+}(\theta)t_{i}(\mathbf{x})+\sum_{i}\mathcal{T}_{i}^{-}(\theta)t_{i}(\mathbf{x})-A(\theta)=g(\mathbf{x})+\sum_{i}T_{i}^{+}(\theta)t_{i}(\mathbf{x})+\sum_{i}T_{i}^{-}(\theta)t_{i}(\mathbf{x})-A(\theta),\]
where \(g(\mathbf{x})=\ln k(\mathbf{x})+\sum_{i}\overline{\eta}_{i}t_{i}(\mathbf{x})\). The authors [53] show that the coordinates \(\{T_{i}\}\) can also be understood in relation to the data visualization procedure multidimensional scaling (MDS). In standard MDS the data points are recorded data, whereas here each "data point" corresponds to the full distribution of an exponential family evaluated at a specific set of parameters. The double mean centered matrix of MDS can be constructed in this context from the pairwise separation matrix measured by the symmetric KL-divergence \(\mathbf{D}_{c}=-\mathbf{P}\mathbf{D}_{sKL}\mathbf{P}\) with \(\mathbf{P}_{ij}=1/n-\delta_{ij}\). In the continuous sampling limit, the eigenvalue problem for MDS is formulated as an integral equation:
\[\int D_{c}(\tilde{\theta},\theta)v(\theta)d\rho(\theta)=\Lambda v(\tilde{ \theta}), \tag{15}\]
where \(d\rho(\theta)=\rho(\theta)d\theta\) is the measure of the distribution of parameters \(\theta\). Teoh and colleagues show [53] that the coordinates \(T_{i}^{\pm}\) are solutions of this eigenvalue problem with corresponding eigenvalues
\[\Lambda_{i}^{\pm}=\frac{1}{2}\left[\operatorname{Cov}(\eta_{i},\langle t_{i} \rangle)\pm\sqrt{\operatorname{var}(\eta_{i})\operatorname{var}(\langle t_{i }\rangle)}\right] \tag{16}\]
This procedure produces an embedding with only a finite and relatively small number of non-zero modes, contrasting sharply with the infinite or data-proportional embedding produced by other methods for continuous or discrete parameter sampling, respectively [53].
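As a minimal illustration of Eqns. 14 and 16, the coordinates and eigenvalues can be computed directly from arrays of sampled natural parameters and expected sufficient statistics; the array layout and population-variance conventions here are illustrative assumptions.

```python
import numpy as np

def iskl_embedding(eta, t_mean):
    """Space-like and time-like coordinates (Eqn. 14) and the MDS
    eigenvalues (Eqn. 16) for a set of sampled models.

    eta, t_mean: arrays of shape (n_models, n_stats) holding each model's
    natural parameters and expected sufficient statistics.
    """
    eta_c = eta - eta.mean(axis=0)
    t_c = t_mean - t_mean.mean(axis=0)
    var_eta, var_t = eta_c.var(axis=0), t_c.var(axis=0)
    lam = (var_t / var_eta) ** 0.25
    T_plus = 0.5 * (lam * eta_c + t_c / lam)    # space-like coordinates
    T_minus = 0.5 * (lam * eta_c - t_c / lam)   # time-like coordinates
    cov = (eta_c * t_c).mean(axis=0)            # Cov(eta_i, <t_i>)
    root = np.sqrt(var_eta * var_t)
    return T_plus, T_minus, 0.5 * (cov + root), 0.5 * (cov - root)
```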
We complement this perspective by viewing this embedding procedure as an eigenmode expansion of the conditional probability \(p(x|\theta)\) around the marginalized distribution \(p(x)\) for a given prior on the parameters \(\theta\):
\[p(x|\theta)=p(x)+\sum_{i}c_{i}^{+}(x)T_{i}^{+}(\theta)+\sum_{i}c_{i}^{-}(x)T_{ i}^{-}(\theta) \tag{17}\]
By defining the inner product of functions on \(\Theta\) as
\[\langle f(\theta),g(\theta)\rangle=\int d\rho(\theta)f(\theta)g(\theta),\]
the modes \(\sqrt{\rho(\theta)}v(\theta)\) of Eq. (15) can be shown to be orthogonal as long as the corresponding eigenvalues are distinct. Thus, the coordinate functions \(T_{i}^{\pm}(\theta)\) are orthogonal with respect to the weight \(\rho(\theta)\). Taking advantage of this orthogonality of the coordinate functions, it follows that
\[\int d\rho(\theta)p(x|\theta)T_{j}^{\pm}(\theta)=p(x)\int d\rho(\theta)T_{j}^{\pm}(\theta)+\sum_{i,\pm}\int d\rho(\theta)c_{i}^{\pm}(x)T_{i}^{\pm}(\theta)T_{j}^{\pm}(\theta)=c_{j}^{\pm}(x)\int d\rho(\theta)\left(T_{j}^{\pm}(\theta)\right)^{2}\]
The first term on the right-hand side vanishes because the mean of each coordinate function is zero by construction, while only the \(i=j\) term from the sum survives due to the orthogonality. Thus, we may calculate the coefficient functions \(c_{i}^{\pm}(x)\) as
\[c_{i}^{\pm}(x)=\langle T_{i}^{\pm},T_{i}^{\pm}\rangle^{-1}\int d\rho(\theta)p (x|\theta)T_{i}^{\pm}(\theta).\]
\begin{table}
\begin{tabular}{|l||l|l|} \hline & Radial distance \(d\) & Angle \(\alpha\) \\ \hline \hline Minimum value & \(0.0025\) ms\({}^{-1}\) & \(\tan(\alpha)=0.1\) \\ \hline Maximum value & \(0.03\) ms\({}^{-1}\) & \(\tan(\alpha)=\min\left(\frac{1}{\max\{\lambda_{\mathbf{w}^{*}}\}},500\right)\) \\ \hline Number of samples & 301 & 701 \\ \hline \end{tabular}
\end{table}
Table 2: **Maximal Sampling Parameters** Descriptions and numerical values for the parameters that are constant across the non-spiking, spiking with fine-tuning, and spiking without fine-tuning model types.
In this work we focus on applying these embedding methods to the stationary distributions of the various network models, both of which are multivariate normal within our mean-field approximation. For an \(M\)-dimensional multivariate normal distribution with means \(\{\mu_{i}\}\) and covariance values \(\{C_{ij}\}\), the \(M(M+3)/2\) distinct natural parameters and sufficient statistics are given by
\[\eta=\begin{bmatrix}\sum_{i}C_{1i}^{-1}\mu_{i}\\ \vdots\\ \sum_{i}C_{Mi}^{-1}\mu_{i}\\ -\frac{1}{2}C_{11}^{-1}\\ \vdots\\ -\frac{1}{2}C_{M1}^{-1}\\ -\frac{1}{2}C_{22}^{-1}\\ \vdots\\ -\frac{1}{2}C_{MM}^{-1}\end{bmatrix},\quad\langle t_{i}\rangle_{\theta}=\begin{bmatrix}\langle x_{1}\rangle\\ \vdots\\ \langle x_{M}\rangle\\ \langle x_{1}^{2}\rangle\\ \vdots\\ \langle x_{M}x_{1}\rangle\\ \langle x_{2}^{2}\rangle\\ \vdots\\ \langle x_{M}^{2}\rangle\end{bmatrix} \tag{18}\]
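A sketch of how these vectors might be assembled from a stationary mean vector and covariance matrix follows; the upper-triangular ordering (equivalent to Eqn. 18's ordering by symmetry of \(C^{-1}\)) is a bookkeeping choice, and the helper name is ours. Stacking the outputs over all sampled timescale pairs yields the arrays fed into the coordinate construction of Eqn. 14.

```python
import numpy as np

def gaussian_natural_params(mu, C):
    """Natural parameters eta and expected sufficient statistics <t> of an
    M-dimensional Gaussian, paired entrywise as in Eqn. (18)."""
    Cinv = np.linalg.inv(C)
    iu = np.triu_indices(len(mu))      # distinct (i, j) pairs, i <= j
    second = C + np.outer(mu, mu)      # <x_i x_j> = C_ij + mu_i mu_j
    eta = np.concatenate([Cinv @ mu, -0.5 * Cinv[iu]])
    t_mean = np.concatenate([mu, second[iu]])
    return eta, t_mean
```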
Before we present the embedding and analysis of the models from Section II, we provide two simpler models as illustrative examples that are related to the Poissonian and Gaussian characteristics of our model.
Figure 3: **Visualizations of the isKL embedding coordinate functions:** (A) The two coordinate functions for the 1-dimensional exponential model. (B-E) Coordinate functions for the 1-dimensional Gaussian model. (B) \(T_{1}^{+}\). (C) \(T_{1}^{-}\). (D) \(T_{2}^{+}\). (E) \(T_{2}^{-}\). (F) The model manifold for the 1-dimensional exponential model colored by \(\log\upsilon\). (G) The model manifold for the 1-dimensional Gaussian model projected onto just \(T_{1}^{\pm}\), colored by \(\mu\).
### Example: 1-dimensional exponential model
Let \(X\) be exponentially distributed with rate \(\upsilon\), i.e. \(X\sim\text{Exp}(\upsilon)\). In the exponential family formalism, we have
\[\eta=-\upsilon,\ \ \langle t_{i}\rangle_{\theta}=\langle x\rangle_{\upsilon}= \upsilon^{-1},\ \ k(x)=1,\ \ A(\upsilon)=-\ln\upsilon.\]
The isKL embedding coordinates for this model are one-dimensional functions given by
\[T^{\pm}(\upsilon)=\frac{1}{2}\left\{\lambda\left[\overline{\upsilon}-\upsilon \right]\pm\frac{1}{\lambda}\left[\upsilon^{-1}-\overline{\upsilon^{-1}} \right]\right\}.\]
These embedding functions are shown in Fig. 3A using a parameter distribution \(\rho(\upsilon)=\left(8\upsilon\ln 10\right)^{-1}\) with support \(\upsilon\in[10^{-5},10^{5}]\) for illustration.
We may also explicitly calculate the coefficients \(c^{\pm}(x)\) for this example,
\[c^{\pm}(x)= \langle T^{\pm},T^{\pm}\rangle^{-1}\Bigg{[}\left(\frac{\lambda}{ 2}\overline{\upsilon}\mp\frac{1}{2\lambda}\overline{\upsilon^{-1}}\right) \left(-\frac{dZ}{dx}\right)\] \[-\frac{\lambda}{2}\frac{d^{2}Z}{dx^{2}}\pm\frac{1}{2\lambda}Z \Bigg{]},\]
where \(Z(x)\equiv\int e^{-\upsilon x}d\rho(\upsilon)\) is the moment-generating function of the distribution \(\rho(\upsilon)\) with source \(-x\). The full model manifold is depicted in Fig. 3F, where points are colored by the logarithm of the rate parameter \(\upsilon\). Here, we see the manifold is neatly broken into two branches corresponding to a low event-rate (\(\log\upsilon<0\)) and a high event-rate (\(\log\upsilon>0\)).
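As a worked sketch (the random, log-uniform sample here stands in for the grid used to make Fig. 3), the embedding of this one-dimensional model reduces to a few lines:

```python
import numpy as np

rng = np.random.default_rng(0)
# Log-uniform sample of rates, mirroring rho(upsilon) on [1e-5, 1e5]
ups = 10.0 ** rng.uniform(-5, 5, size=10_000)

eta = -ups       # natural parameter of Exp(upsilon)
t = 1.0 / ups    # expected sufficient statistic <x> = 1/upsilon

lam = (t.var() / eta.var()) ** 0.25
T_plus = 0.5 * (lam * (eta - eta.mean()) + (t - t.mean()) / lam)
T_minus = 0.5 * (lam * (eta - eta.mean()) - (t - t.mean()) / lam)
# The two branches of Fig. 3F separate at log10(upsilon) = 0
```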
### Example: 1-dimensional Gaussian model
Let \(X\) be normally distributed as \(X\sim\mathcal{N}(\mu,\sigma)\). In the exponential family formalism, we have
\[\eta=\begin{bmatrix}\mu/\sigma^{2}\\ -\sigma^{-2}\end{bmatrix},\ \ \langle t_{i}\rangle_{\theta}=\begin{bmatrix} \langle x\rangle\\ \langle x^{2}\rangle\end{bmatrix}=\begin{bmatrix}\mu\\ \sigma^{2}+\mu^{2}\end{bmatrix}\] \[k(x)=\frac{1}{\sqrt{2\pi}},\ \ \ A(\mu,\sigma)=\frac{\mu^{2}}{2 \sigma^{2}}+\ln\sigma.\]
The isKL embedding coordinates are then two-dimensional functions
\[T_{1}^{\pm}(\mu,\sigma)=\frac{1}{2}\left\{\lambda_{1}\left[\frac{\mu}{\sigma^{2}}-\overline{\left(\frac{\mu}{\sigma^{2}}\right)}\right]\pm\frac{1}{\lambda_{1}}\left[\mu-\overline{\mu}\right]\right\}\]
\[T_{2}^{\pm}(\mu,\sigma)=\frac{1}{2}\left\{\lambda_{2}\left[\overline{\left(\frac{1}{\sigma^{2}}\right)}-\frac{1}{\sigma^{2}}\right]\pm\frac{1}{\lambda_{2}}\left[\left(\sigma^{2}+\mu^{2}\right)-\overline{\left(\sigma^{2}+\mu^{2}\right)}\right]\right\}\]
The Gaussian model coordinate functions are depicted in Fig. 3B-E using a parameter distribution
\[\rho(\mu,\sigma)=\begin{cases}1/800\text{ if }-20\leq\mu\leq 20,\ 0<\sigma\leq 20 \\ 0\ \text{ otherwise}\end{cases}.\]
A projection of the model manifold onto the space-like and time-like coordinates corresponding to first moment of the model is depicted in Fig. 3G. The points on this projection are colored by the mean parameter \(\mu\). Here, we see a degree of rotational symmetry in the manifold projection, separated into negative mean values on the left and positive mean values on the right of the \(T^{+}_{1}\) center-line. Also note that there are apparent breaks in this manifold projection. These breaks do not reflect a true discontinuity in the structure of the manifold, but instead reflect the density with which the \((\mu,\sigma)\)-space is sampled. We will see manifold breaks related to the sampling density in our results for the network models.
## IV Results
Before proceeding with results, it is helpful to briefly summarize the goal of this paper and the workflow constructed in prior sections. We wish to study the population-averaged behavior of stochastic spiking models as we vary synaptic and membrane timescales, repeating this across a range of relative excitation and inhibition. To do this, we approximate the full spiking network dynamics as a population-averaged multivariate Gaussian process (Eqn. 6). We choose a sample of inverse timescales as discussed in Sec. II.5, and in particular constrain the sampling based on the stability condition for the corresponding non-spiking model (Eqn. 10). Within this sampled regime of timescales, the Gaussian process approximations should be mean-reverting and thus reach a stationary Gaussian distribution. We numerically solve for the vector-mean and the covariance matrix of the stationary Gaussian distribution at each sampled timescale-point. Finally, we embed this manifold of stationary Gaussian distributions into a behavioral space using the isKL methods introduced in Sec. III. With the workflow summarized, we may proceed.
### Gaussian process approximations are stable
A key step in the analysis workflow is to find the stationary distribution for the approximated processes at each sampled timescale-point. For the stationary distribution of an Ornstein-Uhlenbeck process to exist, all of the eigenvalues \(\lambda_{\mathbf{A}}\) of the drift matrix \(\mathbf{A}\) must have a positive real component. Basing the upper sampling boundary on the theoretical stability boundary of the related linear model, as well as the check for singularities in the drift matrices \(\mathbf{A}\) and covariance matrices \(\mathbf{C}\), should ensure this requirement is met. We confirm this
by explicitly examining the eigenvalues of the sampled models.
Each individual model--as specified by the model type, E/I log-ratio \(R\), and a pair of inverse timescales \((\tau_{m}^{-1},\tau_{s}^{-1})\)--has three drift-matrix eigenvalues. We pool together the eigenvalues of all particular models on a given model manifold, as specified just by the model type and E/I log-ratio \(R\). The resulting eigenvalue distributions for a subset of E/I conditions \(R\) are given in Fig. 4. Points in the eigenvalue distribution are colored by the log-distance of the particular model (specified by \((\tau_{m}^{-1},\tau_{s}^{-1})\)) from the origin in the inverse-timescale parameter space, shown in the inset. Each individual eigenvalue distribution is accompanied by marginal histograms where appropriate.
The subset of manifolds shown in Fig. 4 highlights a portion of the inhibition-dominated regime in which the models were observed to have complex eigenvalues. The complex eigenvalue distributions seen in the spiking-type model manifolds have a relatively small range in the imaginary direction, and the imaginary components tend to pool near the origin along the real axis. Additionally, most of the density for these complex distributions lies on the real axis itself, indicating that models with complex eigenvalues are relatively rare within their corresponding manifolds. The manifolds for the remaining E/I conditions have eigenvalue distributions qualitatively very similar to those at the extremes: purely real and positive eigenvalues spanning roughly the same range and skewed toward the origin. A key takeaway from Fig. 4 is that all of the eigenvalues have strictly positive real components, confirming that the sampled models for each manifold are stable and therefore appropriate for embedding analysis.
### Behavior of full spiking network models
As discussed in Sec. III, the isKL embedding methods take the model manifold from the parameter space and position it in a hyperbolic behavioral space using the symmetric Kullback-Leibler divergence. The KL divergence functions similarly to a distance between models, based on the (sufficient) statistics that determine the behavior of a particular model on the manifold. The isKL method thus embeds the model manifold in a behavioral space. This connection to the underlying behavior of the sampled models can be obscured when looking only at the results of the embedding analysis. As such, we will take some time here to discuss the behavior of the full-network model described by Eqn. 1.
Full-network spiking models were generated from the appropriate parameters in Table 1 with a sparse, random connection matrix as described in Sec. II.1. A subset of model manifolds was chosen from across the range of E/I conditions \(R\), and individual models from these manifolds were taken from along an arc of radius \(\sim 0.01\) (see e.g. Fig. 5, left column). The membrane and spiking dynamics described by Eqn. 1 with a specific choice of timescales were simulated using a basic forward-Euler integration scheme with a time step \(dt=0.1\) ms. Most models were simulated for \(20,000\) ms. The models in the bottom three rows of the right-most column were simulated for longer durations--\(100,000\) ms, \(150,000\) ms, and \(150,000\) ms, respectively--to ensure convergence to a stationary behavioral regime.
The long-term dynamics of the population-averaged membrane potentials and the membrane potentials of six excitatory and six inhibitory neurons are shown in Fig. 5. The mean-field values of the membrane potentials predicted by Eqn. 6c are represented by the red and blue dotted lines. From this figure, we see that the population-averaged membrane potentials in the full spiking network do indeed reach stationary values, as predicted by the drift eigenvalues \(\lambda_{\mathbf{A}}\) for the approximated spiking models shown in Sec. IV.1. Additionally, we see that the mean-field membrane potential values correspond fairly well to the stationary population-averaged potentials (Fig. 5, columns 1 and 2), but this correspondence breaks down near the upper bound of the arc (Fig. 5, column 3). This upper boundary of the arc coincides with the stability boundary in the first two manifolds (Fig. 5, rows 1 and 2) and a bifurcation boundary in the remaining manifolds. The breakdown of the mean-field approximations at these limits thus lines up with the colloquial understanding of their accuracy. Knowing that the population-averaged potentials of the full spiking networks converge to a stationary condition, we next look at how the stationary solutions of these models differ.
The membrane potential dynamics in the stationary regime of these same models are shown in Fig. 6. Here, we show data from the last 1000 ms of simulation and omit the mean-field predictions. We see that the population-averaged membrane potentials visibly fluctuate around an average value for most of the simulated models. These fluctuations in the population averages become less noticeable as the overall magnitude of the averages and standard deviations increases, e.g. along column 3 of Fig. 6. A similar trend is seen in the membrane dynamics for individual neurons in the network. Fluctuations in individual potentials are very large relative to their mean values for the first two models along the arc (Fig. 6, columns 1 and 2). The population variance in membrane potentials for these models is thus highly dependent on the fluctuations of individual membrane potentials. Towards the upper end of the arc (Fig. 6, column 3), the magnitude of the membrane potentials increases and the fluctuations of individual potentials are less pronounced. For these models, the population variance of the membrane potentials depends much more on the spread of individuals around the population mean than on the fluctuations of those individuals.
We finish this section by examining the actual spiking dynamics for the example models in Fig. 7. Here we show raster plots for each model during the last 1000 ms of simulation. First, we see that the target neuron (black
spikes) fires more frequently than other excitatory neurons in the network, which is to be expected as it receives extra current input. For each model manifold (different rows in Fig. 7), we observe an overall increase in the rate of spiking in the network as we move along the arc from point 1 to point 3. This aligns with the change in overall magnitude of the membrane potentials seen in Figs. 5 & 6. This observation also aligns with an intuitive understanding of the timescales: along this arc, the relative rate of synaptic input becomes much faster than the relaxation dynamics. This trend is taken to the extreme in the models at the top of the arc for each manifold (Fig. 7, column 3) where we see unrealistically high spiking rates in the last three rows. With this, we've built an intuitive understanding of the differences in behavior across the model manifolds and across E/I conditions. We now move on to the embedding analysis for these models.
### Network embedding is hierarchical
It has been previously reported that biological models exhibit a hierarchy of sensitivities to different parameter combinations relative to some cost function on model behavior [51]. A similar hierarchical structure has been observed in the widths of model manifolds and the corresponding eigenvalues induced by a particular embedding, and a correlation between the widths and eigenvalues has also been noted [47, 52]. The current modeling and embedding differs from these prior cases in that we are embedding probabilistic models in behavior space. Considering also the limited dimensionality of the current embedding, it is unclear if this hierarchical property should manifest in the current system. We show below that the manifolds for models of the types in Eqns. 6 & 9 are indeed hierarchical under the isKL embedding framework, with coordinate eigenvalues spanning several orders of magnitude for each E/I condition.
Figure 4: **Eigenvalues of drift matrices** Eigenvalue distributions for the drift matrices \(\mathbf{A}\) of all sampled network models for the non-spiking model manifolds (top) and the spiking models (bottom) as the excitation-inhibition ratio \(R\) is adjusted. Points are colored by the radial distance \(d\) of the model from the origin in inverse-timescale space, as illustrated by the inset. Marginal histograms for the real (\(\mathcal{RE}\), top) and imaginary (\(\mathcal{IM}\), right) parts are given for each distribution. Plots without a histogram in the imaginary dimension indicate a marginal \(\delta\)-distribution. Emphasis is placed on a portion of the inhibitory regime (\(-0.19<R<-0.09\), middle columns) in which some eigenvalue distributions display imaginary components.
We used the i\(\mathrm{s}\)KL methods (Sec.III) to embed the stationary distributions for both the spiking and non-spiking model types across 25 E/I conditions ranging from the excitation-dominated to the inhibition-dominated, and approximately centered at \(R=0\). The root absolute eigenvalues for the embedding coordinates \(\{\Lambda_{i}^{\pm}\}\) are plot
Figure 5: **Membrane potential dynamics of spiking network models** The dynamic population-averaged membrane potentials (solid lines) are plotted against the predicted mean-field values (dotted lines) for three different parameter pairs across the examined range of E/I ratios \(R\). The membrane dynamics of six excitatory and six inhibitory neurons (dashed lines) are also shown for each condition. The ribbons around the population-averaged potentials are the the standard deviation of the membrane potentials within the corresponding population at each time point. The sampled parameter distribution from the embedding calculations, along with the chosen points for spiking simulation, are given in the right-most column. Full-network spiking simulations were run until an apparent stationary state was reached.
ted against the observed manifold width along the same coordinate for the non-spiking models (Fig. 8A) and the spiking models (Fig. 8C). Here, the manifold width is taken to be the simple range across a given coordinate. We see that the widths and eigenvalues are indeed correlated across E/I conditions for both model types, following with previous observations [47, 52].This suggests these two measures may be used interchangeably in further analysis. The coordinate eigenvalues of the non-spiking model (Fig. 8B) span at least three orders of magnitude for any given E/I condition tested, and up to nearly fifteen orders of magnitude at the most ex
Figure 6: **Stationary membrane potential dynamics of spiking network models** The dynamic population-averaged membrane potentials (solid lines) are plotted against the predicted mean-field values (dotted lines) for three different parameter pairs across the examined range of E/I ratios \(R\). The membrane dynamics of six excitatory and six inhibitory neurons (dashed lines) are also shown for each condition. The ribbons around the population-averaged potentials are the standard deviation of the membrane potentials within the corresponding population at each time point. The sampled parameter distribution from the embedding calculations, along with the chosen points for spiking simulation, are given in the right-most column. Full-network spiking simulations were run until an apparent stationary state was reached, and the membrane dynamics for the last 1000 ms of simulation time are plotted.
The coordinate eigenvalues for the spiking models (Fig. 8D) span roughly two to three orders of magnitude on the extreme ends of the E/I spectrum and upwards of four at some points in the center, with eigenvalues peaking towards the center as one approaches from either extreme. Taken together, both model types studied here exhibit a hierarchical structure in line with prior observations of other systems, albeit with a more limited degree of separation in the case of the spiking-type models.
Before proceeding, we make some comparative observations between the two model categories. The scale and range of eigenvalues for the non-spiking model are significantly larger than those for the spiking model type in the excitation-dominated regime. Additionally, the non-spiking models show a sharp jump in eigenvalues when moving from the inhibition-dominated regime to the excitation-dominated one. This jump in eigenvalues may indicate a sort of bifurcation in the overall manifold. The eigenvalues \(\{\Lambda_{i}^{\pm}\}\) directly reflect the covariance and--anecdotally more importantly--the variance in the corresponding sufficient statistics and natural parameters. A jump in the magnitude of the eigenvalues thus
Figure 7: **Stationary spiking behavior of full network models** Example raster plots for individual time-scale pairs for spiking models across the sampled range of E/I ratios \(R\). The sampled parameter distribution from the embedding calculations, along with the chosen points for spiking simulation, are given in the right-most column. Full-network spiking simulations were run until an apparent stationary state was reached, and the spikes from the last 1000 ms are plotted. In each model, neuron index 350 was designated as the target neuron and its spikes are shown in black.
indicates a sudden increase in the variability of model behavior, and this could correspond to sampling near the stability boundary (Eqn. 10) in the case of the transition seen in the non-spiking models. A similar transition may be happening at the peaks in the eigenvalue distributions of the spiking-type models; however, these are much less pronounced than the one seen in the non-spiking models, and the increase does not persist through the excitation-dominated regime as in the non-spiking models. We note that the firing rate non-linearity for the spiking model-type (Table 1) is quasi-linear when \(x\gg 1\). A naive prediction would be similar behavior between model types when the membrane potentials become more positive, as in the excitation-dominated regime. However, this is not reflected in the observed distributions of \(\{\Lambda_{i}^{\pm}\}\).
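As a quick numerical illustration of this quasi-linearity, the minimal sketch below evaluates the firing-rate nonlinearity \(\phi(x)=\frac{1}{2}(x+\sqrt{x^{2}+1/2})\) written out in Appendix A, which we take to be the form referenced in Table 1; the slope approaches 1 for \(x\gg 1\).

```python
import numpy as np

# Firing-rate nonlinearity from Appendix A: phi(x) = (x + sqrt(x^2 + 1/2)) / 2.
def phi(x):
    return 0.5 * (x + np.sqrt(x**2 + 0.5))

x = np.array([1.0, 5.0, 20.0, 100.0])
# For x >> 1, sqrt(x^2 + 1/2) ~ x, so phi(x) ~ x and the local slope tends to 1.
slopes = (phi(x + 1e-6) - phi(x)) / 1e-6
print(np.round(slopes, 5))  # approaches 1.0 as x grows
```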
### Projection hierarchies
Having established that the isKL embeddings of the spiking models and the non-spiking models exhibit a hierarchical structure, we next want to interrogate this structure across our model manifolds. We do this by examining projections of the manifolds onto lower-dimensional spaces along the coordinates with the largest and smallest widths. We will focus on 2-dimensional projections.
Fig. 9 shows the largest manifold projections in behavioral space for the non-spiking models and spiking models across a subset of E/I conditions. Points on these manifolds are colored by the mean value of the membrane potential for the test neuron \(\langle V_{0}\rangle\). It is visually clear that the manifold projections are shrinking from top left to bottom right for each condition, reflecting the hierarchical structure of the manifold. A large fraction of projections--for example, Fig. 9 column 3, row 2--have apparent gaps in their structure. These are similar to the gaps seen in the projection of the example Gaussian distribution shown in Fig. 3G, and are tied to the sampling density used across the inverse-timescale space near key boundaries (data not shown).
Many of the projections across model types and E/I conditions appear very linear or piece-wise linear, for example Fig. 9 column 2 rows 1-4. This thinness at the largest scales would suggest a relatively simple relationship between the largest coordinates and that the model manifold is relatively flat. The difficulty of overcoming under-sampling of the parameter space complicates this interpretation slightly. The gaps in the projections seen in the excitation-dominated regime are clear evidence of some under-sampling, but interpolating the data across gaps suggests that the projections in these conditions may still be piecewise- or quasi-linear. These stick-like projections appear in both model types in the excitation-dominated regime. The projections are also seen to change shape qualitatively as the E/I conditions change. In the non-spiking models, we see the appearance of spoon-shaped projections as we move into the inhibition-dominated regime. In contrast, we see knife-like projections in the spiking models, albeit only under the most inhibitory of E/I conditions. This qualitative change in the manifold projections seen across the two model types could be caused either by warping of the manifold along each coordinate as the conditions change or by changes in ranking of each coordinate. This point will be revisited in Sections IV.5 & IV.7.
Additionally, we note that many of the projections separate points on the manifold in alignment with \(\langle V_{0}\rangle\), as was seen in the example embeddings shown in Fig. 3F & G. This is particularly clear, for example, in the inhibition-dominated regime of the two model types (Fig. 9 row 5). The separation of the manifold into sections based on behavioral regimes depends on more than just \(\langle V_{0}\rangle\), however. For example, we see no such trend in Fig. 9 column 2 row 4. In this case the individual models within the manifold are more significantly separated by (a combination of) behavioral parameters that, in a sense, have a dependence on the timescale parameters that is orthogonal to the way \(\langle V_{0}\rangle\) depends on them. Exceptions aside, this noted \(\langle V_{0}\rangle\)-aligned separation serves as a clear demonstration of the behavioral clustering induced by the isKL method.
We can also examine the smallest projections of the model manifolds for the two model types, which correspond to the least important modes of the expansion in Eqn. 17. Fig. 10 shows the smallest projections of the model manifolds for all three model types across E/I conditions. As was the case for the largest coordinate projections, there is evidence of under-sampling in the smallest projections also. This particularly evident in the non-spiking model manifold in the excitation-dominated regime (Fig. 10, column 1, rows 1 and 2). In contrast to the largest projections, the stick-like projections comprise the minority of the small-projection shapes. The model manifolds appear instead to be highly curved at the fine-grained level. Following the observation from the large-scale projections, we see the smallest manifold projections separate points in line with \(\langle V_{0}\rangle\). In particular, the counterexample mentioned before ( Fig. 9 column 2 row 4) now also shows a degree of alignment with changes in \(\langle V_{0}\rangle\), now in Fig. 10 column 2 row 4. This highlights the fact that model separation along different directions on the manifold can be more or less tied to a particular behavioral or statistical parameter.
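The sketch below illustrates how such projection hierarchies can be produced from an embedding. The coordinate array `T` and the coloring variable `v0_mean` are stand-in data for the isKL coordinates and the \(\langle V_{0}\rangle\) values of the sampled models, not outputs of the paper's pipeline.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in embedding: 500 models, 18 coordinates with hierarchical widths.
rng = np.random.default_rng(0)
T = rng.normal(size=(500, 18)) * np.logspace(0, -8, 18)
v0_mean = T[:, 0]  # stand-in for <V_0> of each model

widths = T.max(axis=0) - T.min(axis=0)   # manifold width along each coordinate
order = np.argsort(widths)[::-1]         # rank coordinates, widest first

fig, (ax_big, ax_small) = plt.subplots(1, 2, figsize=(8, 4))
# Largest ("stiffest") projection: the two widest coordinates.
ax_big.scatter(T[:, order[0]], T[:, order[1]], c=v0_mean, s=4)
ax_big.set_title("largest projection")
# Smallest ("sloppiest") projection: the two narrowest coordinates.
ax_small.scatter(T[:, order[-2]], T[:, order[-1]], c=v0_mean, s=4)
ax_small.set_title("smallest projection")
plt.show()
```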
Before moving on, we should highlight the relationship between the projection coordinates and model behavior. Coordinates with larger eigenvalues contribute more to the separation between \(p(x|\theta)\) and \(p(x|\theta^{\prime})\) as measured by the \(D_{sKL}\). More specifically, a relatively large absolute eigenvalue \(|\Lambda_{i}^{\pm}|\) indicates that the corresponding natural parameter is a relatively better way to separate individual models on the manifold by their behavioral predictions, or alternatively that a larger part of the variance of behavioral predictions across the manifold is explained by the associated natural parameter. Given a tractable mapping between the underlying model parameters and the natural parameters, the magnitude of coordinate eigenvalues can also give a sense of the relative importance of different parameter combinations aligning with a given coordinate direction. Lastly, the high degree of correlation between coordinate eigenvalues and the manifold widths along those coordinates means that the relative size of a particular projection gives a visual representation of the importance of the corresponding parameter combination.
### Coordinate rankings
We saw in Sec. IV.4 that the projection hierarchies of the model manifolds changed across the examined range of E/I conditions \(R\). One possible explanation for this is that the rankings of coordinates by manifold width change with \(R\). This potential aspect of the changing projections can be interrogated by tracking the rank of each coordinate across the range of \(R\). This will additionally provide insight into what aspects of the statistical model have the greatest (or least) impact on the overall model behavior for both model-types. Fig. 11 depicts the ranking for each coordinate by both the length of the manifold along that coordinate (top row) and eigenvalue-magnitude (middle row) for the spiking and non-spiking model types for a subset of E/I conditions. As each coordinate corresponds directly to a single sufficient statistic, we color- and shape-code the rank of each coordinate according to this correspondence. The bottom row of Fig. 11 reproduces the eigenvalue distributions shown in Fig. 8, except each point is now color- and shape-coded according to the sufficient statistic instead of the E/I measure \(R\).
We see in Fig. 11 that the ranking of coordinates changes across E/I conditions for both the non-spiking and the spiking models. We also note that the rankings by observed width (top row) and by eigenvalue-magnitude (middle row) agree fairly well across the range of \(R\). This agreement between the two sets of rankings makes sense when considering the high degree of correlation between the coordinate eigenvalues and manifold widths shown in Fig. 8. It is interesting that the
Figure 8: **Hierarchies of manifold widths** Top row: The correlation between the coordinate eigenvalues \(\Lambda_{i}^{\pm}\) and the width \(W\) across the manifold in that coordinate direction as the E/I ratio is varied for the non-spiking models (A) and spiking models (C). A simple linear regression is applied to the log-widths and log-absolute eigenvalues for visualization. The distribution of the scale of coordinate eigenvalues as the E/I ratio is varied for the non-spiking models (B) and the spiking models (D).
eigenvalue distributions--particularly those for the spiking models--seem to separate into clusters of coordinates that do not intersect for much of the range of \(R\). For example, the tan-orange-steel blue (eight point star-five point star-hexagon) cluster at the top of the eigenvalue spectrum corresponds to the second moments involving the inhibitory and excitatory populations. This cluster remains consistently above the grey-magenta (downward triangle-pentagon) and pink-red (diamond-square) clusters--which correspond to the second moments involving the target neuron and first moments for the bulk populations, respectively--across \(R\) for the spiking models.
Knowing that the eigenvalues and thus the eigenvalue-magnitude rankings form these clusters across the E/I spectrum, it is natural to examine the sufficient statistics that correspond to coordinates in these clusters. We will focus on the spiking-type models that exhibit these
Figure 9: **Largest manifold projections** Projections of the manifolds for the two model types onto the largest four coordinates as determined by the observed manifold width. The hierarchy of projections is shown as the excitation-inhibition ratio is changed from an excitation-dominant regime (top) to an inhibition-dominant regime (bottom). Manifolds are colored by \(\langle V_{0}\rangle\)--the mean membrane potential of the test neuron--and each projection is scaled by the largest observed width. These projections are the “stiffest,” contributing the most to the behavior of the distribution of activities.
clusters. The tan-orange-steel blue (eight point star-five point star-hexagon) cluster noted before dominates over other clusters in the spiking models, and these coordinates correspond to the second-order moments of the membrane potentials of the excitatory and inhibitory populations (Fig. 11 columns 2 and 3). This indicates that they are the most important statistics for distinguishing models across the manifolds. These second moments are also important for the non-spiking model manifolds, but they only sit at the top of the hierarchy in the excitation-dominated regime \(R>0\). The grey-magenta (downward triangle-pentagon) cluster in the spiking models corresponds to the mixed second moments involving the target neuron \(V_{0}\) and either \(V_{1}\) or \(V_{2}\). This cluster is above the green (triangles) cluster in the mid-range of \(R\) and just below it in the extreme E/I conditions, and this green (triangles) cluster is the pure second moment \(\langle V_{0}^{2}\rangle\). The green cluster is
Figure 10: **Smallest manifold projections** Projections of the model manifold for the two model types onto the smallest four coordinates as determined by the observed manifold width. The hierarchy of projections is shown as the excitation-inhibition ratio is changed from an excitation-dominant regime (top) to an inhibition-dominant regime (bottom). Manifolds are colored by \(\langle V_{0}\rangle\), and each projection is scaled by the observed width along the coordinate ranked \((M-3)=15\). These projections are the “sloppiest,” contributing the least to the behavior of the distribution of activities.
generally above the pink-red (diamonds-squares) and the blue (circles) clusters, except for a brief crossing of the pink-red and the green clusters around \(R\approx-0.18\). These last two clusters correspond to the mean values of all three membrane potentials. Taken together, these observations say that for the spiking-type models the fluctuations are more important for distinguishing between individual models on a given manifold and--for both the first and second moments--the statistics that involve the test neuron are generally less important than those that do not. These observations hold for the non-spiking model manifolds in the excitation-dominated regime, but not in the inhibition-dominated regime (Fig. 11, column 1).
Let us discuss the coordinate rankings at a more granular level of detail. While both model types have the second moments at the top of their respective hierarchies in the excitation-dominated regime, it is interesting to note how they differ here. The spiking-type models have the \(\langle V_{2}^{2}\rangle^{+}\)-related coordinates at the top while the non-spiking model is topped by the \(\langle V_{1}^{2}\rangle^{+}\)-related coordinates. This suggests that the degree of fluctuations in the inhibitory population is the most varied for the spiking models in this regime, but the excitatory population fluctuations take that title in the excitation-dominated non-spiking models. The last fine-grained detail we highlight here is the increased importance of the \(V_{0}\)-moments in distinguishing the behavior of the inhibition-dominated non-spiking models relative to their importance in the inhibition-dominated spiking-type models.
In addition to visualizing the relative importance of certain sufficient statistics for distinguishing between particular models across a given model manifold, we get another piece of information visualized for free through the eigenvalue-magnitude ranking plots (Fig. 11, middle row). Recall from Eqn. 16 that the eigenvalues \(\Lambda_{i}^{\pm}\) are given by the covariance of the sufficient statistic and natural parameter across the manifold (\(\text{Cov}(\eta_{i},\langle t_{i}\rangle)\)) and the geometric means of their individual variances (\(\sqrt{\text{var}(\eta_{i})\text{var}(\langle t_{i}\rangle)}\)). As we know the \(\Lambda_{i}^{-}\) eigenvalues are negative and of the same order of magnitude as the corresponding \(\Lambda_{i}^{+}\) (Fig. 11, row 3), we know that the geometric mean of those variances greatly outweighs their covariance. Furthermore, the relative ranking of \(\Lambda_{i}^{+}\) and \(\Lambda_{i}^{-}\) implies the sign of the covariance \(\text{Cov}(\eta_{i},\langle t_{i}\rangle)\): if \(|\Lambda_{i}^{+}|>|\Lambda_{i}^{-}|\) then \(\text{Cov}(\eta_{i},\langle t_{i}\rangle)>0\) and _vice versa_. For example, by looking at the eigenvalue-magnitude ranking of the steel blue (hexagon) coordinates in the spiking-type models (Fig. 11, row 2 columns 2 and 3) we see that \(\text{Cov}(-\frac{1}{2}C_{12}^{-1},\langle V_{1}V_{2}\rangle)<0\) across all E/I conditions examined here. While this could very easily be determined by looking at these covariances themselves--and they must be calculated in order to determine \(\Lambda_{i}^{\pm}\)--it is convenient to be able to glean this from a plot already produced for another purpose.
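A minimal sketch of this sign-reading rule is given below. Only the combination of \(\text{Cov}(\eta_{i},\langle t_{i}\rangle)\) and \(\sqrt{\text{var}(\eta_{i})\text{var}(\langle t_{i}\rangle)}\) is taken from the text; the overall prefactor of \(1/2\) is an assumption standing in for the exact form of Eqn. 16, and the sampled values are synthetic stand-ins.

```python
import numpy as np

# Samples of one natural parameter eta_i and the matching sufficient-statistic
# expectation <t_i> across a hypothetical manifold (strongly anti-correlated).
rng = np.random.default_rng(1)
eta = rng.normal(size=1000)
t = -2.0 * eta + 0.1 * rng.normal(size=1000)

cov = np.cov(eta, t)[0, 1]
geo = np.sqrt(np.var(eta) * np.var(t))
# Assumed prefactor of 1/2 (see lead-in); the +/- pair of eigenvalues.
lam_plus = 0.5 * (cov + geo)
lam_minus = 0.5 * (cov - geo)

# Here |lam_minus| > |lam_plus|, which signals Cov(eta_i, <t_i>) < 0,
# matching the magnitude-ranking rule described in the text.
print(lam_plus, lam_minus, cov < 0)
```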
We will briefly summarize. Fig. 11 shows that the coordinate rankings do change across the sampled E/I range, thus explaining the changing projection hierarchies in Figs. 9 and 10 at least in part. We found that the coordinates form clusters in the eigenvalue distribution that behave in a correlated manner across the E/I spectrum and that share relations to similar types of sufficient statistics. In particular, the cluster of coordinates corresponding to the fluctuations \(\langle V_{I}V_{J}\rangle^{\pm}\) for \(I,J\in\{1,2\}\) has the most impact on the activity of the spiking-type models, as well as in the excitation-dominated non-spiking models. We made observations of which types of fluctuations were most important to model distinction across the manifold for different model-types and different E/I conditions. Finally, we highlighted a secondary visual interpretation of the eigenvalue-magnitude ranking plots relating to the base statistical model.
### Transforming the base parameters
We highlighted in Sec. IV.5 that the statistical parameters from the stationary Gaussian distribution of membrane potentials--discussed in terms of the sufficient statistics--have a hierarchical impact on the possible behaviors of the spiking model that changes across the E/I spectrum \(R\). Further, we identified clusters of parameters that tended to change in similar ways with \(R\). In the case of the spiking models, the fluctuations \(\langle V_{I}V_{J}\rangle^{\pm}\) for \(I,J\in\{1,2\}\) were the most impactful while the mean coordinates \(\langle V_{0}\rangle^{\pm}\) had a relatively small impact. While important, these observations do not directly address the role of the inverse timescales (\(\tau_{m}^{-1},\tau_{s}^{-1}\)) on model behavior. Unfortunately, the mapping from the timescale parameters to statistical parameters is intractable, owing primarily to the transcendental system of mean-field equations (Eqn. 6c). Closed forms for the stationary distribution parameters of the linear non-spiking models can be found, but these expressions are ratios of high-degree polynomial functions of the timescales and do not directly reflect the mapping in the spiking model context. To begin untangling the impact of the timescale parameters on the range of model behaviors, we must thus rely on a qualitative understanding of the relationship between the timescales and, e.g., the sufficient statistics.
In Fig. 12, we plot the logarithm of several sufficient statistics as a function of position in inverse-timescale space for the spiking network with \(R=-6.06\times 10^{-3}\). We include \(\langle V_{1}^{2}\rangle\) (Fig. 12B) and \(\langle V_{2}^{2}\rangle\) (Fig. 12C) from the upper cluster as well as \(\langle V_{0}\rangle\) (Fig. 12A) to serve as a representative set from across the hierarchies in Fig. 11. Note that the second moments have different units than those of \(\langle V_{0}\rangle\), which should be kept in mind when comparing the color scales. That said, the \(D_{sKL}\) between two members of the same exponential family can be rewritten as [53]
\[D_{sKL}(\theta,\theta^{\prime})= \sum_{i}\left(\eta_{i}(\theta)-\eta_{i}(\theta^{\prime})\right) \left(\langle t_{i}(\theta)\rangle-\langle t_{i}(\theta^{\prime})\rangle \right).\]
Paired with the sufficient statistics of a multivariate normal distribution (Eqn. 18), we see that the \(D_{sKL}\) is in
some sense weighing the first and second moments directly against each other. With this in mind, the variability in the second moments is \(\sim 2\) orders of magnitude larger than that for the mean of the test neuron, in line with their relative ranking in Fig. 11. Further, we note the similar dependence of all three statistical parameters on the inverse timescales, increasing in magnitude radially from the \(\tau_{m}^{-1}\)-axis to the \(\tau_{s}^{-1}\)-axis as well as exhibiting a "cold spot" triangle on the right-most corner of the sampled wedge. The trends between the means and covariances differ most significantly along the \(\tau_{s}^{-1}\) boundary. Here, the magnitude of the mean increases towards the inverse-timescale origin (i.e. very long timescales) while the second moments increase moving away from the origin (i.e. very short timescales).
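The sketch below verifies the exponential-family identity above for one-dimensional Gaussians, where \(\mathbf{t}(x)=(x,x^{2})\) and both the natural and moment parameters have closed forms; the two ways of computing \(D_{sKL}\) agree exactly.

```python
import numpy as np

# D_sKL(theta, theta') = sum_i (eta_i - eta_i') (<t_i> - <t_i>') for an
# exponential family; checked here against the direct Gaussian KL formula.
def natural_params(mu, sig):
    return np.array([mu / sig**2, -0.5 / sig**2])   # eta for t(x) = (x, x^2)

def moment_params(mu, sig):
    return np.array([mu, mu**2 + sig**2])           # <x>, <x^2>

def kl_gauss(mu_p, sig_p, mu_q, sig_q):
    return (np.log(sig_q / sig_p)
            + (sig_p**2 + (mu_p - mu_q)**2) / (2 * sig_q**2) - 0.5)

mu1, s1, mu2, s2 = 0.0, 1.0, 1.5, 2.0
direct = kl_gauss(mu1, s1, mu2, s2) + kl_gauss(mu2, s2, mu1, s1)
identity = np.dot(natural_params(mu1, s1) - natural_params(mu2, s2),
                  moment_params(mu1, s1) - moment_params(mu2, s2))
print(direct, identity)  # both ~2.53125
```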
The presence of the cold spot in the mappings to the statistical parameters--particularly the sharpness of the transition seen for \(\langle V_{0}\rangle^{\pm}\)--reinforces the intuition that translating changes in the statistical parameters back onto the timescale parameters is non-trivial. That said, the shared general trend in the mappings suggests a possible avenue for model reduction if some loss of expressivity is permitted. Reducing the 2-dimensional sampled space to an arc around the origin and through the cold spot
Figure 11: **Coordinate rankings** The ranking of each manifold coordinate \(T_{i}^{\pm}\) as the E/I balance is changed in both model types. Coordinates are ranked from most important (top of each plot) to least important (bottom of each plot) based on the observed width of the manifold along said coordinate (top row) or the magnitude of the corresponding eigenvalue \(|\Lambda_{i}^{\pm}|\) (middle row). The log-magnitude of the eigenvalue for each coordinate is given in the bottom plot as the E/I balance \(R\) is changed as in Fig. 8. The legend renames each coordinate \(T_{i}^{\pm}\) to the corresponding sufficient statistic \(\langle t_{i}\rangle^{\pm}\) for ready interpretation.
could be used to capture the concomitant increases in the magnitude of the first and second moments, retaining the majority of their respective variability. Alternatively, radial sampling along the \(\tau_{s}^{-1}\) boundary could be used to study the apparent trade-off in magnitude of the means and covariances. This idea of model reduction is intimately tied to notions of model dimensionality, which we will return to in Sec. IV.8 and Sec. V.
### Manifold projections change smoothly with E/I balance
We now return to the question raised at the beginning of Sec. IV.5: What causes the projection hierarchies to change across E/I conditions? While the changing coordinate ranking observed across E/I conditions for both models can explain the changing manifold projections, it does not rule out the possibility that the projections along a given coordinate are themselves changing. To address this possibility, we project the model manifolds for both of the model-types onto the same pair of coordinates across the E/I spectrum in Fig. 13. We chose to project the manifold onto the space-like \(\langle V_{0}^{2}\rangle^{+}\) and \(\langle V_{0}\rangle^{+}\) coordinates as the statistical behavior of the test neuron may be of particular interest in some scenarios. For the sake of visualization, each projection along the E/I spectrum is scaled by the larger of the two manifold widths at each condition. The overall scale of the projection is given by the axis scale.
We can see in the projections of the non-spiking model manifolds (Fig. 13, upper diagonal) that there is a squashing and stretching of the manifold relative to the overall change in scaling as the E/I conditions are changed. Additionally, these transformations appear to act smoothly on the manifold projections until the manifold flattens going from the inhibition-dominated regime to the excitation-dominated one in the range \(-6.06\times 10^{-3}\leq R\leq 0.115\). This flattening reflects a radical increase in the manifold scale along the \(\langle V_{0}^{2}\rangle^{+}\)-coordinate relative to the \(\langle V_{0}\rangle^{+}\)-coordinate as all of the eigenvalues are seen to jump (Fig. 11, column 1 row 3). This interpretation is corroborated by the change in overall scale of the axes--from \(\sim\mathcal{O}(10^{3})\) for \(R<0.115\) to \(\sim\mathcal{O}(10^{17})\) for \(R\geq 0.115\) (Fig. 13, upper diagonal)--and the correlation between eigenvalue-magnitude and manifold width discussed in Sec. IV.3 (Fig. 8). The projections of the spiking manifolds (Fig. 13, lower diagonal) are also seen to transform smoothly with \(R\), with the fork-shaped projections (e.g. lower diagonal, \(R=-0.307\)) collapsing into the spoon projections (e.g. lower diagonal, \(R=-0.103\)) seen in the non-spiking model around \(R\approx-0.1\). The projections for the spiking-type manifolds do change along \(R\) in line with the changes in their respective eigenvalue distributions (Fig. 11, columns 2 and 3, row 3), but these changes are more subtle than in the non-spiking model. The eigenvalue distributions for the spiking-type models drift downwards as one moves from \(R<-0.18\) to \(R>0.10\), and this is mirrored in the manifold projections by a slight decrease in projection scale moving in the same direction.
We have shown here that the manifold projections for both model-types do indeed change across the sampled E/I range, which plays a subsequent role in the changing of projection hierarchies across E/I conditions noted
Figure 12: **Mapping from inverse timescales to statistical parameters** The relationship between the inverse timescales \((\tau_{m}^{-1},\tau_{s}^{-1})\) and a select set of statistical parameters from the corresponding stationary Gaussian distribution for the spiking network with \(R=-6.06\times 10^{-3}\). Inverse timescale spaces are colored by the log-value of one of the sufficient statistics: A) \(\log_{10}\langle V_{0}\rangle\); B) \(\log_{10}\langle V_{1}^{2}\rangle\); C) \(\log_{10}\langle V_{2}^{2}\rangle\)
in Sec. IV.4. The individual projections were shown to undergo potentially significant rescaling across values of \(R\) that alters them visually, as noted for the non-spiking manifold. Additionally, the manifold can exhibit a warping, as in the fork-to-spoon transition noted in the spiking-type models.
### Manifold dimensionality
As mentioned at the outset (Sec. I), a key issue when analyzing the behavior of a collection of large spiking networks is the dimensionality of the behavioral output space. The true behavioral space of the model--as expressed through the spiking activity--grows with both an increasing network size \(N\) and a decreasing time bin size \(\Delta t\). A goal of the current work is to understand the behavioral output of these models in a lower dimensional framing. Thus, we will briefly interrogate the dimensionality of our spiking and non-spiking models before proceeding to the final discussion.
The behavioral dimensionality of models following Eqn. 1 is \(NT/\Delta t\) for a discrete-time trial of length \(T\). The dimensionality of the behavioral space remains the same when moving to the Gaussian process approximation of the model in Eqn. 5. Population-averaging of
Figure 13: **Coordinate evolution as E/I balance is tuned** Projections onto the \(\langle V_{0}^{2}\rangle^{+}\)-\(\langle V_{0}\rangle^{+}\) plane of the non-spiking model manifolds (top) and the spiking model manifolds (bottom) as the excitation-inhibition ratio is adjusted. The axes of each 2-D projection are scaled to the larger width for the given model and E/I condition for the convenience of visualization. Note that the projections change in size by an order of magnitude as the E/I balance is adjusted. The projections are colored by \(\langle V_{0}\rangle\) as in prior figures.
these approximated dynamics into Eqn. 6 decreases the dimensionality to \(N_{\rm pop}T/\Delta t\), where \(N_{\rm pop}<N\) is the number of populations being considered. By simplifying our analysis to studying the _stationary distribution_ of population behaviors, the behavioral dimensionality drops to \(N_{\rm pop}(N_{\rm pop}+3)/2\), corresponding to the maximal number of independent sufficient statistics (see Sec. III). Finally, the isKL methods embed this manifold of behaviors into an \(N_{\rm pop}(N_{\rm pop}+3)\)-dimensional space, which determines the upper limit of dimensionality that may be measured from embedded data (i.e. sampled models).
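This bookkeeping is summarized in the short sketch below; the network size \(N\) and trial length are illustrative stand-ins, while the population count matches the three sub-populations used in this paper.

```python
# Dimensionality at each stage of the reduction chain described above.
N, N_pop, T, dt = 400, 3, 1000.0, 1.0  # N and T/dt are illustrative stand-ins

dims = {
    "full spiking trial": N * T / dt,
    "population-averaged trial": N_pop * T / dt,
    "stationary sufficient statistics": N_pop * (N_pop + 3) // 2,
    "isKL embedding space": N_pop * (N_pop + 3),
}
for stage, d in dims.items():
    print(f"{stage}: {d}")
# For N_pop = 3 this gives 9 independent statistics and an
# 18-dimensional embedding space, matching the counts in the text.
```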
Having considered how the dimensionality changes across the steps for our analysis, two key questions remain. If we can only see the results of the embedding, how do we gauge the dimensionality of the manifold being analyzed? If we instead have an understanding of the maximal dimensionality of the system, is there any effective reduction in dimensionality that we can measure? To address these questions, we start with a measure of effective manifold dimensionality commonly used in principal component analysis (PCA) known as the participation ratio (PR):
\[PR=\frac{\left(\sum_{i}\Lambda_{i}\right)^{2}}{\sum_{i}\Lambda_{i}^{2}}.\]
As the isKL embedding methods are intimately tied to multidimensional scaling (MDS)--an extension of PCA--the PR should serve as a useful base for measuring the effective dimensionality of our embedded model manifolds. This is complicated slightly by the presence of negative eigenvalues \(\{\Lambda_{i}^{-}\}\) that arise in MDS, so we use an altered PR as our measure of effective dimensionality:
\[PR=\frac{\left(\sum_{i,\pm}|\Lambda_{i}^{\pm}|\right)^{2}}{\sum_{i,\pm}\left( \Lambda_{i}^{\pm}\right)^{2}}. \tag{19}\]
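A direct implementation of this altered participation ratio is sketched below; the two test spectra (one hierarchical, one flat) are illustrative.

```python
import numpy as np

def participation_ratio(eigvals):
    """Altered participation ratio of Eqn. 19, using absolute values so the
    negative MDS eigenvalues {Lambda_i^-} are handled consistently."""
    a = np.abs(np.asarray(eigvals, dtype=float))
    return a.sum()**2 / (a**2).sum()

# A hierarchical spectrum spanning orders of magnitude yields a small PR;
# a flat spectrum of M equal eigenvalues yields PR = M.
print(participation_ratio(np.logspace(0, -6, 18)))  # ~2.6, far below 18
print(participation_ratio(np.ones(18)))             # exactly 18.0
```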
The effective dimensionality of our two model-types across the examined E/I spectrum is shown in Fig. 14. We see that the spiking-type models begin with a relatively high \(PR\approx 8\) in the inhibition-dominated regime before dropping to \(PR\approx 6\) in the middle regime and then rising slightly again in the excitation-dominated regime. By contrast, the non-spiking model has \(PR\approx 3\) in the excitation-dominated regime. The \(PR\) for the non-spiking model then peaks at \(PR\approx 6\) around \(R\approx-0.1\) before decaying back down further into the inhibition-dominated regime. The non-spiking models thus have a lower effective dimensionality than the spiking-type models.
How do we contextualize the measured \(PR\) for these models? First, we note that the maximal possible measured dimensionality for both the spiking and non-spiking model-types is \(N_{\rm pop}(N_{\rm pop}+3)=18\), and the _statistical model_ dimensionality is \(N_{\rm pop}(N_{\rm pop}+3)/2=9\). This indicates that the approximate models show a dimensionality reduction compared to both the model dimensionality and the maximal embedding dimensionality. This seems trivial until one examines the \(PR\) measure for the example embeddings given in Sec. III. The simple Poisson example has just 1 parameter \(\lambda\) and subsequently a maximum embedding dimension of 2. Despite the intrinsic parameter dimensionality of 1, its isKL embedding (Fig. 3F) gives an effective dimensionality much closer to its maximal embedding dimension, \(PR\approx 1.982\). In a similar vein, the example Gaussian model has two parameters \((\mu,\sigma)\), yet its embedding gives \(PR\approx 3.929\), which is nearly its maximum embedding dimensionality of 4 (see Fig. 3G for one of the manifold projections). The measured \(PR\) thus does not seem to reflect the dimensionality of the intrinsic manifold structure, but instead the number of embedding dimensions within the isKL framework required to capture most of the variability in model behaviors. This will be discussed further in Sec. V.
## V Discussion
The central motivation of this paper is to tease apart the impact of cellular and synaptic model parameters--internal timescales and relative synaptic strengths, respectively--on the complex and high-dimensional behavioral space of spiking network models. Taking inspiration from prior applications of information geometry to neural systems [25; 26; 27; 28; 29; 30; 31; 32], we approached this Herculean task by leveraging recently developed methods for studying the information geometry of complex biological models [52; 53] and applying them to spiking network models with more biological features than those considered previously. We began by defining our spiking model [17; 54], simplifying it through population-averaging, using a path-integral formalism to approximate the membrane dynamics as a Gaussian process [17], and then calculating the stationary distribution for that approximation [57].
Figure 14: **Dimensionality of model manifolds** The effective dimensionality of non-spiking and spiking models across the spectrum of E/I conditions, as measured by the altered participation ratio of Eqn. 19.
The stationary distributions for these were then analyzed using the information geometric framework introduced by Teoh and colleagues [53]. This workflow is the core of the work presented.
Before diving into the results of the geometric embedding analysis, we briefly examined the behaviors of full spiking networks across various E/I conditions and for a few different timescale points. We showed that the behavior of the actual networks changes distinctly across these conditions at both the level of spiking and of population-averaged membrane dynamics. Importantly, the spiking models reach a stationary behavior in the long-time limit. This agreed qualitatively with the mean-field predictions and supported the analysis of the stationary distributions from the reduced model.
The information-geometric analysis demonstrated that the approximated models are hierarchical. Manifold widths and coordinate eigenvalues spanned several orders of magnitude, pointing to "hyperribbon" structures with "stiff" and "sloppy" coordinate directions. The distribution of these coordinate eigenvalues changed across E/I conditions, and with it the hierarchy of 2-dimensional manifold projections. These changes in the manifold projections arose from a smooth warping of projections onto specific coordinate pairs as well as a reordering of the coordinate rankings. Identifying each coordinate with its corresponding sufficient statistic highlighted a clustered structure in the eigenvalue distribution of the spiking models across E/I conditions. From this clustered structure, it is possible to pick out the most and least important sufficient statistics for distinguishing between models on a given manifold--these are the parameter combinations that underlie the stiff and sloppy coordinate directions, respectively. In particular, the stiffest directions on the spiking-model manifolds corresponded to the second moments of the excitatory and inhibitory population membrane potentials while the sloppiest directions were those corresponding to the first moments. This suggests that bulk fluctuations are key for determining the behavior of a specific network. It is unfortunately difficult to tie this understanding of stiff and sloppy statistical parameters to the timescale parameters in a manner that is satisfactorily analytical, owing primarily to the transcendental mean-field equations (Eqn. 6c) that must be solved numerically. That said, the implication of the sloppy and stiff coordinate observations is that an adjustment of the membrane and synaptic timescales tends to have a larger effect on the large-population fluctuations than it does on the means.
At the end of our isKL analysis, we began a discussion regarding the dimensionality of the models, their behavior, and their manifolds. The largest reductions in the size of the model being discussed occur when moving to population-averaged models and when focusing on the stationary distribution. The combined effect decreases the dimensionality of the behavioral space being studied from \(NT/\Delta t\) to \(N_{\rm pop}(N_{\rm pop}+3)/2\), in which we essentially shift from a study of particular spike patterns to a study of probability distributions. From here, the isKL methods embed the distribution in \(N_{\rm pop}(N_{\rm pop}+3)\) dimensions. This sets the upper limit of dimensionality at the end of our analysis, that upper limit being 18 for the particular architectures studied here. Using an altered participation ratio to measure the effective dimensionality of our embedded spiking models gave us a range of \(6\lessapprox PR\lessapprox 8\)--depending on the E/I measure \(R\)--less than the maximum possible dimension. It was illustrated elegantly through the example embedding of the 1-dimensional Poisson model that the participation ratio measures the number of dimensions needed to hold a sufficiently representative version of the model manifold rather than the intrinsic dimensionality of the manifold. In fact, the participation ratio for both toy models indicated that they basically "filled" their respective embedding spaces. Taken together, these show that the approximated spiking models are definitely undergoing a degree of dimensionality reduction, as they are not filling the embedding space like the toy models did. The participation ratio can then be interpreted as giving a sense of how "pointed" a change in parameters is. If a 6-dimensional space is needed to represent most features of the manifold, this likely implies that the modulated parameters are mostly affecting 3 natural parameters. However, this is a measure of the effect of the base parameters (i.e. \(\tau_{m}\) and \(\tau_{s}\)) on behavior, and does not necessarily reflect a minimal structure in the base parameter space needed for nearly-full expressivity of the model.
From the copious stick-like projections seen in the hierarchies (Figs. 9 and 10), we may intuit that the embedded manifolds are of an even smaller dimension than is represented by the participation ratio. We can take this a step further by understanding the entire embedding process as a transformation of a manifold originally in the parameter space, implying that it should intrinsically be, at most, 2-dimensional. We also noted in Sec. IV.6 that there are _ad hoc_ ways of reducing the parameter space to a 1-dimensional curve while seemingly preserving much of the variability in statistical parameters. If the goal is to find a reduced number of base parameters to approximately cover the manifold in a more principled way, this would likely require estimating the intrinsic dimensionality of the model manifolds with more sophisticated tools than those discussed here. This would provide the number of parameters--or parameter combinations--needed to understand and express the model. Thus, a combination of both an intrinsic measure and the participation ratio provides a complementary understanding of model manifold dimensionality through the lenses of necessary base parameters and range of impact, respectively.
Lastly, the properties of these spiking model manifolds are likely to be impacted by the conditions under which they are being studied, more specifically any particular task in which they are being implemented. The models studied here are functionally in a spontaneous regime with a tonic drive that is minor on the scale of the network. The structure of a given task is known to collapse high-dimensional
spontaneous activity into a lower-dimensional behavioral space [4; 10], which might be seen directly in information-geometric interrogations such as the one performed in this paper. Furthermore, this may well affect which statistical parameters are important, in turn changing the coordinate rankings, projection hierarchies, and potentially even the degree to which the resulting manifolds are hierarchical. These possibilities require their own attention in follow-up work.
### Limitations
It is important to discuss the limitations of the framework of modeling and analysis expounded upon in this paper. The primary hurdle to expanded usage of these methods is that the base calculations required for each step, combined with the number of samples needed to visually resolve the embedded manifolds, make it costly to increase the dimensionality of the parameter space or the number of network elements. The manifolds embedded here required a large number of sampled parameter points to resolve adequately, restricting the number of parameters considered. Similarly, calculating \(\sim N^{2}\) statistical parameters under the Gaussian process approximation would be computationally infeasible and highly intractable. The first of these restrictions led to the choice of only two key parameters--the timescales--in the current work. The second restriction motivates the reduction of the model by population-averaging. The embedding of the inverse timescale sub-plane (see Sec. II.5) revealed that much of the manifold comprised points near the boundaries where the behavior became pathological (data not shown). This suggests that a principled or data-informed restriction of parameter space may lead to a decrease in the necessary per-parameter sampling density and ease the restrictions presented here.
### Future directions
We conclude by discussing future directions for this work. As developed here, the methods discussed could be applied to models of particular neural circuits in the brain to understand their stationary behavior. For example, one could study the range of behavior of a cortical column when its internal timescales are subject to change. Further, one could study the conditioned range of behaviors in a network in response to a well-defined distribution of inputs as the statistics of the input distribution change. This latter example is meant to demonstrate that the general framework--marginalize, approximate, population-average, and then embed--can apply to modulated parameters other than those presented here.
Perhaps more interesting are the possible extensions of the methods themselves. Of primary interest is the extension of the isKL embedding methods to non-stationary systems. A first-pass way to do this would be to discretize time, apply the embedding procedure at each time-step, and trace points through the embedding space. While conceptually straightforward, this approach would involve significantly more computational investment, and the interpretation of the results would be more complicated than in the system discussed in this paper. One could instead try to extend the isKL embedding framework to apply directly to the path integral representations used to derive the Gaussian process approximations. This would require a proof that the desired properties of the isKL embedding still hold in this functional context, a potentially harder barrier to clear. Together, these highlight the care with which these conceptual extensions of the current method must be carried out.
## Appendix A Population averaging of the Gaussian process approximated network
Here, we derive the reduced model from Eqn. 6 by first making a Gaussian process approximation on the full-network spiking model and then averaging the resulting dynamics by population. The stochastically-spiking full network, modeled using a nonlinear Hawkes process, is reproduced here:
\[\frac{dV_{i}}{dt}=-\tau_{m}^{-1}(V_{i}-\varepsilon_{I})+I_{i}+\tau_{s}^{-1} \left(\mu_{\text{ext}}-J_{\text{self}}\dot{n}_{i}(t)+\sum_{j}w_{ij}\dot{n}_{j }(t)\right)\]
\[\dot{n}_{i}(t)dt\sim\text{Poiss}[\phi(V_{i}(t))dt]\]
Recall that the lowercase subscripts (\(i\), \(j\), etc.) denote individual neurons within the network. \(V_{i}\) is the membrane potential of neuron \(i\), \(\varepsilon_{I}\) is the leak reversal potential, \(w_{ij}\) is the strength of a synaptic connection from neuron \(j\) to neuron \(i\), and \(-J_{\text{self}}\) is an inhibitory self-coupling. The two currents \(\mu_{\text{ext}}\) and \(I_{i}\) represent an average current from an external network and an experimentally injected current, respectively. The process \(\dot{n}_{i}(t)\) is the spike train of neuron \(i\), and \(\phi(\cdot)dt\) is the instantaneous firing rate nonlinearity, here given by \(\phi(x)=\frac{1}{2}(x+\sqrt{x^{2}+1/2})\). Finally, \(\tau_{m}\) and
\(\tau_{s}\) are modulated membrane and synaptic timescales, respectively. The mean-field equations for the steady state membrane potentials can be obtained directly from these equations by using the fact that the approximation neglects fluctuations, and hence \(\langle\dot{n}_{i}(t)\rangle=\langle\phi(V_{i}(t))\rangle\approx\phi(\langle V_{i}(t)\rangle)\), yielding
\[V_{i}^{\rm mf}=\varepsilon_{I}+\tau_{m}I_{i}+\frac{\tau_{m}}{\tau_{s}}\left(\mu_{\rm ext}-J_{\rm self}\phi(V_{i}^{\rm mf})+\sum_{j}w_{ij}\phi(V_{j}^{\rm mf})\right). \tag{A2}\]
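Since this system is transcendental, the mean-field potentials must be found numerically. The sketch below solves Eqn. (A2) by root-finding for a small network; the weights, currents, and timescales are illustrative stand-ins, not the values used in the paper.

```python
import numpy as np
from scipy.optimize import fsolve

def phi(x):
    return 0.5 * (x + np.sqrt(x**2 + 0.5))

n = 5
rng = np.random.default_rng(2)
w = rng.normal(0.0, 0.2, size=(n, n))   # stand-in synaptic weights w_ij
np.fill_diagonal(w, 0.0)
tau_m, tau_s = 10.0, 5.0                # stand-in membrane/synaptic timescales
eps, I_ext, mu_ext, J_self = -65.0, 0.0, 1.0, 0.5

def residual(v):
    # v - RHS of Eqn. (A2); a root is a self-consistent mean-field solution.
    rec = mu_ext - J_self * phi(v) + w @ phi(v)
    return v - (eps + tau_m * I_ext + (tau_m / tau_s) * rec)

v_mf = fsolve(residual, x0=np.full(n, eps))
print(np.max(np.abs(residual(v_mf))))   # ~0 at the solution
```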
To obtain the dynamics of fluctuations around the mean-field predictions, and to set up for future calculations that go even beyond the Gaussian approximation, it is useful to introduce a path integral representation of this stochastic process, using techniques from statistical physics [55]. In discrete time, we can write the joint probability for the membrane potential \(\mathbf{V}(t)\) and the spike trains \(\dot{\mathbf{n}}(t)\) as follows:
\[P[\mathbf{V}(t),\dot{\mathbf{n}}(t)]=\prod_{t,i}P[V_{i}(t)|\dot{\mathbf{n}}(t-dt)]P[\dot{n}_{i}(t-dt)|\mathbf{V}(t-dt)],\]
where the dynamics of the membrane potential are deterministic given a particular history of the spike trains,
\[P[V_{i}(t)|\dot{\mathbf{n}}(t-dt)]\propto\delta\left(\frac{dV_{i}}{dt}+\tau_{ m}^{-1}(V_{i}-\varepsilon_{I})-I_{i}-\tau_{s}^{-1}\left(\mu_{\rm ext}-J_{\rm self }\dot{n}_{i}(t)+\sum_{j}w_{ij}\dot{n}_{j}(t)\right)\right).\]
Here, the proportionality hides a Jacobian factor that arises from a change of variables from \(V_{i}(t)\) to \(\dot{V}_{i}(t)\); this factor is constant for an Ito time discretization, which we assume here.
Next, we take the spike train process to be conditionally Poisson given the current value of the membrane potentials
\[P[\dot{n}_{i}(t-dt)|V_{i}(t-dt)]=\frac{\phi(V_{i}(t-dt))^{\dot{n}_{i}(t-dt)dt} }{(\dot{n}_{i}(t-dt)dt)!}e^{-\phi(V_{i}(t-dt))dt},\]
giving an overall representation
\[P[\mathbf{V}(t),\dot{\mathbf{n}}(t)]=\prod_{t,i} \delta\left(\frac{dV_{i}}{dt}+\tau_{m}^{-1}(V_{i}-\varepsilon_{I})-I_{i}- \tau_{s}^{-1}\left(\mu_{\rm ext}-J_{\rm self}\dot{n}_{i}(t)+\sum_{j}w_{ij} \dot{n}_{j}(t)\right)\right)\] \[\times\left[\frac{\phi(V_{i}(t-dt))^{\dot{n}_{i}(t-dt)dt}}{( \dot{n}_{i}(t-dt)dt)!}e^{-\phi(V_{i}(t-dt))dt}\right].\]
In order to cast this in a path integral representation, the standard approach is to represent the probability distributions in terms of a Fourier space representation. For the \(\delta\)-distribution we have
\[\delta(x)=\int_{-i\infty}^{i\infty}\frac{d\tilde{x}}{2\pi}e^{-\tilde{x}x},\]
and for a Poisson distribution with rate \(\lambda\) we have
\[p(x)=\int_{-i\infty}^{i\infty}\frac{d\tilde{x}}{2\pi}e^{-\tilde{x}x+W(\tilde{x})}=\int_{-i\infty}^{i\infty}\frac{d\tilde{x}}{2\pi}e^{-\tilde{x}x+\lambda(e^{\tilde{x}}-1)},\]
where \(W(\tilde{x})=\lambda(\exp(\tilde{x})-1)\) is the cumulant generating function for the Poisson process. We have adopted the standard notation from physics of writing the auxiliary variables this process introduces with tildes, and absorbing the factor of the imaginary unit \(i\) into the notation (giving imaginary units of integration). The path integral representation of the spiking process above is then given by
\[P[\mathbf{V}(t),\dot{\mathbf{n}}(t)]=\int\mathfrak{D}[\tilde{\mathbf{V}}, \tilde{\mathbf{n}}]e^{-S[\tilde{\mathbf{V}},\mathbf{V},\tilde{\mathbf{n}}, \dot{\mathbf{n}}]},\]
where \(S[\tilde{\mathbf{V}},\mathbf{V},\tilde{\mathbf{n}},\dot{\mathbf{n}}]\) is referred to as the "action" of the process. We take the continuous-time limit, converting the product over time into an integral over time in the exponent. For this particular model, the action is given by
\[S[\tilde{\mathbf{V}},\mathbf{V},\tilde{\mathbf{n}},\dot{\mathbf{n}}]=\int dt\ \sum_{i=1}^{n}\Bigg\{\tilde{V}_{i}\Bigg[\dot{V}_{i}+\frac{V_{i}-\varepsilon_{I}}{\tau_{m}}-I_{i}-\tau_{s}^{-1}\left(\mu_{\rm ext}-J_{\rm self}\dot{n}_{i}(t)+\sum_{j}w_{ij}\dot{n}_{j}(t)\right)\Bigg]+\tilde{n}_{i}(t)\dot{n}_{i}(t)-\left(e^{\tilde{n}_{i}(t)}-1\right)\phi(V_{i})\Bigg\}.\]
For our purposes, it will be convenient to marginalize out the dynamics of the spiking process \(\dot{\mathbf{n}}(t)\) and its conjugate variable \(\tilde{\mathbf{n}}(t)\) to obtain a representation for the stochastic dynamics of the membrane potentials (along with their auxiliary variables \(\tilde{\mathbf{V}}(t)\)). The spike-marginalized action is
\[S[\tilde{\mathbf{V}},\mathbf{V}]=\int dt\ \sum_{i=1}^{n}\left\{\tilde{V}_{i} \left[\dot{V}_{i}+\frac{V_{i}-\varepsilon_{I}}{\tau_{m}}-I_{i}-\tau_{s}^{-1} \mu_{\text{ext}}\right]-\left(e^{\tau_{s}^{-1}\left(-J_{\text{self}}\tilde{V}_ {i}+\sum_{j}\tilde{V}_{j}w_{ji}\right)}-1\right)\phi(V_{i})\right\}.\]
The Gaussian process approximation is derived by expanding this action around the mean-field solution, retaining only terms up to quadratic order in \(\mathbf{V}(t)-\mathbf{V}^{\text{mf}}\) and \(\tilde{\mathbf{V}}(t)\). The mean-field solution is obtained by the saddle-points of the action with respect to \(\mathbf{V}(t)\) and \(\tilde{\mathbf{V}}(t)\), which reproduce Eqn. (A2) for \(\mathbf{V}^{\text{mf}}\) and yield \(\tilde{\mathbf{V}}^{\text{mf}}=\mathbf{0}\). We thus perform a functional Taylor series expansion of the action around \((\tilde{\mathbf{V}},\mathbf{V})=(\mathbf{0},\mathbf{V}^{\text{mf}})\), keeping only terms to the second order in \(\delta\mathbf{V}=\mathbf{V}-\mathbf{V}^{\text{mf}}\) and \(\tilde{\mathbf{V}}\). The result is
\[S[\tilde{V},V] =\frac{1}{2}\int dtdt^{\prime}\ \sum_{ij}\tilde{V}_{i}(t)\left[- \tau_{s}^{-2}\sum_{k}\left(-\delta_{ik}J_{\text{self}}+w_{ik}\right)\left(- \delta_{jk}J_{\text{self}}+w_{jk}\right)\phi(V_{k}^{\text{mf}})\right]\tilde{ V}_{j}(t^{\prime})\] \[\quad+\int dtdt^{\prime}\ \sum_{ij}\tilde{V}_{i}(t)\left[\delta_{ij} \delta(t-t^{\prime})\frac{d}{dt}+\delta_{ij}\left(\tau_{m}^{-1}+\tau_{s}^{-1} J_{\text{self}}\phi^{\prime}(V_{j}^{\text{mf}})\right)-\tau_{s}^{-1}w_{ij}\phi^{ \prime}(V_{j}^{\text{mf}})\right]\delta V_{j}(t^{\prime}).\]
The form of the truncated action is the same as the path integral representation of an Ornstein-Uhlenbeck process derived explicitly by Chow and Buice [55]. We may therefore match terms to identify the effective stochastic process described by this action:
\[\frac{d\delta V_{i}}{dt}=-\sum_{j=1}^{n}\left[\delta_{ij}\left(\tau_{m}^{-1}+ \tau_{s}^{-1}J_{\text{self}}\phi^{\prime}(V_{j}^{\text{mf}})\right)-\tau_{s}^ {-1}w_{ij}\phi^{\prime}(V_{j}^{\text{mf}})\right]\delta V_{j}+\xi_{i}(t)\quad \text{for }i=1,2,...,n,\]
where \(\xi_{i}(t)\) is a zero-mean Gaussian noise with covariance
\[\langle\xi_{i}(t)\xi_{j}(t^{\prime})\rangle=\tau_{s}^{-2}\sum_{k}\left(-\delta _{ik}J_{\text{self}}+w_{ik}\right)\left(-\delta_{jk}J_{\text{self}}+w_{jk} \right)\phi(V_{k}^{\text{mf}})\delta(t-t^{\prime}).\]
Casting this as a proper Ito stochastic differential equation, we get
\[d\delta\mathbf{V}=-\mathbf{A}\delta\mathbf{V}dt+\mathbf{\Sigma}d\mathbf{W}_{t}\]
or equivalently
\[d\mathbf{V}=\mathbf{A}\left(\mathbf{V}^{\text{mf}}-\mathbf{V}\right)dt+ \mathbf{\Sigma}d\mathbf{W}_{t},\]
where
\[A_{ij} =\delta_{ij}\left(\tau_{m}^{-1}+\tau_{s}^{-1}J_{\text{self}}\phi^{ \prime}(V_{j}^{\text{mf}})\right)-\tau_{s}^{-1}w_{ij}\phi^{\prime}(V_{j}^{ \text{mf}})\] \[\left(\mathbf{\Sigma}\mathbf{\Sigma}^{T}\right)_{ij} =\tau_{s}^{-2}\sum_{k}\left(-\delta_{ik}J_{\text{self}}+w_{ik} \right)\left(-\delta_{jk}J_{\text{self}}+w_{jk}\right)\phi(V_{k}^{\text{mf}}).\]
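Given \(\mathbf{A}\) and \(\mathbf{\Sigma\Sigma}^{T}\), the stationary covariance \(\mathbf{C}\) of this Ornstein-Uhlenbeck process solves the Lyapunov equation \(\mathbf{A}\mathbf{C}+\mathbf{C}\mathbf{A}^{T}=\mathbf{\Sigma\Sigma}^{T}\). The sketch below assembles both matrices from the definitions above, using illustrative stand-in parameters and a stand-in mean-field solution, and checks the Lyapunov solution against an Euler-Maruyama simulation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, sqrtm

def phi(x):
    return 0.5 * (x + np.sqrt(x**2 + 0.5))

def dphi(x):
    return 0.5 * (1.0 + x / np.sqrt(x**2 + 0.5))

n, tau_m, tau_s, J_self = 5, 10.0, 5.0, 0.5        # stand-in parameters
rng = np.random.default_rng(2)
w = rng.normal(0.0, 0.2, size=(n, n)); np.fill_diagonal(w, 0.0)
v_mf = np.full(n, -65.0)  # stand-in; in practice, the solution of (A2)

# Drift A_ij and diffusion (Sigma Sigma^T)_ij from the definitions above.
A = (np.diag(1/tau_m + (J_self/tau_s) * dphi(v_mf))
     - (1/tau_s) * w * dphi(v_mf)[None, :])
B = -J_self * np.eye(n) + w                         # (-delta_ik J_self + w_ik)
D = (1/tau_s**2) * (B * phi(v_mf)[None, :]) @ B.T   # Sigma Sigma^T

C_stat = solve_continuous_lyapunov(A, D)  # solves A C + C A^T = D

# Euler-Maruyama simulation of d(dV) = -A dV dt + Sigma dW as a sanity check.
S = np.real(sqrtm(D))
dt, steps = 0.01, 200_000
x = np.zeros(n); samples = np.zeros((steps, n))
for k in range(steps):
    x += -A @ x * dt + S @ rng.normal(size=n) * np.sqrt(dt)
    samples[k] = x
print(np.max(np.abs(np.cov(samples[1000:].T) - C_stat)))  # small vs C_stat
```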
In deriving the reduced dynamics for the population averages, we begin with the Langevin dynamics derived for the full network. We consider the network to have weakly heterogeneous populations in which the connections are scaled Bernoulli variables, i.e. \(w_{ij}=w_{IJ}x_{ij}\), where \(w_{IJ}\) is a constant depending on the pre- and post-synaptic population identities (\(J\) and \(I\), respectively). We take each connection variable \(x_{ij}\) to be independent:
\[x_{ij}\sim\text{Bernoulli}(p_{IJ}).\]
We formally define the average of variable \(A_{i}\) across population \(I\) as
\[\langle\langle A_{i}\rangle\rangle_{I}=A_{I}\equiv\frac{1}{N_{I}}\sum_{i\in I}A_ {i}(t).\]
At this point, we write the population-averaged connection weights as follows:
\[\langle\langle w_{ij}\rangle\rangle_{J}\approx p_{IJ}w_{IJ}.\]
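A quick empirical check of this population-averaged weight is sketched below, with illustrative stand-in population sizes and parameters.

```python
import numpy as np

# Quenched connectivity: w_ij = w_IJ * x_ij with x_ij ~ Bernoulli(p_IJ),
# so the population average <<w_ij>>_J approaches p_IJ * w_IJ.
rng = np.random.default_rng(3)
N_I, N_J = 200, 300          # stand-in population sizes
p_IJ, w_IJ = 0.2, 0.8        # stand-in connection probability and weight

x = rng.random((N_I, N_J)) < p_IJ   # Bernoulli(p_IJ) connection mask
w = w_IJ * x                        # synaptic weights from population J to I

emp = w.sum(axis=1).mean() / N_J    # empirical <<w_ij>>_J, averaged over i
print(emp, p_IJ * w_IJ)             # both ~0.16
```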
We will derive the effective equations for \(V_{I=0}\equiv V_{i=0}\) (the test neuron) and the population averages
\[V_{I=1} \equiv\frac{1}{N_{1}}\sum_{i\in 1}V_{i},\] \[V_{I=2} \equiv\frac{1}{N_{2}}\sum_{i\in 2}V_{i}.\]
We make mean-field-like approximations on the population-average of terms like \(\langle\langle f(A_{i})\rangle\rangle_{I}\approx f(\langle\langle A_{i}\rangle\rangle_{I})=f(A_{I})\), and we additionally assume approximate independence between the distributions of the synaptic connections, the stationary mean-field potentials \(V_{i}^{\text{mf}}\), and the potentials \(V_{i}\). We thus have
\[\begin{aligned}
\frac{d}{dt}\left(\frac{1}{N_{I}}\sum_{i\in I}V_{i}\right)&=\left\langle\left\langle\sum_{j}\left[\delta_{ij}\left(\tau_{m}^{-1}+\tau_{s}^{-1}J_{\text{self}}\phi^{\prime}(V_{j}^{\text{mf}})\right)-\tau_{s}^{-1}w_{ij}\phi^{\prime}(V_{j}^{\text{mf}})\right]\left(V_{j}^{\text{mf}}-V_{j}\right)+\xi_{i}(t)\right\rangle\right\rangle_{I}\\
&\approx\left(\tau_{m}^{-1}+\tau_{s}^{-1}J_{\text{self}}\phi^{\prime}\left(V_{I}^{\text{mf}}\right)\right)\left(V_{I}^{\text{mf}}-V_{I}\right)-\tau_{s}^{-1}\sum_{J}N_{J}\langle\langle w_{ij}\rangle\rangle_{J}\,\phi^{\prime}\left(V_{J}^{\text{mf}}\right)\left(V_{J}^{\text{mf}}-V_{J}\right)+\Xi_{I}(t)\\
&\approx\left(\tau_{m}^{-1}+\tau_{s}^{-1}J_{\text{self}}\phi^{\prime}\left(V_{I}^{\text{mf}}\right)\right)\left(V_{I}^{\text{mf}}-V_{I}\right)-\tau_{s}^{-1}\sum_{J}N_{J}p_{IJ}w_{IJ}\,\phi^{\prime}\left(V_{J}^{\text{mf}}\right)\left(V_{J}^{\text{mf}}-V_{J}\right)+\Xi_{I}(t)\\
\Rightarrow\quad\frac{dV_{I}}{dt}&\approx\sum_{J}\left[\delta_{IJ}\left(\tau_{m}^{-1}+\tau_{s}^{-1}J_{\text{self}}\phi^{\prime}(V_{I}^{\text{mf}})\right)-\tau_{s}^{-1}w_{IJ}p_{IJ}N_{J}\phi^{\prime}(V_{J}^{\text{mf}})\right](V_{J}^{\text{mf}}-V_{J})+\Xi_{I}(t).
\end{aligned}\]
In the last line above, the population-averaged effective noise processes are defined by \(\Xi_{I}(t)=\frac{1}{N_{I}}\sum_{i\in I}\xi_{i}(t)\), and the sum over \(J\) is over an arbitrary definition of sub-populations. In our particular case, we have \(J\in\{0,1,2\}\) as defined in Sec. II.4 with \(N_{0}=1\).
Next, we calculate the covariance of the population-averaged noise processes \(\Xi_{I}(t)\). We make the same mean-field-like approximations as before:
\[\begin{split}\langle\Xi_{I},\Xi_{J}\rangle&=\Bigg{\langle}\frac{1}{N_{I}}\sum_{i\in I}\xi_{i},\frac{1}{N_{J}}\sum_{j\in J}\xi_{j}\Bigg{\rangle}=\frac{1}{N_{I}N_{J}}\sum_{i\in I,j\in J}\Big{[}\langle\xi_{i}\xi_{j}\rangle-\langle\xi_{i}\rangle\langle\xi_{j}\rangle\Big{]}\\&\approx\frac{\tau_{s}^{-2}}{N_{I}N_{J}}\sum_{i\in I,j\in J}\sum_{k}\left(-\delta_{ik}J_{\rm self}+w_{ik}\right)\left(-\delta_{jk}J_{\rm self}+w_{jk}\right)\phi(V_{k}^{\rm mf})\,\delta(t-t^{\prime})\\&\approx\tau_{s}^{-2}\Big{[}\delta_{IJ}\frac{J_{\rm self}^{2}}{N_{I}}\phi(V_{I}^{\rm mf})-J_{\rm self}\,p_{JI}w_{JI}\,\phi(V_{I}^{\rm mf})-p_{IJ}w_{IJ}\,J_{\rm self}\,\phi(V_{J}^{\rm mf})+\sum_{K}p_{IK}w_{IK}\,p_{JK}w_{JK}\,N_{K}\phi(V_{K}^{\rm mf})\Big{]}\delta(t-t^{\prime})\\&=\tau_{s}^{-2}\sum_{K}\left(-\delta_{IK}\frac{J_{\rm self}}{N_{K}}+p_{IK}w_{IK}\right)\left(-\delta_{JK}\frac{J_{\rm self}}{N_{K}}+p_{JK}w_{JK}\right)N_{K}\phi(V_{K}^{\rm mf})\,\delta(t-t^{\prime}).\end{split}\]
Note that in this derivation we are assuming an equivalence between the temporal mean-field membrane potential of each individual neuron \(V_{i}\) (used in the previous section) and the mean-field value of the population-averaged membrane potential \(V_{I}\). This amounts to saying that the network is sufficiently large, so that the mean of the membrane potentials \(V_{i}\) for \(i\in I\) tends toward the mean of \(V_{I}\). This yields a stochastic differential equation of the form
\[d{\bf V}={\bf A}\left({\bf V}^{\rm mf}-{\bf V}\right)dt+{\bf\Sigma}d{\bf W}_{t},\]
where
\[A_{IJ}=\delta_{IJ}\left(\tau_{m}^{-1}+\tau_{s}^{-1}J_{\rm self}\phi^{\prime}(V_{I}^{\rm mf})\right)-\tau_{s}^{-1}p_{IJ}w_{IJ}N_{J}\phi^{\prime}(V_{J}^{\rm mf}),\]

\[\left({\bf\Sigma}{\bf\Sigma}^{T}\right)_{IJ}=\tau_{s}^{-2}\sum_{K}\left(-\delta_{IK}\frac{J_{\rm self}}{N_{K}}+p_{IK}w_{IK}\right)\left(-\delta_{JK}\frac{J_{\rm self}}{N_{K}}+p_{JK}w_{JK}\right)N_{K}\phi(V_{K}^{\rm mf}).\]
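To make the structure of this result concrete, the following sketch integrates an Ornstein-Uhlenbeck-type SDE of this form with an Euler-Maruyama scheme. The drift matrix, fixed point, and noise covariance below are arbitrary stable placeholders chosen for illustration, not values computed from the model above.

```python
# Illustrative Euler-Maruyama integration of the OU-type SDE
# dV = A (V_mf - V) dt + Sigma dW derived above.  All parameter
# values are arbitrary stable placeholders.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.0, -0.3, 0.2],
              [-0.1, 1.2, -0.4],
              [0.2, -0.5, 1.1]])          # placeholder drift matrix
V_mf = np.array([0.5, 1.0, 0.8])          # placeholder mean-field fixed point
D = np.diag([0.05, 0.02, 0.02])           # placeholder Sigma Sigma^T
L = np.linalg.cholesky(D)                 # a valid Sigma

dt, T = 1e-3, 20.0
n = int(T / dt)
V = np.empty((n, 3))
V[0] = V_mf + 0.5                         # start away from the fixed point
for t in range(1, n):
    dW = rng.normal(0.0, np.sqrt(dt), size=3)
    V[t] = V[t - 1] + A @ (V_mf - V[t - 1]) * dt + L @ dW

print("empirical mean:", V[n // 2:].mean(axis=0))   # approaches V_mf
```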
## Appendix B Gaussian process approximation of a population-averaged network
In this appendix, we derive the reduced model from Eqn. 6 by first averaging the Hawkes process dynamics across sub-populations and then making a Gaussian approximation, reversing the order of operations in Appendix A. We begin with the base model:
\[\frac{dV_{i}}{dt}=-\tau_{m}^{-1}(V_{i}-\varepsilon_{I})+I_{i}+\tau_{s}^{-1} \left(\mu_{\rm ext}-J_{\rm self}\dot{n}_{i}(t)+\sum_{J}\sum_{j\in J}w_{ij} \dot{n}_{j}(t)\right)\]
\[\dot{n}_{i}(t)dt\sim{\rm Poiss}[\phi(V_{i}(t))dt].\]
The population-averaged membrane potential dynamics are given by
\[\begin{split}\frac{d}{dt}V_{I}&=\frac{d}{dt}\langle\langle V_{i}\rangle\rangle_{I}\\&=-\frac{\langle\langle V_{i}\rangle\rangle_{I}-\varepsilon_{I}}{\tau_{m}}+\langle\langle I_{i}\rangle\rangle_{I}+\frac{\mu_{\rm ext}}{\tau_{s}}-\tau_{s}^{-1}J_{\rm self}\langle\langle\dot{n}_{i}(t)\rangle\rangle_{I}+\tau_{s}^{-1}\Biggl{\langle}\Biggl{\langle}\sum_{J}\sum_{j\in J}w_{ij}\dot{n}_{j}(t)\Biggr{\rangle}\Biggr{\rangle}_{I}\\&=-\frac{V_{I}-\varepsilon_{I}}{\tau_{m}}+I_{I}+\frac{\mu_{\rm ext}}{\tau_{s}}-\tau_{s}^{-1}J_{\rm self}\langle\langle\dot{n}_{i}(t)\rangle\rangle_{I}+\tau_{s}^{-1}\sum_{J}\sum_{j\in J}\langle\langle w_{ij}\rangle\rangle_{J}\,\dot{n}_{j}(t).\end{split}\]
As before, we take the connections \(w_{ij}\) to be scaled Bernoulli variables, i.e. \(w_{ij}=w_{IJ}x_{ij}\) where \(w_{IJ}\) is a constant depending on the pre- and post-synaptic population identities (\(J\) and \(I\), respectively) and \(x_{ij}\sim\text{Bernoulli}(p_{IJ})\). The population-averaged connections are again given by \(\langle\langle w_{ij}\rangle\rangle_{I}\approx p_{IJ}w_{IJ}\). We next re-cast the spiking processes into population spiking processes using the following definition
\[\dot{m}_{I}(t)\equiv\sum_{i\in I}\dot{n}_{i}(t)=N_{I}\langle\langle\dot{n}_{i} \rangle\rangle_{I}.\]
As each \(\dot{m}_{I}(t)\) is a sum of conditionally-Poisson processes, it is also a conditionally-Poisson process. Using the same mean-field-esque approximation as before, we may approximate the conditional rate of each \(\dot{m}_{I}(t)\) as follows:
\[\begin{split}\dot{m}_{I}=\sum_{i\in I}\dot{n}_{i}(t)&\sim\text{Poiss}\left(\sum_{i\in I}\phi(V_{i}(t))dt\right)=\text{Poiss}\left(N_{I}\langle\langle\phi(V_{i}(t))\rangle\rangle_{I}dt\right)\\&\approx\text{Poiss}\left(N_{I}\phi(\langle\langle V_{i}(t)\rangle\rangle_{I})dt\right)\\&=\text{Poiss}\left(N_{I}\phi(V_{I}(t))dt\right).\end{split}\]
With this, the population-averaged Hawkes process dynamics become
\[\frac{d}{dt}V_{I}= -\frac{V_{I}-\varepsilon_{I}}{\tau_{m}}+I_{I}+\frac{\mu_{\text{ ext}}}{\tau_{s}}-\tau_{s}^{-1}J_{\text{self}}\frac{\dot{m}_{I}(t)}{N_{I}}+\tau_{s}^{- 1}\sum_{J}p_{IJ}w_{IJ}\dot{m}_{J}(t)\] \[=-\frac{V_{I}-\varepsilon_{I}}{\tau_{m}}+I_{I}+\tau_{s}^{-1}\left( \mu_{\text{ext}}+\sum_{J}\left(-\delta_{IJ}\frac{J_{\text{self}}}{N_{I}}+p_{ IJ}w_{IJ}\right)\dot{m}_{J}(t)\right)\] \[\dot{m}_{I}(t)dt\sim\text{Poiss}[N_{I}\phi(V_{I}(t))dt].\]
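As an illustration of these population-averaged dynamics, the sketch below steps the membrane potentials forward while drawing the population spike counts \(\dot{m}_{I}\) from the conditional Poisson law. The softplus transfer function, the population sizes, and all synaptic parameters are placeholder assumptions chosen only so the simulation is stable; they are not the values used in the main text.

```python
# Illustrative simulation of the population-averaged spiking dynamics:
# membrane potentials V_I driven by Poisson spike counts m_I with
# conditional rate N_I * phi(V_I).  All values are placeholders.
import numpy as np

rng = np.random.default_rng(1)

def phi(v):
    return np.log1p(np.exp(v))          # placeholder transfer function

N = np.array([1, 400, 100])             # N_0 = 1 test neuron + two populations
eps = np.zeros(3)                       # resting potentials epsilon_I
I_ext = np.zeros(3)                     # injected currents I_I
tau_m, tau_s, mu_ext, J_self, p = 0.02, 0.005, 0.3, 0.2, 0.2
w = np.array([[0.0, 0.005, -0.03]] * 3) # w_IJ placeholders (column J = sender)

W_eff = -np.diag(J_self / N) + p * w    # coupling of m_J into population I

dt, T = 1e-4, 2.0
V = eps.copy()
for _ in range(int(T / dt)):
    m = rng.poisson(N * phi(V) * dt)    # population spike counts over dt
    V = V + (-(V - eps) / tau_m + I_ext + mu_ext / tau_s) * dt \
          + (W_eff @ m) / tau_s         # each spike kicks V by W_eff / tau_s
print("final V:", V)
```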
After deriving the population-averaged dynamics for the nonlinear Hawkes process, we apply the Gaussian-process approximation scheme to the new dynamics. We begin by applying a mean-field-like approximation to the average of the population-spiking processes, namely \(\langle\dot{m}_{I}(t)\rangle\approx N_{I}\phi(V_{I})\). This is used to find the stationary mean-field solution for the population-averaged membrane potential dynamics, given by a set of transcendental equations
\[V_{I}^{\text{mf}}=\varepsilon_{I}+\tau_{m}I_{I}+\frac{\tau_{m}}{\tau_{s}} \left(\mu_{\text{ext}}+\sum_{J}\left(-\delta_{IJ}\frac{J_{\text{self}}}{N_{I} }+p_{IJ}w_{IJ}\right)N_{J}\phi(V_{J}^{\text{mf}})\right).\]
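Since these mean-field equations are transcendental, they are naturally solved numerically; a damped fixed-point iteration is one simple option, sketched below with the same placeholder transfer function and parameters as in the simulation sketch above.

```python
# Sketch: solving the transcendental mean-field equations for V_I^mf
# by damped fixed-point iteration.  Parameters are placeholders.
import numpy as np

def phi(v):
    return np.log1p(np.exp(v))          # placeholder transfer function

N = np.array([1, 400, 100])
eps, I_ext = np.zeros(3), np.zeros(3)
tau_m, tau_s, mu_ext, J_self, p = 0.02, 0.005, 0.3, 0.2, 0.2
w = np.array([[0.0, 0.005, -0.03]] * 3)
W_eff = -np.diag(J_self / N) + p * w

V = np.zeros(3)
for _ in range(5000):
    rhs = eps + tau_m * I_ext + (tau_m / tau_s) * (mu_ext + W_eff @ (N * phi(V)))
    V = 0.9 * V + 0.1 * rhs             # damping keeps the iteration stable
print("V_mf:", V)
```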
As in Appendix A, we represent the joint probability distribution \(P[\mathbf{V}(t),\dot{\mathbf{m}}(t)]\) as a path integral by discretizing time, making appropriate Fourier transforms, and taking a continuous-time limit. This yields the expression
\[P[\mathbf{V}(t),\dot{\mathbf{m}}(t)]=\int\mathfrak{D}[\tilde{\mathbf{V}}, \tilde{\mathbf{m}}]e^{-S[\tilde{\mathbf{V}},\mathbf{V},\dot{\mathbf{m}},\dot{ \mathbf{m}}]},\]
with
\[S[\tilde{\mathbf{V}},\mathbf{V},\tilde{\mathbf{m}},\dot{\mathbf{m}}]=\int dt\;\sum_{I=0,1,2}\Bigg{\{}\tilde{V}_{I}\left[\dot{V}_{I}+\frac{V_{I}-\varepsilon_{I}}{\tau_{m}}-I_{I}-\tau_{s}^{-1}\left(\mu_{\text{ext}}+\sum_{J}\left(-\delta_{IJ}\frac{J_{\text{self}}}{N_{I}}+p_{IJ}w_{IJ}\right)\dot{m}_{J}(t)\right)\right]+\tilde{m}_{I}(t)\dot{m}_{I}(t)-\left(e^{\tilde{m}_{I}(t)}-1\right)N_{I}\phi(V_{I})\Bigg{\}}.\]
We marginalize out the explicit spiking dynamics as before by finding the zeros of the derivatives of the action w.r.t. \(\dot{\mathbf{m}}(t)\) and its conjugate variables \(\tilde{\mathbf{m}}(t)\). This yields the following marginalized action:
\[S[\tilde{\mathbf{V}},\mathbf{V}]=\int dt\;\sum_{I=0,1,2}\left\{\tilde{V}_{I}\left[\dot{V}_{I}+\frac{V_{I}-\varepsilon_{I}}{\tau_{m}}-I_{I}-\tau_{s}^{-1}\mu_{\text{ext}}\right]-\left(e^{\tau_{s}^{-1}\left(-\frac{J_{\text{self}}}{N_{I}}\tilde{V}_{I}+\sum_{J}\tilde{V}_{J}p_{JI}w_{JI}\right)}-1\right)N_{I}\phi(V_{I})\right\}.\]
We expand this action around the mean-field solution \((\tilde{\mathbf{V}},\mathbf{V})=(\mathbf{0},\mathbf{V}^{\text{mf}})\) to quadratic order. Evaluating individual terms and derivatives at the mean-field solution, we get
\[\begin{split}&S[\mathbf{0},\mathbf{V}^{\text{mf}}]=0,\qquad S_{V_{I}}[\mathbf{0},\mathbf{V}^{\text{mf}}]=0,\\&S_{\tilde{V}_{I}}[\mathbf{0},\mathbf{V}^{\text{mf}}]=\int dt\;\left[\dot{V}_{I}+\frac{V_{I}^{\text{mf}}-\varepsilon_{I}}{\tau_{m}}-\frac{V_{I}^{\text{mf}}-\varepsilon_{I}}{\tau_{m}}\right]=\int dt\;\left[\dot{V}_{I}-\dot{V}_{I}^{\text{mf}}\right]=\int dt\;\delta\dot{V}_{I},\\&S_{\tilde{V}_{I}\tilde{V}_{J}}[\mathbf{0},\mathbf{V}^{\text{mf}}]=\int dt\;\Bigg{[}-\tau_{s}^{-2}\sum_{K}\left(-\delta_{IK}\frac{J_{\text{self}}}{N_{I}}+p_{IK}w_{IK}\right)\left(-\delta_{JK}\frac{J_{\text{self}}}{N_{J}}+p_{JK}w_{JK}\right)N_{K}\phi(V_{K})\Bigg{]},\end{split}\]
\[S_{\tilde{V}_{I}V_{J}}[\mathbf{0},\mathbf{V}^{\mathrm{mf}}]=\int dt\ \left[\delta_{IJ}\left(\tau_{m}^{-1}+\tau_{s}^{-1}J_{\mathrm{self}}\phi^{ \prime}(V_{I}^{\mathrm{mf}})\right)-\tau_{s}^{-1}p_{IJ}w_{IJ}N_{J}\phi^{\prime} (V_{J}^{\mathrm{mf}})\right],\]
and
\[S_{V_{I}V_{J}}[\mathbf{0},\mathbf{V}^{\mathrm{mf}}]=0.\]
Again defining fluctuations in the membrane potential as \(\delta V_{I}:=V_{I}-V_{I}^{\mathrm{mf}}\), the approximated action can be written as
\[S[\tilde{V},V]=\int dt\;\sum_{I}\left\{\delta\dot{V}_{I}+\sum_{J}\left[\delta_{IJ}\left(\tau_{m}^{-1}+\tau_{s}^{-1}J_{\mathrm{self}}\phi^{\prime}(V_{I}^{\mathrm{mf}})\right)-\tau_{s}^{-1}p_{IJ}w_{IJ}N_{J}\phi^{\prime}(V_{J}^{\mathrm{mf}})\right]\delta V_{J}\right\}\tilde{V}_{I}(t)+\frac{1}{2}\int dtdt^{\prime}\;\sum_{IJ}\left[-\tau_{s}^{-2}\sum_{K}\left(-\delta_{IK}\frac{J_{\mathrm{self}}}{N_{I}}+p_{IK}w_{IK}\right)\left(-\delta_{JK}\frac{J_{\mathrm{self}}}{N_{J}}+p_{JK}w_{JK}\right)N_{K}\phi(V_{K})\right]\tilde{V}_{I}(t)\tilde{V}_{J}(t^{\prime}).\]
As before, we can identify the GPA dynamics of the population-averaged Hawkes process as corresponding to an Ornstein-Uhlenbeck process. We may therefore match terms to identify the effective stochastic process described by this action:
\[\frac{d\delta V_{I}}{dt}=-\sum_{J=0,1,2}\left[\delta_{IJ}\left(\tau_{m}^{-1}+ \tau_{s}^{-1}J_{\mathrm{self}}\phi^{\prime}(V_{I}^{\mathrm{mf}})\right)-\tau_ {s}^{-1}p_{IJ}w_{IJ}N_{J}\phi^{\prime}(V_{J}^{\mathrm{mf}})\right]\delta V_{J} +\xi_{I}(t)\quad\text{for }I=0,1,2\]
where \(\xi_{I}(t)\) is a zero-mean Gaussian noise with covariance
\[\langle\xi_{I}(t)\xi_{J}(t^{\prime})\rangle=\tau_{s}^{-2}\sum_{K}\left(-\delta _{IK}\frac{J_{\mathrm{self}}}{N_{I}}+p_{IK}w_{IK}\right)\left(-\delta_{JK} \frac{J_{\mathrm{self}}}{N_{J}}+p_{JK}w_{JK}\right)N_{K}\phi(V_{K})\delta(t-t^ {\prime}).\]
Casting this as a proper Ito stochastic differential equation, we get
\[d\mathbf{V}=\mathbf{A}\left(\mathbf{V}^{\mathrm{mf}}-\mathbf{V}\right)dt+ \mathbf{\Sigma}d\mathbf{W}_{t},\]
where
\[A_{IJ}=\delta_{IJ}\left(\tau_{m}^{-1}+\tau_{s}^{-1}J_{\mathrm{self}}\phi^{\prime}(V_{I}^{\mathrm{mf}})\right)-\tau_{s}^{-1}p_{IJ}w_{IJ}N_{J}\phi^{\prime}(V_{J}^{\mathrm{mf}}),\]
and \(\mathbf{\Sigma}\mathbf{\Sigma}^{T}\) is given by the noise covariance of \(\xi_{I}\) above.
We note that this is consistent with the form derived in Appendix A.
## Appendix C Population averaging for the linear non-spiking model
We also construct a simpler model of networked, linear non-spiking (or "graded potential") neurons. We assume each neuron receives a large number of synaptic inputs that sum to approximately Gaussian noise with non-zero mean \(\mu_{\mathrm{ext}}\), creating a stochastic system with dynamics described by
\[\frac{dV_{i}}{dt}= -\tau_{m}^{-1}(V_{i}-\varepsilon_{I})+I_{i}+\tau_{s}^{-1}\left( \mu_{\mathrm{ext}}-J_{\mathrm{self}}\phi(V_{i})+\sum_{j}w_{ij}\phi(V_{j}) \right)+\xi_{i}(t).\]
We begin this derivation by assuming the connections \(w_{ij}=w_{IJ}x_{ij}\) are scaled Bernoulli variables as in Appendices A,B. Here, the transfer function \(\phi(\cdot)\) is a simple linear function (i.e. \(\phi(x)=x\)). The processes \(\xi_{i}(t)\) are zero-mean Gaussian noise synaptic input from neurons external to the network being examined, and thus they scale with \(\tau_{s}^{-1}\) (i.e. \(\xi_{i}(t)\sim\tau_{s}^{-1}\)). We define the covariance of the noise processes \(\xi_{i}(t)\) as follows:
\[\langle\xi_{i}(t)\xi_{j}(t^{\prime})\rangle=\tau_{s}^{-2}\mu_{\mathrm{ext}}\,\delta_{ij}\,\delta(t-t^{\prime}).\]
We wish to derive a population-averaged model for the membrane potential dynamics for comparison to the Gaussian-process-approximated spiking models. Again, we define
\[\langle\langle A_{i}\rangle\rangle_{I}\equiv\frac{1}{N_{I}}\sum_{i\in I}A_{i}( t).\]
We thus derive the population-averaged dynamics for population \(I\):
\[\begin{split}\frac{d}{dt}\left(\frac{1}{N_{I}}\sum_{i\in I}V_{i}\right)&=-\left\langle\left\langle\frac{V_{i}-\varepsilon_{I}}{\tau_{m}}\right\rangle\right\rangle_{I}+\langle\langle I_{i}\rangle\rangle_{I}+\tau_{s}^{-1}\left(\mu_{\mathrm{ext}}-\langle\langle J_{\mathrm{self}}\phi(V_{i})\rangle\rangle_{I}+\Biggl{\langle}\Biggl{\langle}\sum_{J}N_{J}\langle\langle w_{ij}\phi(V_{j})\rangle\rangle_{J}\Biggr{\rangle}\Biggr{\rangle}_{I}\right)+\langle\langle\xi_{i}(t)\rangle\rangle_{I}\\\Rightarrow\frac{dV_{I}}{dt}&\approx-\frac{V_{I}-\varepsilon_{I}}{\tau_{m}}+I_{I}+\tau_{s}^{-1}\left(\mu_{\mathrm{ext}}-J_{\mathrm{self}}\phi(V_{I})+\sum_{J}p_{IJ}w_{IJ}N_{J}\phi(V_{J})\right)+\Xi_{I}(t),\end{split}\]
where we have defined \(\Xi_{I}(t)\equiv\frac{1}{N_{I}}\sum_{i\in I}\xi_{i}(t)\) for \(I=0,1,2\). The means and covariances of the population-averaged noise processes are as follows:
\[\langle\Xi_{I}(t)\rangle=\left\langle\frac{1}{N_{I}}\sum_{i\in I}\xi_{i}(t) \right\rangle=\frac{1}{N_{I}}\sum_{i\in I}\langle\xi_{i}(t)\rangle=0,\]
and
\[\langle\Xi_{I}(t),\Xi_{J}(t)\rangle= \Bigg{\langle}\frac{1}{N_{I}}\sum_{i\in I}\xi_{i}(t),\frac{1}{N_{ J}}\sum_{j\in J}\xi_{j}(t)\Bigg{\rangle}=\frac{1}{N_{I}N_{J}}\sum_{i\in I,j\in J} \langle\xi_{i}(t)\xi_{j}(t)\rangle-\langle\xi_{i}(t)\rangle\langle\xi_{j}(t)\rangle\] \[=\frac{1}{\tau_{s}^{2}N_{I}N_{J}}\sum_{i\in I,j\in J}\delta_{ij} \mu_{\text{ext}}\delta(t-t^{\prime})\] \[=\frac{\delta_{IJ}}{\tau_{s}^{2}N_{I}^{2}}\sum_{i\in I}\mu_{\text {ext}}\delta(t-t^{\prime})=\frac{\delta_{IJ}}{\tau_{s}^{2}N_{I}^{2}}N_{I}\mu_ {\text{ext}}\delta(t-t^{\prime})\] \[=\frac{\delta_{IJ}}{\tau_{s}^{2}N_{I}}\mu_{\text{ext}}\delta(t-t ^{\prime}).\]
We can then rewrite the population dynamics as
\[\begin{split}dV_{I}&=\left(-\frac{V_{I}-\varepsilon_{I}}{\tau_{m}}+I_{I}+\tau_{s}^{-1}\left(\mu_{\mathrm{ext}}-J_{\mathrm{self}}\phi(V_{I})+\sum_{J}p_{IJ}w_{IJ}N_{J}\phi(V_{J})\right)\right)dt+\Xi_{I}(t)dt\\\to d\mathbf{V}&=\mathbf{A}\left(\mathbf{A}^{-1}\mathbf{b}-\mathbf{V}\right)dt+\mathbf{\Sigma}d\mathbf{W}_{t}=\mathbf{A}\left(\mathbf{\mu}-\mathbf{V}\right)dt+\mathbf{\Sigma}d\mathbf{W}_{t},\qquad b_{I}=\frac{\varepsilon_{I}}{\tau_{m}}+I_{I}+\tau_{s}^{-1}\mu_{\mathrm{ext}},\end{split}\]
where
\[\begin{split}\mathbf{A}_{IJ}&=\delta_{IJ}\left(\tau_{m}^{-1}+\tau_{s}^{-1}J_{\mathrm{self}}\right)-\tau_{s}^{-1}p_{IJ}w_{IJ}N_{J}=\delta_{IJ}\tau_{m}^{-1}-\tau_{s}^{-1}w_{IJ}^{*},\\w_{IJ}^{*}&=-\delta_{IJ}J_{\mathrm{self}}+p_{IJ}w_{IJ}N_{J},\\\left(\Sigma\Sigma^{T}\right)_{IJ}&=\frac{\delta_{IJ}}{\tau_{s}^{2}N_{I}}\mu_{\mathrm{ext}},\end{split}\]
consistent with the drift matrices of Appendices A and B for \(\phi^{\prime}=1\).
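For an Ornstein-Uhlenbeck process of this form, the stationary covariance \(P\) solves the continuous Lyapunov equation \(\mathbf{A}P+P\mathbf{A}^{T}=\mathbf{\Sigma}\mathbf{\Sigma}^{T}\), which the sketch below solves numerically. The parameter values are the same placeholders as in the earlier sketches, chosen so that \(\mathbf{A}\) is stable.

```python
# Sketch: for dV = A (mu - V) dt + Sigma dW, the stationary covariance
# P solves A P + P A^T = Sigma Sigma^T.  Parameters are placeholders.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

tau_m, tau_s, mu_ext, J_self, p = 0.02, 0.005, 0.3, 0.2, 0.2
N = np.array([1, 400, 100])
w = np.array([[0.0, 0.005, -0.03]] * 3)

# A_IJ = delta_IJ (1/tau_m + J_self/tau_s) - p w_IJ N_J / tau_s
A = np.diag(np.full(3, 1 / tau_m + J_self / tau_s)) - (p * w * N) / tau_s
Q = np.diag(mu_ext / (tau_s**2 * N))    # (Sigma Sigma^T)_IJ

P = solve_continuous_lyapunov(A, Q)     # scipy solves A X + X A^H = Q
print("stationary covariance:\n", P)
```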
## Appendix D Balance equations
To derive the balanced state conditions for the network, we begin with the population-averaged spiking network as derived in Appendix B:
\[\frac{d}{dt}V_{I}=-\frac{V_{I}-\varepsilon_{I}}{\tau_{m}}+I_{I}+\tau_{s}^{-1} \left(\mu_{\text{ext}}+\sum_{J=0,1,2}\left(-\delta_{IJ}\frac{J_{\text{self}} }{N_{I}}+p_{IJ}w_{IJ}\right)\dot{m}_{J}(t)\right)\]
\[\dot{m}_{I}(t)dt\sim\text{Poiss}[N_{I}\phi(V_{I}(t))dt],\]
where \(p_{IJ}w_{IJ}\) came from the population-averaged synaptic connection \(\langle\langle w_{ij}\rangle\rangle_{J}\) and the effective spike count processes are \(\dot{m}_{J}(t)=\sum_{j\in J}\dot{n}_{j}(t)\). The total synaptic input to "neuron" \(I\) is \(I_{I}+\tau_{s}^{-1}\left(\mu_{\text{ext}}-\frac{J_{\text{self}}}{N_{I}}\dot{m}_{I}(t)+\sum_{J}p_{IJ}w_{IJ}\dot{m}_{J}(t)\right)\). We want to estimate the mean and variance of this input, taken over the stochastic process. The mean is straightforward, yielding
\[\tau_{s}^{-1}\kappa_{I}\equiv I_{I}+\tau_{s}^{-1}\left(\mu_{\text{ext}}-J_{ \text{self}}\phi(V_{I})+\sum_{J}p_{IJ}w_{IJ}N_{J}\phi(V_{J})\right).\]
Note that the self-coupling correction term \(-\frac{J_{\text{self}}}{N_{I}}\dot{m}_{I}(t)\) is always going to be smaller than the \(\sum_{J}p_{IJ}w_{IJ}\dot{m}_{J}\) term, so for the purposes of the balanced-condition calculation we will neglect it. For the current work, we take the injected currents \(I_{I}\) to be constants.
Calculating the covariance of the total input at times \(t\) and \(t^{\prime}\) yields
\[\sum_{JK}p_{IJ}w_{IJ}p_{IK}w_{IK}\Big{[}\langle\dot{m}_{J}(t)\dot{m}_{K}(t^{ \prime})\rangle-\langle\dot{m}_{J}(t)\rangle\langle\dot{m}_{K}(t^{\prime}) \rangle\Big{]}.\]
We make a Poisson approximation to replace the covariance of the \(\dot{m}\)'s with \(\langle\dot{m}_{J}(t)\rangle\delta_{JK}\delta(t-t^{\prime})\). Hence, the covariance becomes
\[\sum_{J}(p_{IJ}w_{IJ})^{2}N_{J}\phi(V_{J})\delta(t-t^{\prime}).\]
We want the variance of the synaptic input to be \(\mathcal{O}(N^{0})\), which means that to leading order we want
\[\sum_{J}(p_{IJ}w_{IJ})^{2}N_{J}\phi(V_{J})\approx(p_{I1}w_{I1})^{2}N_{1}\phi(V_{ 1})+(p_{I2}w_{I2})^{2}N_{2}\phi(V_{2})\sim\mathcal{O}(N^{0}).\]
We neglect the contribution from the test neuron because it is sub-leading here, i.e. \(N_{0}=1\ll N_{1},\,N_{2}\). In order for this expression to be order 1, we see that we need \(w_{IJ}\) to scale like \(1/\sqrt{N}\) as implemented in Eqns. 12 & 13.
We return to the mean input to neuron \(I\), which we will write as
\[\tau_{s}^{-1}\kappa_{I}\approx\sqrt{N}\left(\frac{I_{I}+\tau_{s}^{-1}\mu_{ \text{ext}}}{\sqrt{N}}+\tau_{s}^{-1}\left\{p_{I1}w_{I1}\frac{N_{1}}{\sqrt{N}} \phi(V_{1})+p_{I2}w_{I2}\frac{N_{2}}{\sqrt{N}}\phi(V_{2})\right\}\right).\]
For a balanced network \(\kappa_{I}\) should be \(\mathcal{O}(1)\) for all \(I\), which means that the terms in brackets must vanish faster than \(1/\sqrt{N}\). We assume that \(I_{I},\mu_{\text{ext}}\propto\sqrt{N}\), and because \(w_{IJ}\sim 1/\sqrt{N}\) and \(N_{I}\propto N\) (for \(I\neq 0\)), the terms in brackets are \(\mathcal{O}(1)\).
As \(N\to\infty\), the terms in brackets must vanish in order for \(\kappa_{I}\) to be finite. This yields a linear system of equations that uniquely determines the means \(\mu_{I}\equiv\phi(V_{I})\), and allows us to place constraints on the parameters:
\[-\begin{bmatrix}I_{1}+\tau_{s}^{-1}\mu_{\text{ext}}\\ I_{2}+\tau_{s}^{-1}\mu_{\text{ext}}\end{bmatrix}=\frac{1}{\tau_{s}}\begin{bmatrix} p_{11}w_{11}N_{1}&p_{12}w_{12}N_{2}\\ p_{21}w_{21}N_{1}&p_{22}w_{22}N_{2}\end{bmatrix}\begin{bmatrix}\phi(V_{1})\\ \phi(V_{2})\end{bmatrix}.\]
Solving this system of equations for the spike rates \(\phi(V_{I}^{\text{mf}})\), we get
\[\begin{split}\phi(V_{1})&=\frac{\tau_{s}}{N_{1}}\frac{p_{12}w_{12}\left(I_{2}+\tau_{s}^{-1}\mu_{\text{ext}}\right)-p_{22}w_{22}\left(I_{1}+\tau_{s}^{-1}\mu_{\text{ext}}\right)}{p_{11}p_{22}w_{11}w_{22}-p_{12}p_{21}w_{21}w_{12}},\\\phi(V_{2})&=\frac{\tau_{s}}{N_{2}}\frac{p_{21}w_{21}\left(I_{1}+\tau_{s}^{-1}\mu_{\text{ext}}\right)-p_{11}w_{11}\left(I_{2}+\tau_{s}^{-1}\mu_{\text{ext}}\right)}{p_{11}p_{22}w_{11}w_{22}-p_{12}p_{21}w_{21}w_{12}}.\end{split}\]
In the case of our particular models, we can further reduce this expression by noting that \(I_{I}=0\) for \(I=1,2\) and \(p_{IJ}=p\,\,\forall\,\,I,J\):
\[\phi(V_{1}) =\frac{1}{pN_{1}}\frac{w_{12}-w_{22}}{w_{11}w_{22}-w_{21}w_{12}} \mu_{\text{ext}},\] \[\phi(V_{2}) =\frac{1}{pN_{2}}\frac{w_{21}-w_{11}}{w_{11}w_{22}-w_{21}w_{12}} \mu_{\text{ext}}.\]
We highlight here that \(\phi(V_{I})>0\) by its definition as a firing rate. Additionally, \(\mu_{\text{ext}}\) is assumed to be synaptic input projected into the local network and is thus positive (i.e. excitatory) here. Taken together, these two points mean the synaptic parameters must satisfy one of the two following sets of inequalities to be in a balanced regime:
\[\begin{cases}w_{11}w_{22}>w_{12}w_{21}\\ w_{12}>w_{22}\\ w_{21}>w_{11}\end{cases} \tag{12}\]
or
\[\begin{cases}w_{11}w_{22}<w_{12}w_{21}\\ w_{12}<w_{22}\\ w_{21}<w_{11}\end{cases}. \tag{13}\]
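A quick numerical check of the closed-form balanced rates above, with placeholder synaptic values chosen to satisfy the second inequality set (Eq. 13), is sketched here:

```python
# Sketch: evaluating the balanced-state rates phi(V_1), phi(V_2) from
# the closed-form solution and checking positivity.  The synaptic
# values are placeholders satisfying the Eq. (13) regime.
import numpy as np

N1, N2, p, mu_ext = 4000, 1000, 0.2, 30.0
w11, w12, w21, w22 = 1.0, -2.0, 0.5, -1.5   # w_I2 < 0: population 2 inhibitory

det = w11 * w22 - w21 * w12                  # -1.5 + 1.0 = -0.5
rate1 = (w12 - w22) / det * mu_ext / (p * N1)
rate2 = (w21 - w11) / det * mu_ext / (p * N2)

assert w11 * w22 < w12 * w21 and w12 < w22 and w21 < w11   # Eq. (13) regime
assert rate1 > 0 and rate2 > 0
print("balanced rates:", rate1, rate2)       # ~0.0375 and ~0.15
```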
With this, we have derived the appropriate scaling for the various parameters in the model and found constraints on the synaptic strengths in order to satisfy the necessary properties of a balanced network.
|
2304.07973 | Frequency Regularization: Restricting Information Redundancy of
Convolutional Neural Networks | Convolutional neural networks have demonstrated impressive results in many
computer vision tasks. However, the increasing size of these networks raises
concerns about the information overload resulting from the large number of
network parameters. In this paper, we propose Frequency Regularization to
restrict the non-zero elements of the network parameters in the frequency
domain. The proposed approach operates at the tensor level, and can be applied
to almost all network architectures. Specifically, the tensors of parameters
are maintained in the frequency domain, where high frequency components can be
eliminated by zigzag setting tensor elements to zero. Then, the inverse
discrete cosine transform (IDCT) is used to reconstruct the spatial tensors for
matrix operations during network training. Since high frequency components of
images are known to be less critical, a large proportion of these parameters
can be set to zero when networks are trained with the proposed frequency
regularization. Comprehensive evaluations on various state-of-the-art network
architectures, including LeNet, Alexnet, VGG, Resnet, ViT, UNet, GAN, and VAE,
demonstrate the effectiveness of the proposed frequency regularization. For a
very small accuracy decrease (less than 2\%), a LeNet5 with 0.4M parameters can
be represented by only 776 float16 numbers (over 1100$\times$ reduction), and a
UNet with 34M parameters can be represented by only 759 float16 numbers (over
80000$\times$ reduction). In particular, the original size of the UNet model is
366MB; we reduce it to 4.5KB. | Chenqiu Zhao, Guanfang Dong, Shupei Zhang, Zijie Tan, Anup Basu | 2023-04-17T03:32:29Z | http://arxiv.org/abs/2304.07973v3 | # Frequency Regularization: Reducing Information Redundancy in Convolutional Neural Networks
###### Abstract.
Convolutional neural networks have demonstrated impressive results in many computer vision tasks. However, the increasing size of these networks raises concerns about the information overload resulting from the large number of network parameters. In this paper, we propose Frequency Regularization to restrict the non-zero elements of the network parameters in the frequency domain. The proposed approach operates at the tensor level, and can be applied to almost all network architectures. Specifically, the tensors of parameters are maintained in the frequency domain, where high frequency components can be eliminated by zigzag setting tensor elements to zero. Then, the inverse discrete cosine transform (IDCT) is used to reconstruct the spatial tensors for matrix operations during network training. Since high frequency components of images are known to be less critical, a large proportion of these parameters can be set to zero when networks are trained with the proposed frequency regularization. Comprehensive evaluations on various state-of-the-art network architectures, including LeNet, Alexnet, VGG, Resnet, ViT, UNet, GAN, and VAE, demonstrate the effectiveness of the proposed frequency regularization. For a very small accuracy decrease (less than 2%), a LeNet5 with 0.4M parameters can be represented by only 776 float16 numbers (over 1100\(\times\) reduction), and a UNet with 34M parameters can be represented by only 759 float16 numbers (over 80000\(\times\) reduction). In particular, the original size of the UNet model is 366MB; we reduce it to 4.5KB.
Frequency domain, Information redundancy, Network regularization, Convolutional neural network
Footnote †: Authors contributed equally.
|
2305.10840 | Uncertainty Quantification in Deep Neural Networks through Statistical
Inference on Latent Space | Uncertainty-quantification methods are applied to estimate the confidence of
deep-neural-networks classifiers over their predictions. However, most widely
used methods are known to be overconfident. We address this problem by
developing an algorithm that exploits the latent-space representation of data
points fed into the network, to assess the accuracy of their prediction. Using
the latent-space representation generated by the fraction of training set that
the network classifies correctly, we build a statistical model that is able to
capture the likelihood of a given prediction. We show on a synthetic dataset
that commonly used methods are mostly overconfident. Overconfidence occurs also
for predictions made on data points that are outside the distribution that
generated the training data. In contrast, our method can detect such
out-of-distribution data points as inaccurately predicted, thus aiding in the
automatic detection of outliers. | Luigi Sbailò, Luca M. Ghiringhelli | 2023-05-18T09:52:06Z | http://arxiv.org/abs/2305.10840v1 | # Uncertainty Quantification in Deep Neural Networks through Statistical Inference on Latent Space
###### Abstract
Uncertainty-quantification methods are applied to estimate the confidence of deep-neural-networks classifiers over their predictions. However, most widely used methods are known to be overconfident. We address this problem by developing an algorithm that exploits the latent-space representation of data points fed into the network, to assess the accuracy of their prediction. Using the latent-space representation generated by the fraction of training set that the network classifies correctly, we build a statistical model that is able to capture the likelihood of a given prediction. We show on a synthetic dataset that commonly used methods are mostly overconfident. Overconfidence occurs also for predictions made on data points that are outside the distribution that generated the training data. In contrast, our method can detect such out-of-distribution data points as inaccurately predicted, thus aiding in the automatic detection of outliers.
## 1 Introduction
For the practical, widespread use of artificial-intelligence (AI) predictive models, especially where predictions would impact individuals and societies, the understanding of the limits of applicability of such models is of fundamental importance. An ideal AI model should be able to always provide an accurate prediction in "familiar situations", while, in "less familiar situations", it should be able to at least signal that its prediction might be inaccurate. The challenge is to detect such "(less) familiar situations", by using only the trained model, and therefore the training data, and the (unlabelled) test data points. This challenge is generically referred to as "uncertainty quantification" (UQ)[1].
Uncertainty quantification is important in applications where the consequences of incorrect predictions are severe, such as in medical imaging for diagnosing diseases [2; 3; 4; 5], in autonomous-vehicles driving, where incorrect predictions can lead to accidents [6; 7; 8; 9]. UQ is also important in scientific applications, such as physics data analysis, where neural networks with reliable estimates of prediction uncertainty and robustness are needed [10]. Finally, UQ is of fundamental importance in scientific and technological practice, where experiment design is performed[11]. Therein, active-learning techniques [12] are necessarily based on reliable UQ.
Two types of uncertainties related to the data can be identified [1]: one is related to the data that are available, the other to the data that are not (yet) available, i.e., it is a knowledge uncertainty. The former is named aleatoric or irreducible uncertainty and is caused by intrinsic noise in the measurements of
the data (both the descriptor and the target quantity) as well as the incompleteness of the descriptor (so that different data points are mapped into the same input description, with different target values). The latter is named epistemic uncertainty and is caused by lack of knowledge on a portion of data. In this context, quantifying the epistemic uncertainty would be related to identifying (test) data points that are different from the training data, i.e., they belong to input domains that are distant or even disjoint from the input domain(s) of the training data.
In this paper, we focus on the estimate of the epistemic uncertainty, in particular for the class of deep-neural-network classifiers. Specifically, we address the challenge of identifying data points that are "out of distribution". Out-of-distribution data points could be outliers or novelties/discoveries. Outliers include wrongly collected or labeled data points, typically to be discarded, while novelties/discoveries are data points in which the trained model is still applicable but belong to input domains that are distant/disjoint from the input domain(s) of the training data. In either case, the prediction of the model would require a comparison with the ground truth (assuming it is feasible) in order to decide how to proceed. The actual protocol would be related to the chosen active-learning [13; 14] strategy, but as a first, necessary step, we need to reliably estimate if a test point is out of distribution.
We propose a novel approach based on the analysis of the latent-space representation, which does not require fine-tuning and which we show to be both more reliable and cost-effective than the methods known to us.
### Related work
Two broad classes of (epistemic) UQ strategies have been proposed and developed in recent years: Bayesian techniques, including Monte Carlo (MC) based approximations (e.g., _MC-dropout_), and _ensemble_-based techniques.
Bayesian techniques and Monte Carlo dropout. In an ideal Bayesian neural network, distributions of the training weights are learned, rather than specific values [15]. This allows for a direct estimate of the prediction uncertainty. However, in practical applications, the Bayesian architecture needs to be approximated, as it is often difficult, if at all possible, to compute the exact posterior distribution. A popular approximation is the _MC-dropout_ technique, where in production the network is run several times and a fraction of nodes are randomly switched off. This yields a distribution of predictions from which the average and standard deviation are used as the actual prediction and uncertainty estimate, respectively. It is unclear if the output distribution is always a good approximation of the posterior distribution and, in addition, results are sensitive to the dropout ratio.
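A minimal sketch of the MC-dropout procedure, assuming a generic PyTorch classifier (the architecture and dropout rate below are placeholders, not the paper's models):

```python
# Sketch of MC-dropout: keep dropout active at prediction time and
# aggregate many stochastic forward passes into a mean prediction
# and a spread-based uncertainty.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(), nn.Dropout(p=0.25),
    nn.Linear(256, 10),
)

def mc_dropout_predict(model, x, n_passes=100):
    model.train()                       # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=-1) for _ in range(n_passes)
        ])
    return probs.mean(0), probs.std(0)  # prediction and uncertainty

x = torch.randn(1, 784)                 # dummy input
mean, std = mc_dropout_predict(model, x)
print(mean.argmax(-1).item(), std.max().item())
```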
Ensemble techniques. _Ensemble_ techniques in general require the training of several instances of a neural network, by varying the network architecture and/or the initial seeds for the training [16; 17]. However, _ensemble_ techniques for UQ do not necessarily yield a Bayesian uncertainty estimate. In fact, _ensemble_ techniques, also beyond the neural-network model class (e.g., random forests), are more robust when the "true model" is not part of the ensemble, which is at odds with a Bayesian estimate [1]. So, in general, it is not evident why and when an ensemble of NNs can generate good uncertainty estimates. _Ensemble_ techniques require training and then running several models for prediction, which is computationally costly.
## 2 Method
In the context of a data set \(\left\{X_{i}^{(0)},y_{i}\right\}_{i=1}^{N}\), where \(X_{i}^{(0)}\in\mathbb{R}^{D}\) is a \(D\)-dimensional input vector and \(y_{i}\in\{1,\ldots K\}\) is a class label, a deep neural network can be trained to predict the label \(y_{i}\) from the input \(X_{i}^{(0)}\). A feed-forward deep neural network is composed of \(L\) consecutive layers, where each layer takes the output of the previous layer as input. The \(l\)-th layer, where \(l\in 1\ldots L\), takes an input vector \(X_{i}^{(l-1)}\) and returns an output vector \(X_{i}^{(l)}\) defined as:
\[X_{i}^{(l)}=\sigma\left(W_{l}X_{i}^{(l-1)}+b_{l}\right), \tag{1}\]
where \(W_{l}\) and \(b_{l}\) are respectively the weights matrix and bias vector learned during training, and \(\sigma(\cdot)\) is a non-linear activation function. The final layer of the network returns a \(K\)-dimensional output vector \(X^{(L)}\), and the predicted category can be inferred using the softmax function, which normalizes the output vector to obtain a probability distribution over the \(K\) classes. The latent representation of an input vector is the set of \(L-1\) vectors \(\left\{X^{(l)}\right\}_{l=1}^{L-1}\) that are generated when the input vector \(X^{(0)}\) is fed into the network. In our method, we use this latent representation to assess the confidence of the network's prediction \(f(X^{(0)})\).
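A minimal sketch of such a feed-forward classifier that exposes its latent representations, written in PyTorch; the layer sizes and the sigmoid nonlinearity are placeholder assumptions, not the architectures used in the experiments below.

```python
# Sketch of a feed-forward classifier that also returns the latent
# representations X^{(1)}, ..., X^{(L-1)} used by the method.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, d_in=784, hidden=(1024, 1024), n_classes=10):
        super().__init__()
        dims = (d_in, *hidden)
        self.hidden = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:])
        )
        self.out = nn.Linear(dims[-1], n_classes)

    def forward(self, x):
        latents = []
        for layer in self.hidden:
            x = torch.sigmoid(layer(x))   # sigma: placeholder nonlinearity
            latents.append(x)             # store X^{(l)}
        return self.out(x), latents       # logits X^{(L)} and latent set

logits, latents = MLP()(torch.randn(5, 784))
print(logits.shape, [z.shape for z in latents])
```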
The latent representations for a training set \(\left\{X^{(0)}\right\}\) are obtained by forwarding the entire set through the network, which results in a set of latent representations \(\left\{X^{(l)}_{i}\right\}_{l=1}^{L-1}\) for each input vector \(X^{(0)}_{i}\). These latent representations are then grouped into \(K\) subsets \(\left\{X^{(l,k)}_{i}\right\}\) based on the network's predicted label \(k\). In each subset, we prune all vectors \(X_{j}\) whose predicted label \(f(X^{(0)}_{j})\) does not match the true label \(y_{j}\). This results in the creation of latent confidence sets \(\left\{X^{(l,k)*}\right\}\), which contain the latent representations of only those vectors in the training set that the network classifies accurately. We then use these latent confidence sets to build a statistical model that can be used to estimate the likelihood of a certain prediction. The underlying assumption is that the latent representation of a well-classified input vector will be similar to the latent representations in the corresponding latent confidence set, whereas the latent representation of a misclassified input vector will be dissimilar. In essence, this approach allows us to obtain a measure of confidence for the network's predictions by leveraging the information contained in the latent representations of accurately classified training set vectors.
Every hidden layer \(l\) and each class \(k\) within the latent confidence set \(\left\{X^{(l,k)*}\right\}\) generates an independent statistical model. For our purposes, we opt for a multivariate Gaussian distribution as the prior distribution due to its numerous advantages. Specifically, it is computationally efficient, scalable in high dimensions, and exhibits fast exponential decay. These properties make it an ideal choice for accurately detecting out-of-distribution points. To build the multivariate normal distribution from the latent confidence set \(\left\{X^{(l,k)*}\right\}\), we compute the mean vector \(\mu^{(l,k)*}\) and covariance matrix \(\Sigma^{(l,k)*}\). The distribution is given by:
\[p(X_{i}^{(l,k)})=\frac{1}{\sqrt{(2\pi)^{d_{l}}\det\Sigma^{(l,k)*}}}\exp\left(-\frac{1}{2}(X_{i}^{(l,k)}-\mu^{(l,k)*})^{T}\left(\Sigma^{(l,k)*}\right)^{-1}(X_{i}^{(l,k)}-\mu^{(l,k)*})\right), \tag{2}\]

where \(d_{l}\) is the dimension of layer \(l\).

\begin{table}
\begin{tabular}{l} \hline \hline
**Preparation:** \\ - Collect input data set \(\left\{X^{(0)}_{i},y_{i}\right\}_{i=1}^{N}\); \\ - Train a feed-forward neural network to predict \(f(X^{(0)}_{i})=y_{i}\); \\ - Create training subsets \(\left\{X^{(0,k)}\right\}\) based on network prediction \(k\); \\ - Prune all misclassified data to create confidence sets \(\left\{X^{(0,k)*}\right\}\); \\ - Collect all latent representations in confidence sets \(\left\{X^{(l,k)*}\right\}\); \\ - Compute the mean vector \(\mu^{(l,k)*}\) and covariance matrix \(\Sigma^{(l,k)*}\); \\ - Calculate the log-probability of all confidence sets based on Eq. 2; \\ - Select the \(\alpha\) and \(\beta\) confidence values; \\ - Find the \(q^{(l,k)}_{\alpha}\) and \(q^{(l,k)}_{\beta}\) percentiles; \\ - Select acceptance value \(a\). \\
**Evaluation:** \\ - Given an input point \(X_{i}\): \\ - Predict label \(f(X_{i})=k\) and store the latent representations \(\left\{X^{(l,k)}_{i}\right\}\); \\ - For each \(l\), compute the layer confidence with Eq. 3; \\ - Compute the overall confidence with Eq. 4; \\ - If confidence \(<a\): reject the prediction; if confidence \(\geq a\): accept it. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Algorithm of the _inference_ method.
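A sketch of this preparation stage is given below: it fits one Gaussian model per (layer, class) pair on the latent confidence sets. The small ridge added to the covariance is our own numerical safeguard to keep the matrix invertible, not part of the method as stated.

```python
# Sketch: building the per-layer, per-class Gaussian models of Eq. 2
# from the latent confidence sets (correctly classified points only).
import numpy as np
from scipy.stats import multivariate_normal

def fit_confidence_models(latents, preds, labels, n_classes):
    """latents: list over layers l of arrays of shape (N, d_l)."""
    correct = preds == labels                 # prune misclassified points
    models = {}
    for l, Z in enumerate(latents):
        for k in range(n_classes):
            Zk = Z[correct & (preds == k)]    # latent confidence set
            mu = Zk.mean(axis=0)
            # ridge term: our numerical safeguard, not part of the method
            cov = np.cov(Zk, rowvar=False) + 1e-6 * np.eye(Zk.shape[1])
            models[(l, k)] = multivariate_normal(mu, cov)
    return models

# usage: log-probability of a new latent vector z at layer l, class k:
#   logp = models[(l, k)].logpdf(z)
```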
To assess the confidence of a prediction using a probabilistic model, we aim for high probabilities for data points where we know the model makes correct predictions and low probabilities for those with lower likelihood. In practice, we normalize the probability distribution in Eq. 2 so that it approaches one for data points with similar likelihoods to those in the latent confidence set, and approaches zero for those with much lower likelihoods. To achieve this, we construct a histogram of the log-probabilities \(\left\{\log p\left(X^{(l,k)*}\right)\right\}\) of the confidence sets and use it to identify the \(\alpha\)-th percentile \(q_{\alpha}^{l,k}\) and the \(\beta\)-th percentile \(q_{\beta}^{l,k}\). Log-probabilities below or equal to \(q_{\alpha}^{l,k}\) are assigned a confidence value of zero, while those above or equal to \(q_{\beta}^{l,k}\) are assigned a confidence value of one. Log-probabilities within the \(q_{\alpha}^{l,k}-q_{\beta}^{l,k}\) interval are assigned a confidence value using the following smoothstep function
\[s(X_{i})=\frac{1}{2}\left(\tanh\left(\frac{X_{i,q}-\frac{1}{2}}{2\sqrt{X_{i,q}(1-X_{i,q})}}\right)+1\right), \tag{3}\]
where \(X_{i,q}=\frac{X_{i}-q_{\alpha}^{l,k}}{q_{\beta}^{l,k}-q_{\alpha}^{l,k}}\). We notice that the function in Eq. 3 smoothly and monotonically maps all values in the range \([q_{\alpha}^{l,k},q_{\beta}^{l,k}]\) to the interval \([0,1]\). To sum up, the latent confidence set \(\left\{X^{(l,k)*}\right\}\) is used to build the probability distribution in Eq. 2, which in turn is used to rank in a histogram the vectors of the latent confidence set according to their probability. The \(\alpha\)-th percentile \(q_{\alpha}^{l,k}\) and the \(\beta\)-th percentile \(q_{\beta}^{l,k}\) of the histogram are used in Eq. 3 to give the confidence of the network prediction.
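The per-layer confidence computation can be sketched as follows, using the corrected form of Eq. 3; the percentile convention of `numpy.percentile` (values in [0, 100]) is assumed.

```python
# Sketch of Eq. 3: mapping a log-probability to a confidence value via
# the [q_alpha, q_beta] percentiles of the confidence-set log-probs.
import numpy as np

def layer_confidence(logp, train_logps, alpha, beta):
    q_a, q_b = np.percentile(train_logps, [alpha, beta])
    if logp <= q_a:
        return 0.0
    if logp >= q_b:
        return 1.0
    x = (logp - q_a) / (q_b - q_a)      # X_{i,q} in (0, 1)
    return 0.5 * (np.tanh((x - 0.5) / (2 * np.sqrt(x * (1 - x)))) + 1)
```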
When training a deep neural network with \(L-1\) hidden layers on a dataset with \(K\) different class labels, the network generates \(K*(L-1)\) probability distributions and smoothstep functions, as in Eq. 2 and Eq. 3. To evaluate the confidence of the network's prediction for a given input vector \(X_{i}\) and predicted class label \(k_{i}\), we calculate the confidence value for each hidden layer, using the statistical model corresponding to the predicted class label. Specifically, we compute \(L-1\) confidence values \(s^{1}(X_{i}^{(1,k)}),\ldots,s^{L-1}(X_{i}^{(L-1,k)})\), where \(k=f(X_{i}^{(0)})\). Since each layer in the network has a large number of parameters and involves complex, nonlinear transformations in the latent space, we assume that the probability distributions of the latent representations of the same input vector in different layers are statistically independent, i.e., \(p(X_{i}^{l,k},X_{i}^{m,k})=p(X_{i}^{l,k})p(X_{i}^{m,k})\). To obtain a final confidence value, we multiply the confidence values of each hidden layer, resulting in the product
\[a(X_{i}^{(0,k)})=s^{1}(X_{i}^{(1,k)})*\ldots*s^{L-1}(X_{i}^{(L-1,k)}). \tag{4}\]
This final confidence value represents the overall confidence of the network's prediction for the given input vector \(X_{i}^{(0,k)}\). A summary of the algorithm is provided in Table 1.
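Combining the per-layer values into the final confidence of Eq. 4 and applying the acceptance value \(a\) is then straightforward; a minimal sketch:

```python
# Sketch of Eq. 4: the overall confidence is the product of the
# per-layer confidences; predictions below the acceptance value a
# are rejected.
import numpy as np

def overall_confidence(layer_confidences, a=0.5):
    conf = float(np.prod(layer_confidences))
    return conf, ("accept" if conf >= a else "reject")

print(overall_confidence([0.9, 0.8, 0.95]))   # -> (0.684, 'accept')
```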
## 3 Experiments
To demonstrate the validity of our method, we conducted numerical experiments and compared its performance with state-of-the-art methods using the MNIST dataset. Specifically, we trained feedforward neural networks with varying numbers of layers to classify the images. To showcase the ability of our method to detect out-of-distribution samples, we conducted experiments where we removed all samples from the training set associated with a specific label, and then trained the network on the remaining set. This allowed us to evaluate the network's capability to identify instances that it has never encountered during training, and to determine whether it can recognize when it does not know the answer.
We utilized two distinct network architectures for our experiments. The first architecture was shallower and comprised of two layers, with each layer having 1024 nodes. The second architecture was deeper, consisting of four layers with 256 nodes each. We employed early stopping during training, which terminated the process once an accuracy threshold of 0.96 was attained. In all training we used dropout
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Method** & **Network** & **Dropout** & **TP** & **TN** & **TN-OOD** \\ \hline \hline
Inference - \(q_{1}\) & \(2\cdot[1024]\) & 0.2 & \(0.929\pm 0.002\) & \(0.732\pm 0.05\) & \(0.761\pm 0.158\) \\ \hline
Inference - \(q_{1}\) & \(2\cdot[1024]\) & 0.5 & \(0.933\pm 0.003\) & \(0.733\pm 0.046\) & \(0.756\pm 0.161\) \\ \hline
Inference - \(q_{4}\) & \(4\cdot[256]\) & 0.1 & \(0.928\pm 0.003\) & \(0.723\pm 0.021\) & \(0.707\pm 0.144\) \\ \hline
Inference - \(q_{4}\) & \(4\cdot[256]\) & 0.25 & \(0.928\pm 0.003\) & \(0.745\pm 0.017\) & \(0.732\pm 0.138\) \\ \hline \hline
Inference - \(q_{2}\) & \(2\cdot[1024]\) & 0.2 & \(0.873\pm 0.002\) & \(0.855\pm 0.031\) & \(0.877\pm 0.1\) \\ \hline
Inference - \(q_{2}\) & \(2\cdot[1024]\) & 0.5 & \(0.877\pm 0.004\) & \(0.866\pm 0.04\) & \(0.879\pm 0.098\) \\ \hline
Inference - \(q_{5}\) & \(4\cdot[256]\) & 0.1 & \(0.87\pm 0.006\) & \(0.858\pm 0.021\) & \(0.849\pm 0.113\) \\ \hline
Inference - \(q_{5}\) & \(4\cdot[256]\) & 0.25 & \(0.869\pm 0.004\) & \(0.873\pm 0.017\) & \(0.869\pm 0.103\) \\ \hline \hline
Inference - \(q_{3}\) & \(2\cdot[1024]\) & 0.2 & \(0.751\pm 0.005\) & \(0.957\pm 0.014\) & \(0.97\pm 0.035\) \\ \hline
Inference - \(q_{3}\) & \(2\cdot[1024]\) & 0.5 & \(0.753\pm 0.006\) & \(0.968\pm 0.014\) & \(0.973\pm 0.031\) \\ \hline
Inference - \(q_{6}\) & \(4\cdot[256]\) & 0.1 & \(0.748\pm 0.081\) & \(0.957\pm 0.013\) & \(0.951\pm 0.058\) \\ \hline
Inference - \(q_{6}\) & \(4\cdot[256]\) & 0.25 & \(0.747\pm 0.007\) & \(0.967\pm 0.012\) & \(0.955\pm 0.052\) \\ \hline \hline
MC - dropout & \(2\cdot[1024]\) & 0.2 & \(0.98\pm 0.002\) & \(0.396\pm 0.038\) & \(0.267\pm 0.079\) \\ \hline
MC - dropout & \(2\cdot[1024]\) & 0.5 & \(0.902\pm 0.013\) & \(0.819\pm 0.038\) & \(0.674\pm 0.125\) \\ \hline
MC - dropout & \(4\cdot[256]\) & 0.1 & \(0.96\pm 0.004\) & \(0.564\pm 0.034\) & \(0.409\pm 0.111\) \\ \hline
MC - dropout & \(4\cdot[256]\) & 0.25 & \(0.908\pm 0.006\) & \(0.742\pm 0.024\) & \(0.591\pm 0.141\) \\ \hline \hline
Ensemble & \(2\cdot[1024]\) & 0.2 & \(0.952\pm 0.009\) & \(0.684\pm 0.042\) & \(0.517\pm 0.119\) \\ \hline
Ensemble & \(2\cdot[1024]\) & 0.5 & \(0.969\pm 0.004\) & \(0.594\pm 0.022\) & \(0.431\pm 0.119\) \\ \hline
Ensemble & \(4\cdot[256]\) & 0.1 & \(0.956\pm 0.002\) & \(0.67\pm 0.018\) & \(0.532\pm 0.134\) \\ \hline
Ensemble & \(4\cdot[256]\) & 0.25 & \(0.964\pm 0.002\) & \(0.621\pm 0.03\) & \(0.484\pm 0.124\) \\ \hline \hline
\end{table}
Table 2: Rate of true positives (TP), true negatives (TN) and true negatives on out-of-distribution samples (TN-OOD) detected using our _inference_ method, the _MC-dropout_ method, and the _ensemble_ method. In the _inference_ method the following percentiles \(q=(\alpha,\beta)\) were used: \(q_{1}=(0.01,1)\), \(q_{2}=(0.1,50)\), \(q_{3}=(3,90)\), \(q_{4}=(2,10)\), \(q_{5}=(3,50)\), \(q_{6}=(7,90)\). Two different network architectures were employed: one composed of 2 hidden layers with 1024 nodes each - \(2\cdot[1024]\), one composed of 4 hidden layers with 256 nodes each - \(4\cdot[256]\). Dropout was implemented in every layer of all network architectures during training, with a variable dropout rate.
with different values. To evaluate the methods' performance, we used a hold-out test set that excluded out-of-distribution samples and thus sampled from the same distribution as the training set, as well as a set consisting only of out-of-distribution samples. We conducted the experiment iteratively by removing each label one-by-one and evaluating the confidence of the network's predictions using our inference method, MC dropout, and _ensemble_ methods. The MC dropout method involves obtaining an ensemble of predictions by leaving dropout activated during prediction. In contrast, the _ensemble_ method involves training models with different initial conditions to obtain an ensemble of predictions. Each experiment took approximately 25 minutes for the _inference_ and _MC-dropout_ methods, and approximately 4 hours for the _ensemble_ method. All simulations were performed on a single CPU Intel(R) Core(TM) i7. To make a prediction using an _ensemble_, we take the most frequently predicted value as the network prediction. The uncertainty associated with this prediction is determined by the fraction of predictions that match the most frequently predicted value, relative to the total number of models used in the ensemble. We used 100 passes for the _MC-dropout_ experiments (with dropout ratio equal to the one used in training) and 10 models for the _ensemble_ experiments. Uncertainty in our method is calculated using the algorithm described in Table 1.
The objective of our experiments is to compare the aforementioned uncertainty quantification methods and assess the algorithms' ability to differentiate between well-classified and misclassified predictions. To achieve this, we calculate uncertainties on separate holdout test sets and reject predictions whose uncertainty falls below a certain threshold, denoted as \(a\). We then assess the method's performance by counting the number of well-classified vectors that are assigned a value above the threshold, which represents true positives (TP). Additionally, we count the number of misclassified vectors below the threshold (TN) and the number of vectors from the out-of-distribution test set below the threshold (TN-OOD). These three values serve as indicators to evaluate the quality of the method, with higher values indicating better performance. Of particular importance in our analysis is the TN-OOD value, as it helps us understand the algorithm's ability to identify samples that lie outside the distribution of the training set. For the _MC-dropout_ and _ensemble_ methods, we employ a high threshold of \(a=0.99\) due to the tendency of both these methods to exhibit overconfidence[1]. Conversely, our inference method employs the conceptually intuitive threshold of \(a=0.5\). Our method utilizes two parameters: the percentiles \(\alpha\) and \(\beta\). These parameters determine the distribution of confidence values within
Figure 1: Histograms of the confidence rate assessed on a hold out test set with the _inference_, the ’MC-dropout’ and ’ensemble’ methods. The histogram is relative to the experiment conducted with a network composed of 2 hidden layers with 1024 nodes each and dropout rate fixed to 0.5. The _inference_ method employs percentile values \(q_{2}=(0.1,50)\). We can see in the histograms that most _misclassified_ and _out-of-distribution_ samples are assigned a confidence close to zero with the _inference_ method, while the _MC-dropout_ and the _ensemble_ methods assign much higher values to the same samples. Note that, in this figure, we show the histograms relative to the experiment where the _MC-dropout_ method features the best values in detecting true negatives.
the sets of vectors. Specifically, the percentage of vectors assigned a confidence value of 1 is equal to \(100-\beta\), while the percentage of vectors assigned a value of 0 is equal to \(\alpha\). The parameters \(q=(\alpha,\beta)\) used in our experiments are: \(q_{1}=(0.01,1),\ q_{2}=(0.1,50),\ q_{3}=(3,90)\) in the network composed of 2 hidden layers with 1024 nodes each, \(q_{4}=(2,10),\ q_{5}=(3,50),\ q_{6}=(7,90)\) in the network composed of 4 hidden layers with 256 nodes each.
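The evaluation protocol described here reduces to simple counting; a sketch is given below, where `conf`, `preds`, and `labels` are hypothetical arrays produced by the steps above.

```python
# Sketch of the evaluation protocol: count true positives (correct
# predictions accepted above threshold a) and true negatives (wrong
# or out-of-distribution predictions rejected below a).
import numpy as np

def tp_tn_rates(conf, preds, labels, a=0.5):
    correct = preds == labels
    keep = conf >= a
    tp = np.mean(keep[correct])        # well-classified and accepted
    tn = np.mean(~keep[~correct])      # misclassified and rejected
    return tp, tn

def tn_ood_rate(conf_ood, a=0.5):
    return np.mean(conf_ood < a)       # OOD samples rejected
```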
## 4 Discussion
The results of our experiments are summarized in Table 2. The table demonstrates that our _inference_ method successfully detects both high true positives and high true negatives. By tuning the percentiles \(q=(\alpha,\beta)\), we can set the balance between true positive and true negative detection. At the top of the table, the values \(q_{1}\) and \(q_{4}\) yield high true positives; the values \(q_{2}\) and \(q_{5}\) provide a balance between true positives and true negatives; and the values \(q_{3}\) and \(q_{6}\) below allow for high true negatives. Our method consistently produces reliable results, regardless of the dropout rate and the number of hidden layers, by tuning the percentiles for the different network architectures. As previously discussed, both the _MC-dropout_ and _ensemble_ methods are overconfident in their predictions: they exhibit high true positive detection and low true negative detection. As expected, _MC-dropout_ shows higher true negatives and lower true positives as the dropout rate increases, while the _ensemble_ method follows the opposite trend.
Our method demonstrates robust performance in detecting true negatives, even in the presence of out-of-distribution samples, unlike the _MC-dropout_ and _ensemble_ methods, as shown in Fig. 1. In our experiments, the _MC-dropout_ method achieves a maximum out-of-distribution sample detection rate of only 68%. Similarly, the _ensemble_ method reaches at most 53% true negative detection for out-of-distribution samples. In contrast, our _inference_ method consistently maintains a true negative out-of-distribution detection rate above 70%, even when the percentile values are adjusted to prioritize true positive detection over true negatives. Notably, our _inference_ method achieves a true negative detection rate of over 95% while still maintaining a reasonable level of approximately 75% true positive detection. Our _inference_ method is slightly less reliable in true positive detection than the other methods, although the difference is minimal: it can still achieve over 92% true positive detection. Increasing the true positive detection rate further is possible by tuning the percentile values, but at the cost of decreasing the true negative detection rate. Since our focus is on the capability of the method to detect true negatives, we do not show higher true positive values that would compromise this capability. Additionally, it is possible to adjust the acceptance threshold, here fixed at \(a=0.5\), to further shift the balance between true negatives and true positives. In these experiments, however, tuning the percentile values proves sufficient to obtain satisfactory results.
We would like to emphasize that our _inference_ method not only yields superior overall results but is also easier to apply than the other methods. For instance, the _MC-dropout_ method requires the network to use dropout, which may adversely affect network performance in many applications. Moreover, when dropout can be employed, finding the optimal dropout rate requires training multiple models until a suitable rate is found. The _ensemble_ method, while conceptually simpler to apply, demands training multiple models to assess uncertainty; this computational expense renders it infeasible for many applications. In contrast, our _inference_ method only requires tuning the percentile values, a task that can be performed after training, so only one model needs to be trained. Furthermore, the computational cost remains low, as it scales quadratically with the number of nodes in the layers, which typically does not exceed thousands of units. In practice, the time required to construct the statistical model employed in our method is negligible compared to the time needed to train the neural network. Tuning the percentile parameters can be done after training on a test set. Since our _inference_ method demonstrates consistent results on misclassified test vectors sampled from the same distribution as the training set, as well as on out-of-distribution samples (see Table 2), parameter tuning on a test sample can effectively generalize to out-of-distribution scenarios.
## 5 Limitations and outlook
Although the _inference_ method demonstrates promising results, it is important to acknowledge its limitations. Firstly, the method assumes that the data are well approximated by a normal distribution
in the latent space. Although this assumption holds true in our experiments, it is possible that in more complex scenarios, a different prior or inference method might be more appropriate. Additionally, the assumption of statistical independence among the probability distributions across different layers in the latent space, as shown in Eq. (4), is quite strong. When dealing with numerous hidden layers, this assumption can potentially impact the quality of the results. Nevertheless, it is worth mentioning that this method can still be utilized selectively on certain layers, where the assumption of independence among probabilities holds true.
We employed a technique to measure uncertainty in classification, which involved creating a distinct statistical model for each potential label. This approach played a significant role in our analysis. It would be advantageous to extend this method to inference problems that involve predicting continuous values. In such scenarios, we cannot rely on the differentiation of data based on their labels, so an alternative strategy must be employed. One potential strategy is to utilize a statistical model based on the training set vectors, where the model's prediction is similar to the prediction we wish to assess the uncertainty for. However, the specific measure of similarity needs to be defined carefully to ensure accurate results.
## 6 Conclusions
This paper presents a novel approach for quantifying uncertainty in classification tasks using feed-forward neural networks. We conducted a comparative analysis of our method against state-of-the-art techniques on a synthetic benchmark dataset. The results demonstrate that our method outperforms the examined state-of-the-art approaches, particularly excelling in the detection of true negatives. This advantage extends to out-of-distribution samples as well, where the other methods fall short.
Although it is difficult to assess the impact of a work in a rapidly evolving field like machine learning, we believe that this work can have a positive impact in the field, especially in contexts where the identification of out-of-distribution samples is crucial, such as autonomous driving and active learning.
## Acknowledgements
LS acknowledges funding from the Leibniz Association, project "Memristors Materials by Design, MeMabyDe". LMG acknowledges funding from the NFDI project "FAIRmat" (FAIR Data Infrastructure for Condensed-Matter Physics and the Chemical Physics of Solids, German Research Foundation, project N\({}^{\text{o}}\) 460197019).
|
2306.05909 | Galaxy Light profile neural Networks (GaLNets). II. Bulge-Disc
decomposition in optical space-based observations | Bulge-disk (B-D) decomposition is an effective diagnostic to characterize the
galaxy morphology and understand its evolution across time. So far,
high-quality data have allowed detailed B-D decomposition to redshift below
0.5, with limited excursions over small volumes at higher redshifts.
Next-generation large sky space surveys in optical, e.g. from the China Space
Station Telescope (CSST), and near-infrared, e.g. from the space EUCLID
mission, will produce a gigantic leap in these studies as they will provide
deep, high-quality photometric images over more than 15000 deg2 of the sky,
including billions of galaxies. Here, we extend the use of the Galaxy Light
profile neural Network (GaLNet) to predict 2-S\'ersic model parameters,
specifically from CSST data. We simulate point-spread function (PSF) convolved
galaxies, with realistic B-D parameter distributions, on CSST mock observations
to train the new GaLNet and predict the structural parameters (e.g. magnitude,
effective radius, Sersic index, axis ratio, etc.) of both bulge and disk
components. We find that the GaLNet can achieve very good accuracy for most of
the B-D parameters down to an $r$-band magnitude of 23.5 and redshift $\sim$1.
The best accuracy is obtained for magnitudes, implying accurate bulge-to-total
(B/T) estimates. To further forecast the CSST performances, we also discuss the
results of the 1-S\'ersic GaLNet and show that CSST half-depth data will allow
us to derive accurate 1-component models up to $r\sim$24 and redshift
z$\sim$1.7. | Chen Qiu, Nicola R. Napolitano, Rui Li, Yuedong Fang, Crescenzo Tortora, Shiyin Shen, Luis C. Ho, Weipeng Lin, Leyao Wei, Ran Li, Zuhui Fan, Yang Wang, Guoliang Li, Hu Zhan, Dezi Liu | 2023-06-09T14:04:10Z | http://arxiv.org/abs/2306.05909v1 | Galaxy Light profile neural Networks (GaLNets). II. Bulge-Disc decomposition in optical space-based observations
###### Abstract
Bulge-disk (B-D) decomposition is an effective diagnostic to characterize the galaxy morphology and understand its evolution across time. So far, high-quality data have allowed detailed B-D decomposition to redshift below 0.5, with limited excursions over small volumes at higher redshifts. Next-generation large sky space surveys in optical, e.g. from the China Space Station Telescope (CSST), and near-infrared, e.g. from the space EUCLID mission, will produce a gigantic leap in these studies as they will provide deep, high-quality photometric images over more than 15000 deg2 of the sky, including billions of galaxies. Here, we extend the use of the Galaxy Light profile neural Network (GaLNet) to predict 2-Sersic model parameters, specifically from CSST data. We simulate point-spread function (PSF) convolved galaxies, with realistic B-D parameter distributions, on CSST mock observations to train the new GaLNet and predict the structural parameters (e.g. magnitude, effective radius, Sersic index, axis ratio, etc.) of both bulge and disk components. We find that the GaLNet can achieve very good accuracy for most of the B-D parameters down to an \(r\)-band magnitude of 23.5 and redshift \(\sim\)1. The best accuracy is obtained for magnitudes, implying accurate bulge-to-total (B/T) estimates. To further forecast the CSST performances, we also discuss the results of the 1-Sersic GaLNet and show that CSST half-depth data will allow us to derive accurate 1-component models up to \(r\sim\)24 and redshift \(\sim\)1.7.
galaxies: fundamental parameters, structure, evolution - methods: data analysis - surveys
## 1 Introduction
Despite their variegate morphology, most of the physical processes behind galaxy evolution can be understood by the detailed study of their two main stellar components: bulges and disks. According to the most credited galaxy formation scenario, disks are formed
by cooled gas from dark matter halos ((White and Rees, 1978); (Fall and Efstathiou, 1980)) and bulges are generally generated from the merging of two disks ((Lynden-Bell, 1967); (Toomre and Toomre, 1972); (Toomre, 1977); (Barnes and Hernquist, 1991); (Barnes, 1988); (Naab and Trujillo, 2006)), or unstable gas-rich disks ((Kormendy and Kennicutt, 2004), (Bournaud, 2016)). Accurately characterizing bulges and disks in galaxies is a difficult but necessary step to reveal their formation and evolutionary history (e.g., (Conselice et al., 2005); (Lang et al., 2014); (Gao and Ho, 2017)).
In particular, parametric fitting, describing the surface brightness profile of the different galaxy components with their structural parameters (e.g. the magnitude, the effective radius, etc.), has long been proven to be a powerful tool in galaxy analysis ((de Vaucouleurs, 1959); (Sersic, 1968); (Kormendy, 1977)). Traditionally, multi-component galaxies are represented by an exponential disk ((Freeman, 1970)) with a de Vaucouleurs (1948) bulge (see e.g. (Andredakis et al., 1995)), although it has been found that Sersic (1968) profiles with \(n\)-index, i.e. the central slope in the projected light, larger than the de Vaucouleurs's \(n=4\) can better reproduce the bulge components combined to exponential disks ((Gao et al., 2019)).
A more general approach adopts a Sersic profile to model both components. Here, bulges and disks can be distinguished by the Sersic index, with generally \(n\lesssim 2\) for disks and \(n\gtrsim 2\) for bulges/spheroids ((Shen et al., 2003), (Graham and Driver, 2005), (Fisher and Drory, 2008)). In this case, one can reproduce the surface brightness distribution of galaxies with a more realistic combination of bulge-disc components ((Mendez-Abreu et al., 2004)). Ideally, the 2-Sersic model works well for bright/large galaxies, but it is harder to apply to faint/small systems due to their low signal-to-noise ratio (SNR), even in the relatively nearby universe ((Allen et al., 2006), (Casura et al., 2022)). For high redshift galaxies, this becomes even harder due to the limited number of pixels with sufficiently high SNR to include in the modeling, hence requiring the use of space observations to best perform this kind of analysis ((Bruce et al., 2014)).
Cosmic epochs at z\(\sim\)1 and beyond are crucial for galaxy morphology evolution, as the star-formation rate density reaches its peak in the universe's history ((Bruce et al., 2014); (Madau and Dickinson, 2014)) and the categories of bulges, disks and spheroids experience epochal transformations ((Costantin et al., 2021)). For this reason, we are motivated to push surface photometry studies of galaxies to improve our understanding of their evolution processes (see e.g. (Conselice et al., 2005); (Ferreira et al., 2022)). This is particularly important to fully test the predictions from cosmological hydro-dynamical simulations, which are providing unprecedented details on the internal structure of individual galaxies at different epochs (e.g., (Snyder et al., 2015); (Bottrell et al., 2017, 2017, 2017); (Dickinson et al., 2018); (Rodriguez-Gomez et al., 2019); (Du et al., 2020)). So far, detailed modeling of the multi-component structure of galaxies has been limited to low redshift ((Simard et al., 2011); (Huang et al., 2013); (Gao et al., 2020)), with only a few space-based programs dedicated to high-redshift samples, over small areas and with limited statistics ((Peng et al., 2010), (Bruce et al., 2014)). However, with the upcoming large sky surveys from space (e.g. Chinese Space Station Telescope - CSST, (Gong et al., 2019); Euclid mission, (Laureijs et al., 2011); Roman Space Telescope - Roman, (Mennesson et al., 2020)), we have the chance to move to high quality data over large volumes up to high redshifts, while deeper ground-based programs (e.g. from the Vera Rubin LSST, (Ivezic et al., 2019)) will also provide exquisite data for extremely faint and/or diffuse systems in the local universe. This unprecedented data collection will improve our understanding of galaxy morphological transformation up to z\(>\)1, in samples with over a billion galaxies.
Unfortunately, galaxy surface photometry represents a bottleneck of this learning process because of the time demands of traditional galaxy fitting methods. Tools based on 2D galaxy surface brightness distribution measurements, like GALFIT ((Peng et al., 2002)), Gim2d ((Simard, 1998)), 2DPHOT ((La Barbera et al., 2008)), and PROFIT ((Robotham et al., 2017)), have been extensively used to measure structural parameters in galaxies, either as single- or multi-component systems ((Meert et al., 2013), (Gao et al., 2020), (Xu et al., 2023)). However, these traditional codes are either too slow or need too much manual intervention to be suitable for billion-galaxy samples. Thus, even though most of these codes can reach a fairly high accuracy "if" initial conditions are correctly given ((Yoon et al., 2011)), more automatic methods with similar or even higher accuracy are needed.
Among all options, Machine Learning (ML) tools are becoming a game changer in the approach to big dataset analysis and interpretation. In the last decade, ML tools have been regularly applied in a variety of research areas, including astronomy. They can easily perform tasks like classification or regression with unprecedented speed and accuracy, and they have been used in the analyses of gravitational waves ((Carrillo et al., 2015); (Biswas et al., 2013)), the photometric classification of supernovae ((Lochner et al., 2016)), the search for strong gravitational lenses ((Petrillo et al., 2017, 2019, 2019); (Jacobs et al., 2019); (Cañameras et al., 2020), (Li et al., 2020, 2021)), star/galaxy classification ((Baqui et al., 2021)), unsupervised feature-learning for galaxy SEDs ((Frontera-Pons et al., 2017)) and galaxy morphology classification ((Gauci et al., 2010); (Ball et al., 2004); (Banerji et al., 2010)). Convolutional Neural Networks (CNNs) are a particular class of ML tools designed to derive features from arrays of data by convolution and, as such, are optimal for image processing. For instance, CNNs have been used in galaxy classification ((Dieleman et al., 2015)) and surface brightness distribution analysis ((Tuccillo et al., 2018); (Umayahara et al., 2020); (Li et al., 2022)).
In this context, we have started developing the GAlaxy Light profile convolutional neural Networks (GaLNets, (Li et al., 2022), Li+22 hereafter) to perform, for the first time, single Sersic profile fitting of galaxies from ground-based data with an accurate treatment of the point-spread function (PSF). Compared to traditional tools, the GaLNets can reach similar accuracies with a computational speed more than three orders of magnitude faster. As a first application of GaLNets to ground-based galaxy surface photometry, we have used a sample of galaxies from the Kilo Degree Survey (KiDS: (de Jong et al., 2015, 2017); (Kuijken et al., 2019)) and shown that CNNs can effectively and accurately perform single Sersic analyses of very large samples of galaxy light profiles from the ground, similar to the ones that will be collected by VR/LSST. The upcoming all-sky space observations motivate us to extend the use of the GaLNets to perform a detailed 2-component analysis of galaxies over a wide redshift range. So far, the only similar attempt we are aware of, by Grover et al. (2021), is limited to the derivation of the bulge-to-total (B/T) luminosity of nearby galaxies.
In this paper, we adopt a scheme similar to that of the first GaLNets (Li+22) to implement a PSF-convolved, 2-Sersic profile fitting of galaxies in space observations. We use the case of the CSST, which is explicitly optimized for the optical wavelengths, and we concentrate on the single-epoch \(r\)-band data sample as the reference dataset for this first test (see more details in §2.3.1). As in Li+22, we simulate 2-dimensional mock galaxies, but here we use two-component (bulge and disk) systems with parameters from the CosmoDC2 catalog ((Korytov et al., 2019))1, except for the Sersic index, which in CosmoDC2 is fixed to 1 and 4 for the disk and bulge, respectively. Instead, we took a more realistic distribution of the Sersic indexes of the two components from Kennedy et al. (2016). We assumed a typical _PSF_ of the CSST to convolve the 2D bulge-disk Sersic models, which we add to randomly selected "background cutouts" from CSST mock observations, finally obtaining realistic galaxy mock observations. Finally, we have also trained the GaLNets on 1-component galaxies to evaluate the applicability of these networks to a more general variety of real observations with CSST.
Footnote 1: This is a large synthetic galaxy catalog made for VR/LSST simulated datasets, which covers 440 deg\({}^{2}\) of sky area to a redshift \(z=3\).
This work is organized as follows. In Sect.2, we describe how to build the training and testing sample and describe the CNN architectures. In Sect.3, we test our CNNs on simulated data. In Sect.4, we discuss the results of the GaLNets and in Sect.5 we draw our main conclusions.
## 2 Methods and Data
CNNs have been inspired by research on the visual cortex of biological brains ((Hubel and Wiesel, 1962); (Fukushima, 1980); (LeCun et al., 2015)). In contrast to conventional Neural Networks, which consist of fully-connected layers and tend to disregard the underlying data structure, CNNs make use of architectures that preserve the pertinent information encoded within the data's inherent structure. In particular, they use so-called convolutional layers, which have the ability to carry over "feature" information (e.g. a pattern or a color in an image) while reducing the size of the elements containing relevant data. This allows CNNs to save considerable computational power in applications, like image processing, that deal with large arrays of data. Hence, CNNs are very efficient at making predictions of certain target features (i.e. a pattern, a property, one or more parameters) for specific objects in high-resolution astronomical images and spectra, and can be used either in classification (e.g. galaxy morphology, (Walmsley et al., 2020); strong gravitational lensing searches, (Li et al., 2019, 2020, 2021)) or regression problems (e.g. for spectroscopic feature recognition and redshifts, (Hoyle, 2016), (Zhong et al., 2022); galaxy fitting, (Tuccillo et al., 2018), (Li et al., 2022)). To make accurate predictions, though, it is indispensable that 1) the data used for the CNN training (training set) realistically reproduce the ones used to make predictions (predictive set) and 2) the training set covers the full target parameter space.
Ideally, these conditions can be satisfied by collecting a large sample of real systems labeled by the "true" features one wants to predict on a new dataset with unknown targets (i.e. the astrophysical parameters). This is practically out of reach because 1) we cannot know the true parameters of astrophysical objects, but only determine them via other parametric or non-parametric tools, which are naturally prone to biases; 2) even if we assume that we can estimate bias-free "targets" with standard analysis tools, very often this process is time-consuming and the final training set would be too small and noisy to prevent systematics. The use of simulated data is a very common solution to obviate these shortcomings, provided that the process of producing mock data takes all the observational and physical parameters correctly into account.
### A GaLNet for Bulge-Disk decomposition: GaLNet-BD
As introduced in Sect.1, we want to apply the GaLNets to a classic regression task where the inputs are images of bulge-disk galaxies and the corresponding PSFs, and the outputs are the parameters of the best 2-Sersic profiles describing the 2D galaxy light distribution (see below). Since this new GaLNet is specialized for B-D decomposition, we have dubbed it GaLNet-BD. The Sersic (1968) profile is defined as:
\[I(x,y)=I_{\rm e}{\rm exp}\left\{-b_{n}\left[\left(\frac{\sqrt{q(x-x_{0})^{2}+( y-y_{0})^{2}/q}}{R_{\rm e}}\right)^{\frac{1}{n}}-1\right]\right\} \tag{1}\]
where \(R_{\rm e}\) is the effective radius, \(I_{e}\) is the surface brightness at the effective radius, \(q\) the axis ratio, \(n\) is the Sersic index, while (\(x_{0}\), \(y_{0}\)) are the coordinates of the center. We also define the position angle, \(PA\), which represents the angle between the minor galaxy axis and the North to East direction on the sky. In Eq. 1, for \(b_{n}\) we use the expression provided by Ciotti & Bertin (1999):
\[b_{n}\approx\left\{\begin{array}{ll}2n-1/3+4/(405n),&n\geq 0.36\\ 0.01945-0.8902n+10.95n^{2}&n{<}0.36.\end{array}\right. \tag{2}\]
Eq. 1 is used to describe both the bulge and disk components, and lets us define the total surface brightness of the B-D system as
\[I_{\rm tot}(x,y)=I_{\rm Bulge}(x,y)+I_{\rm Disk}(x,y). \tag{3}\]
The total (apparent) magnitude, \(mag\), is defined by
\[mag=-2.5\log(F_{tot})+zpt \tag{4}\]
where \(zpt\) is the zero point of the adopted filter and \(F_{tot}\) is the total flux of the galaxy, defined as
\[F_{tot}=2\pi\sum_{i=1}^{2}R_{e,i}^{2}I_{e,i}e^{b_{n_{i}}}n_{i}b_{n_{i}}^{-2n_{ i}}\Gamma(2n_{i})q_{i}, \tag{5}\]
where \(i=1,2\) is for bulge and disk and \(\Gamma\) indicates the standard \(\Gamma-\)function.
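For concreteness, Eqs. 1-5 translate into the short Python sketch below; the coordinate rotation by the position angle is our assumption for how \(PA\) enters Eq. 1, which is written in the galaxy frame.

```python
import numpy as np
from scipy.special import gamma

def b_n(n):
    """Ciotti & Bertin (1999) approximation for b_n (Eq. 2)."""
    return np.where(n >= 0.36,
                    2*n - 1/3 + 4/(405*n),
                    0.01945 - 0.8902*n + 10.95*n**2)

def sersic_2d(x, y, I_e, R_e, n, q, pa_deg, x0=0.0, y0=0.0):
    """2D Sersic surface brightness (Eq. 1); the PA rotation brings the
    coordinates into the galaxy frame before applying the profile."""
    t = np.deg2rad(pa_deg)
    dx, dy = x - x0, y - y0
    xr = dx*np.cos(t) + dy*np.sin(t)
    yr = -dx*np.sin(t) + dy*np.cos(t)
    r = np.sqrt(q*xr**2 + yr**2/q)
    return I_e * np.exp(-b_n(n) * ((r/R_e)**(1.0/n) - 1.0))

def total_flux(R_e, I_e, n, q):
    """Analytic total flux of one Sersic component (one term of Eq. 5)."""
    bn = b_n(n)
    return 2*np.pi * R_e**2 * I_e * np.exp(bn) * n * bn**(-2*n) * gamma(2*n) * q

def total_mag(components, zpt):
    """Total apparent magnitude of the B+D system (Eq. 4);
    `components` is a list of (R_e, I_e, n, q) tuples for bulge and disk."""
    F = sum(total_flux(*c) for c in components)
    return -2.5*np.log10(F) + zpt
```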
As the Sersic index is known to be a proxy of the galaxy morphology (e.g. (Trujillo et al., 2007b)), we assume \(n<2\) for the thin disk component, while for the bulge we generally assume \(n>2\), but also include \(n<2\) (pseudo-bulges, e.g. (Fisher & Drory, 2008), (Kennedy et al., 2016)). In Fig. 1, we show the dependence of the 1D profiles on the \(n\)-index. In particular, we see that the Sersic profiles with large \(n\)-index (\(n>2\), i.e. typical of the dominant "bulges") are clearly distinct from the ones with \(n<2\) in the central regions, although they become blended for larger and larger \(n\) values. In the outer regions (i.e. \(R>R_{\rm e}\)) the profiles with large \(n\)-index look almost indistinguishable from each other, while it is easier to separate the models with lower \(n\)-index (\(n<2\)). This illustrates the difficulty one needs to overcome in B-D decomposition to accurately predict the overall parameters. As we will see, this is important for our deep learning methods, as they "look" at the central regions to learn about bulges, and at the outer regions to learn about disks.
The total number of free parameters of the fitting process for a B-D decomposition is 14, i.e. the 7 parameters
Figure 1: 1D normalized Sérsic surface brightness profiles in linear (top) and “log” scale (bottom) as a function of the radius in units of \(R_{\rm e}\), for different \(n\)-indices (\(n=0.5,1,2,4,5,7\), from blue to purple).
\(x_{\rm cen}\), \(y_{\rm cen}\), \(mag\), \(R_{\rm e}\), \(q\), \(n\), \(PA\) for each of the two components. As we keep the center of the two components fixed to the central pixel of the image cutout, we are left with _10 parameters_ to be constrained by the GaLNet-BD.
In order to reproduce real observations, we need to account for the local background, \(BG\), of typical instrumental observations, which depends on a series of observational factors, including the cosmic light background and the detector noise, and for the _PSF_, which depends on the optical design and observational conditions. All these factors are combined to produce the mock galaxies, using the equation:
\[I_{\rm Obs}(x,y)=I_{\rm tot}(x,y)\otimes PSF+I_{BG}(x,y) \tag{6}\]
where \(I_{\rm tot}(x,y)\) is the total galaxy surface brightness distribution from Eq. 3, \(I_{BG}(x,y)\) is the value of the local background, and \(\otimes\) denotes convolution. More details on this procedure will be given in §2.3.2.
Thanks to the high quality expected from the space images of CSST, degeneracies among the Sersic parameters in Eqs. 1 and 3 are expected to be less severe than in ground-based observations (see, e.g. (Trujillo et al., 2001a)). In particular, space-based instruments tend to have a relatively stable _PSF_, determined only by instrumental effects such as diffraction due to obscuration, optical aberrations, and polishing errors, as well as thermal variations in the telescope structure (see e.g. the breathing effect of the Hubble Space Telescope). It is therefore theoretically possible to model the _PSF_ based on the instrument optics ((Krist et al., 2011)), but there are other observational parameters that are difficult to account for. To overcome this, we assume the _PSF_ follows a Gaussian profile with FWHM centered at \(0.15^{\prime\prime}\) and a scatter \(\sigma_{\rm FWHM}=0.015^{\prime\prime}\), which conservatively accounts for the variation of the _PSF_ across the FOV of the CSST camera (see §3 for more details).
### The CNN Architecture
CNNs are characterised by a weight-sharing network structure, which reduces the complexity of the network model and the number of weights. This is very advantageous in two-dimensional image processing, avoiding the complex feature extraction and data reconstruction processes of traditional recognition algorithms. GaLNets are built on "supervised" learning: the input data are simulated galaxy images, labeled with the corresponding Sersic parameters (labels).
In this work, we apply an adapted VGGNet ((Simonyan & Zisserman, 2014)). Its structure is a canonical CNN, consisting of three parts: the input layer, the hidden layer, and the output layer (see Fig. 2). The core of the CNN is the hidden layer, which includes 1) the convolutional layer, made of multiple convolution kernels composed of weights and biases, which extract features from the input data, 2) the pooling layer, and 3) the fully connected layer. The "feature maps" derived by the convolutional layer are passed to the pooling layer, which performs some information
Figure 2: The structure of the GaLNet-BD used in this work. The networks are fed by both galaxy images and the corresponding “local” simulated PSFs, and the outputs are the 10 parameters of the two Sérsic profiles. 6 layers are used for the galaxy branch and 4 layers for the PSF branch. After the Concatenation layer, we add 3 fully connected layers to extract further features of the combined galaxies and PSFs.
filtering to reduce the features' sizes. The pooling layer contains a preset pooling function used to replace the feature map in a given region with a single value (downsampling). Finally, the fully connected layer, located at the end of the hidden layer, performs a nonlinear combination of the extracted features. In particular, it combines the low-level learned features into high-level features and passes them to the output layer. In Fig. 2, we show how, in the GaLNet-BD, we apply this structure to two different channels that extract the features from two inputs in parallel: the galaxy image and the _PSF_ image. In order to combine the information selected in the two channels, the GaLNet is equipped with a concatenation layer, right before the fully connected layers, where the parameters are predicted on the basis of the weights imposed by the _PSF_ branch.
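A minimal Keras sketch of this two-branch architecture is given below; the filter counts and dense-layer sizes are our assumptions for illustration, while the input sizes (135×135 galaxy cutout, 51×51 PSF stamp), the 6+4 convolutional layers, the concatenation, the 3 fully connected layers and the 10 outputs follow the description above.

```python
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """VGG-style block: 3x3 convolution + ReLU, then 2x2 max pooling."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D(2)(x)

# Galaxy branch: 6 convolutional layers on the 135x135 cutout.
gal_in = layers.Input(shape=(135, 135, 1))
x = gal_in
for f in (32, 32, 64, 64, 128, 128):
    x = conv_block(x, f)
x = layers.Flatten()(x)

# PSF branch: 4 convolutional layers on the 51x51 PSF image.
psf_in = layers.Input(shape=(51, 51, 1))
y = psf_in
for f in (32, 64, 64, 128):
    y = conv_block(y, f)
y = layers.Flatten()(y)

# Concatenate the two feature vectors, then 3 fully connected layers
# feeding the 10 output parameters (mag, Re, n, q, PA for B and D).
z = layers.Concatenate()([x, y])
for units in (512, 256, 128):
    z = layers.Dense(units, activation="relu")(z)
out = layers.Dense(10, activation="linear")(z)

model = Model([gal_in, psf_in], out)
```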
### Data: Mock CSST galaxy cutouts
According to the architecture shown in the previous paragraph, our training set consists of 1) 2D simulated bulge-disk galaxy images and 2) the corresponding 2D PSFs. To build this data set, we add 2-dimensional, _PSF_-convolved, simulated bulge-disk galaxies to randomly selected \(r\)-band cutouts from CSST simulations, representing the galaxy "background". In this section, we give a short description of the CSST simulated observations we have used, and the process followed to produce the mock galaxy cutouts.
#### 2.3.1 CSST simulations
In this work, we will use the China Space Station Telescope (CSST) as a prototype facility for large sky space observations in optical bands. The CSST is a 2-m telescope capable of observations in 7 bands from the UV to the optical/NIR (i.e. NUV\(ugrizy\)) using a wide field camera covering an effective area of \(\sim 1.1\) deg\({}^{2}\) with a pixel scale of 0.074 arcsec/pxl. It will carry out a wide survey over 17 500 deg\({}^{2}\) and a deep survey over 400 deg\({}^{2}\), with \(r\sim 26\) and \(r\sim 27.2\) limiting magnitudes, respectively. The superb image quality (FWHM\(<0.15^{\prime\prime}\)) and photometric depth (\(r<26\)) will be ideal for performing dark matter tomography and constraining the dark energy equation of state ((Liu et al., 2023)). However, CSST will also provide unprecedented multi-band data to study the structure of galaxies and AGN and their evolution across cosmic time. CSST is expected to be launched by mid-2025 and, since the camera has not yet been finalized, there are currently no real images to be used as templates for this work.
However, in preparation for CSST operations and to empirically assess the performance of design and hardware, the CSST Science Ground Segment has implemented a suite of software based on the GALSIM (Rowe et al., 2015) framework to produce simulated images (Fang Y., private communication). Although these simulations are not optimized for science, they provide a realistic dataset to produce mock observations, suitable for science tests.
The code is made publicly available2 and can be used to generate pixel-level CSST exposures at different fidelity levels. The workflow of this simulation can be summarized by the following stages:
Footnote 2: [https://csst-tb.bao.ac.cn/code/csst_sim/csst-simulation](https://csst-tb.bao.ac.cn/code/csst_sim/csst-simulation)
1) Truth catalogues, survey strategies, _PSF_ samples, and field distortion model are prepared separately from the imaging simulation. In particular, the super-sampled _PSF_ stamps are calculated over the focal plane, and in four colours within each CSST bandpass via ray-tracing in CODE-V3.
Footnote 3: [https://www.synopsys.com/optical-solutions/codev.html](https://www.synopsys.com/optical-solutions/codev.html)
2) _PSF_ stamps are further interpolated in GALSIM to get a spatially-varying, quasi-chromatic _PSF_ model. In each exposure, each galaxy is modeled as the sum of an exponential disk and a de Vaucouleurs bulge, and each star is modeled by a simple Dirac delta function. Photon flux is assigned to each object according to its magnitude, SED, and the corresponding filter. Locations of objects are given by projecting their celestial coordinates onto image coordinates via the WCS and the field distortion model.
3) In each filter, the surface brightness profiles of objects are convolved with the _PSF_ model at their locations. Rendering is handled by the "photon-shooting" option in GALSIM. Stamps from all sub-bandpasses are stacked, and various detector effects are modeled and added to get the final image.
The simulated "raw" data produced by the CSST simulation group have been processed by the current version of the CSST pipeline. A detailed description of the pipeline will be provided in a dedicated paper (Fang Y., private communication). Generally speaking, this pipeline performs a chip-to-chip standard data reduction, including bias and overscan subtraction, flat-fielding, astrometric and photometric calibration, and cosmic ray subtraction.
For this paper, we make use of single-epoch observations, consisting of a \(150\,s\) exposure for which we have measured a limiting magnitude for extended sources of \(r\sim 25.3\)4. We have chosen the single-epoch data because the full depth for the CSST wide survey will be available over the whole wide-survey area only at the end of the 10-year mission of the satellite, while the single-epoch data will be the reference dataset available for the first years of telescope operations. Hence, it is worth demonstrating that the derivation of galaxy structural parameters and B-D decomposition are feasible also with half of the nominal depth of the survey. We will address forecasts for the full depth of the wide survey and for the deep field areas in forthcoming analyses.
Footnote 4: This has been obtained by comparing the number of observed galaxies with respect to the input catalog, as a function of the \(r\)-mag, and determining the magnitude where 50% of the input galaxies are lost.
#### 2.3.2 Simulating realistic B-D galaxy images
In this section, we describe in more detail the process to produce realistic multi-component galaxies made of bulges and disks. As seen in the previous paragraph, the CSST simulated images contain simple galaxies made of an exponential disk and a de Vaucouleurs profile, which are not representative of real systems (see §2.1). Rather, we intend to use a more physical distribution of the Sersic index in our analysis (see below, and also §1). Overall, the simulated images described in the previous section represent the most realistic template of what the CSST observations will look like and provide the necessary realism to our test in terms of background light, image distortions and the companion systems we might encounter in our analysis.
The overall simulation procedure is sketched in Fig. 3 and described in detail here below.
1) _Background images._ We randomly select small cutouts of size 135\(\times\)135 pixels, corresponding to about 10\(\times\)10 arcsec\({}^{2}\), from \(r\)-band CCDs of the CSST simulated images. To provide a realistic environment for the mock galaxies, we have allowed these "background" cutouts to contain other simulated sources, like stars and galaxies, with the further addition of cosmic rays. The only care we take, at this stage, is to remove those cutouts with a too bright source (galaxy or star) in the central region. We finally produce \(\sim 1200\) galaxy "background" cutouts. In Fig. 4 (top) we show a small sample of these images, where we can clearly distinguish the structure of the background noise and also the presence of simulated sources, from faint compact to bright extended ones. We further increase this sample via "augmentation", applying 90, 180 and 270 deg rotations and flipping, as sketched in the snippet below, to finally collect 9600 images.
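The 8-fold augmentation (four rotations, each also flipped) that turns the \(\sim 1200\) cutouts into 9600 images can be written, for instance, as:

```python
import numpy as np

def augment(cutout):
    """8-fold augmentation of a background cutout: the identity plus
    90/180/270 deg rotations, each version also mirrored left-right."""
    rotations = [np.rot90(cutout, k) for k in range(4)]
    return rotations + [np.fliplr(r) for r in rotations]
```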
2) _Galaxy magnitudes._ As anticipated in §1, to simulate Sersic profiles we use a realistic catalog of mock observations from cosmological N-body simulations, the one produced for the VR/LSST (CosmoDC2; (Korytov et al., 2019)), which contains galaxy parameters up to redshift \(z=3\). In CosmoDC2, each 2-component galaxy is characterized by a multitude of galaxy
Figure 3: Structural and positional parameters used in fitting the Sérsic models of both bulges and disks. To make realistic simulations, we combine the 2-Sérsic model, convolved with the _PSF_, with the CSST background (see §2.3.2).
properties, including stellar mass, morphology, spectral energy distributions, broadband filter magnitudes, host halo information, and weak lensing shear. For our work, we are interested in the so-called "structural parameters" of the two components, including luminosity, effective radius, axis ratio and position angle, for different photometric bands, namely \(ugrizy\); for this first test we will use only the \(r\)-band5, while we leave the multi-band analysis making use of the other bands for future work (see also §4.1 for a discussion). Regarding magnitudes, CosmoDC2 provides the total luminosities of the bulge and disk components of each galaxy, and the conversion formula:
Footnote 5: We need to remark that the CosmoDC2 mock catalogs are tailored to VR/LSST observations and the \(r\)-band filter, as well as the other filters, might slightly differ from the ones that will be used by CSST. We expect, though, that the differences are too insignificant to warrant correcting for possible zero-point offsets. We also remark that this color term does not impact our analysis, as the training and test samples make use of the same calibration.
\[mag_{r}=-2.5\log(L_{r})-2.5\log(1+z)+\mu(z) \tag{7}\]
where \(L_{r}\) is the \(r\)-band luminosity of either component, while \(z\) is the redshift and \(\mu(z)\) is the distance modulus of the galaxy.
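As a sketch, Eq. 7 can be evaluated as follows; the Planck18 cosmology used for \(\mu(z)\) is an assumption for illustration (CosmoDC2 adopts its own fixed cosmology):

```python
import numpy as np
from astropy.cosmology import Planck18

def mag_r(L_r, z):
    """Apparent r-band magnitude from Eq. 7; mu(z) is the distance
    modulus, here computed from an assumed Planck18 cosmology."""
    mu = Planck18.distmod(z).value
    return -2.5*np.log10(L_r) - 2.5*np.log10(1.0 + z) + mu
```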
Even though CosmoDC2 contains the parameters of \(\sim 2\) billion galaxies in the 440 deg\({}^{2}\) simulated area, most of them are too faint to be observed by CSST, whose wide survey is \(\sim 1.5\) mag shallower than VR/LSST. This difference is worsened as we are using only half of the total depth of the final CSST wide survey program. We will come back to the issue of completeness in §2.3.3.
3) _Other Sersic profile parameters_. To make predictions of the 10 parameters of the two-component Sersic model discussed in §2.1, we need to create a training sample realistically reproducing the expected distribution of intrinsic galaxy parameters. In Table 1 we give a summary of the parameters adopted in the simulation of the training sample, as derived from the CosmoDC2 catalog. Here we recall that, for our training sample, we have adopted an \(n\)-index distribution wider than that of the original CosmoDC2 catalog, taken from the observational sample of Kennedy et al. (2016). In particular, the \(n\)-index of the bulges, \(n_{\rm bulge}\), spans from \(n_{\rm bulge}\sim 1\) (pseudo-bulges) to \(n_{\rm bulge}=8\), while the \(n\)-index of the disks, \(n_{\rm disk}\), is canonically smaller than 2. We also remark that the CosmoDC2 mock catalogs do not impose any condition on the relative size of bulges and disks, including also cases where the effective radius of the bulge is larger than that of the disk (embedded disks). Although in principle we could adopt some empirical relation to bind bulge and disk sizes (see e.g. (Davari et al., 2014)), we decide here to leave more freedom to these particular prior distributions. Note that these are not the final "observed" parameter distributions, as the imposition of the SNR cut for accurate surface photometry will produce a further selection of the mock sample parameters (see Sect. 2.3.3).
The parameters in Table 1 are randomly sampled and used in Eqs. 1 and 3 to model the simulated galaxies that will be used as training and test samples. The only parameters that we decided to keep fixed, because of their minor physical meaning, are the galaxy centers (\(x_{\rm cen}\), \(y_{\rm cen}\)). These are assumed to be (0,0), i.e. the center of the "background" image. Here below we illustrate the steps to produce the mock images of these simulated galaxies.
Figure 4: A set of reduced \(r\)-band CSST simulated images (first row) and the same images with simulated galaxies according to the procedure illustrated in §2.3.2 (second row).
4) _PSF_. Accurately accounting for the _PSF_ is a crucial pre-requisite for unbiased structural parameter estimates ((Trujillo et al., 2001a)), even for space observations ((Gillis et al., 2020)). For our mock galaxies, we make use of self-made "Gaussian PSF images" (see Fig. 2), although these might be slightly different from the ones resulting from the ray tracing for the CSST simulations. For the sake of generality, we produce the Gaussian profile by assuming a circularly symmetric, Moffat-like profile with parameters reproducing a normal distribution. In particular, we adopt the following equation ((Trujillo et al., 2001b)):
\[\textit{PSF}(r)=\frac{\beta-1}{a^{2}\pi}[1+(\frac{r}{a})^{2}]^{-\beta} \tag{8}\]
where \(r\) is the distance from the center of the "_PSF_ image" in pixels, and where we choose \(\beta=100\) ((Trujillo et al., 2001b)), corresponding to a Gaussian distribution. The choice of the Gaussian profile is motivated by ray-tracing tests on the CSST optics showing that the _PSF_ can be approximately Gaussian (Fang Y., private communication). According to these tests, the FWHM in \(r\)-band is conservatively close to \(0.1^{\prime\prime}\) with \(\sim\)10% fluctuation. However, due to charge diffusion on the CCD, the "observed" FWHM can become significantly larger and asymmetric. To check this, we directly measured a dozen "simulated stars" in CSST images, obtaining a mean FWHM\(=0.150^{\prime\prime}\pm 0.005^{\prime\prime}\), with no signs of significant ellipticity and a minimal presence of tails deviating from a Gaussian in the outermost _PSF_ profiles. Finally, to add more realism to the _PSF_ effect on the mock galaxy images, we assume some CSST chip-by-chip variation, by taking the local Gaussian _PSF_ to have an FWHM drawn from a Gaussian distribution with mean FWHM\(=0.150^{\prime\prime}\) and standard deviation \(\sigma(\text{FWHM})=0.015^{\prime\prime}\). The size of these "PSF images" is 51\(\times\)51 pixels, or 3.8\(\times\)3.8 arcsec\({}^{2}\), which is wide enough to fully sample the FWHM of the PSF. Note that, at this stage, the details of the _PSF_ model are secondary, as they do not impact the accuracy of the predictions: as we have discussed elsewhere (Li+22), to maximize the accuracy the GaLNets need to learn the local PSF, regardless of the kind of model adopted. The model can be changed for the training and the direct modeling of real galaxies once the real _PSF_ of the instrument is measured on real images.
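A minimal sketch of this PSF generation follows, assuming the standard Moffat relation FWHM \(=2a\sqrt{2^{1/\beta}-1}\) to set the scale radius \(a\) in Eq. 8:

```python
import numpy as np

def moffat_psf(fwhm_arcsec, beta=100.0, npix=51, pixscale=0.074):
    """PSF stamp from Eq. 8; with beta = 100 the Moffat profile is
    effectively Gaussian. The scale radius a follows from
    FWHM = 2a * sqrt(2**(1/beta) - 1)."""
    a = fwhm_arcsec / (2.0*np.sqrt(2.0**(1.0/beta) - 1.0)) / pixscale  # pixels
    c = (npix - 1) / 2.0
    yy, xx = np.mgrid[:npix, :npix]
    r2 = (xx - c)**2 + (yy - c)**2
    psf = (beta - 1.0) / (np.pi * a**2) * (1.0 + r2/a**2)**(-beta)
    return psf / psf.sum()                     # normalise to unit flux

rng = np.random.default_rng(0)
fwhm = rng.normal(0.15, 0.015)                 # chip-by-chip FWHM variation
psf = moffat_psf(fwhm)
```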
5) _PSF convolution and final images_. This is the step where we convert a 2D, 2-component galaxy model into a realistic mock observation. As anticipated in §2.1, we first convolve the 2D model in Eq. 3 with the _PSF_ and then, after having added Poisson noise to the convolved profile, we add the resulting 135\(\times\)135 pixel (i.e. \(10^{\prime\prime}\times 10^{\prime\prime}\)) image to the background cutout. We remark here that this provides an area large enough to sample most of the light profile of galaxies with \(R_{\rm e}\sim 1^{\prime\prime}\) (corresponding to \(\sim 14\) pixels). For larger galaxies, though, the fraction of the total light enclosed in the cutout can be smaller. As discussed in Li+22, this does not constitute a problem, as the CNN is sensitive to the light gradients in the brighter regions, while it loses sensitivity in the low surface brightness, low SNR regions. Here, given the range of \(R_{\rm e}\) adopted in this paper, mostly \(<2^{\prime\prime}\), with a small fraction of systems having \(R_{\rm e}>2^{\prime\prime}\) at low-\(z\) (see the next §2.3.3), we still expect to generally sample an area enclosing \(2-3\,R_{\rm e}\), which is enough to clearly separate different profiles. Overall, the final cutout size was decided as a compromise between area sampling and computational speed, as the latter is a non-linear function of the cutout size, especially for data reading/writing.
This latter step produces a typical real-looking galaxy, as shown in Fig. 4 (bottom), for which we can measure the signal-to-noise ratio to verify that it is large enough to perform an accurate surface brightness analysis. In particular, the SNR is measured over a central area covered by the effective radius (\(R_{\rm e}\times R_{\rm e}\)).
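The assembly of step 5 and the SNR check can be sketched as follows; the gain and the background-rms-based SNR estimator are our assumptions, since the exact estimator is not spelled out here:

```python
import numpy as np
from scipy.signal import fftconvolve

def mock_observation(I_tot, psf, background, gain=1.0, rng=None):
    """Eq. 6 in practice: PSF-convolve the 2-component model, add Poisson
    noise to the convolved galaxy, then add a CSST background cutout.
    The gain (e-/ADU) is a placeholder for the counts-to-electrons step."""
    rng = rng or np.random.default_rng()
    conv = fftconvolve(I_tot, psf, mode="same")
    noisy = rng.poisson(np.clip(conv * gain, 0, None)) / gain
    return noisy + background

def central_snr(galaxy, background, re_pix):
    """Rough SNR over the central Re x Re area: summed galaxy flux over
    the background rms scaled by the number of pixels (assumed estimator)."""
    c = galaxy.shape[0] // 2
    h = max(int(re_pix) // 2, 1)
    box = (slice(c - h, c + h + 1),) * 2
    signal = galaxy[box].sum()
    noise = background.std() * np.sqrt(galaxy[box].size)
    return signal / noise
```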
In this paper, we use \(\text{SNR}=40\), which is slightly smaller than in previous GaLNets' experiments based on ground-based observations (Li+22), but large enough to obtain a reasonable accuracy for the structural parameters
**Table 1: Parameters for Simulating the Training Samples**

| Parameter | Range | Unit | Distribution |
| --- | --- | --- | --- |
| Mag | 17-24 | mag | Given by DC2 |
| \(R_{\rm e,bulge}\); \(R_{\rm e,disk}\) | 0.2-6 | arcseconds | Given by DC2 |
| \(q\) | 0.02-1 | - | Given by DC2 |
| \(PA\) | 0-180 | degrees | Given by DC2 |
| \(x_{\rm cen}\) | 0 | pix | fixed |
| \(y_{\rm cen}\) | 0 | pix | fixed |
| \(n_{\rm bulge}\) | 0.3-8 | - | F (n=30, d=5) |
| \(n_{\rm disk}\) | 0.5-2 | - | Normal (\(\mu=1\), \(\sigma=0.5\)) |

Note. – Range and distribution of the parameter values used to simulate the galaxies. \(\mu\) and \(\sigma\) are the mean and standard deviation of the normal distribution; n and d are the numerator and denominator degrees of freedom of the F distribution.
(see also (Simonyan & Zisserman, 2014), (Roy et al., 2018)). We will evaluate the impact of lower SNRs on the parameter predictions in future analyses, with the aim of pushing the completeness limit toward fainter fluxes (see also §2.3.3).
In Fig. 4 (bottom), we can also see the variety of "blending situations" we have included in our training sample, realistically accounting for the presence of compact stars and extended galaxies, often overlapping with the simulated systems, whose centers are, by construction, coincident with the cutout centers.
Following steps 1-5 above, we simulate 250 000 mock galaxies with redshift up to 1.5 (according to Table 1). Every simulated galaxy, to be selected as part of this mock galaxy sample, has to pass the SNR = 40 criterion (see §2.3.3 here below for a discussion of the selection effects). We finally split this sample into the training data, made of 200 000 mock galaxies, of which 40 000 are used for validation, and the test sample, made of the remaining 50 000 simulated galaxies.
#### 2.3.3 SNR selection and final parameter distributions
In Fig. 5 we show the distribution of the galaxy magnitude as a function of redshift for the galaxies obeying the \(\mathrm{SNR}=40\) criterion. From the plot, we see that the number of galaxies fainter than \(r=23\) suddenly drops at all redshifts, although the input distribution reaches \(r=24\) (see Table 1), showing that this is a fair approximation of the "completeness" limit of our B-D decomposition with one-epoch CSST observations, for SNR=40. Below this limit, we do not attempt, at this stage, to perform any surface brightness analysis. In the same figure we also see that, for \(z>1\), galaxies with SNR\(\geq\)40 are sparsely distributed, and almost disappear at \(z>1.5\). Hence, we keep only galaxies at \(z<1.5\) for the B-D modeling. Besides the magnitude, the SNR cut has an impact also on the distribution of all the other parameters. This is shown in Fig. 6, where we plot the \(n\)-index, \(r\)-mag and \(R_{\mathrm{e}}\) for bulges and disks. Starting from magnitudes, we notice that the distribution of disks becomes flatter at \(r>21.5\), meaning that faint disks are difficult to model and are suppressed in our final statistics. We also see that the distribution of the \(R_{\mathrm{e}}\) of the disks is changed, with a stronger suppression of \(R_{\mathrm{e}}>0.3\) and in particular of the peak at \(R_{\mathrm{e}}\sim 0.5\). Overall, the strongest effect is seen on the \(n_{\mathrm{bulge}}\) distribution. In particular, bulges with \(n_{\mathrm{bulge}}<3\) (including pseudo-bulges) tend to be selected out because their more diffuse profiles produce lower SNRs. This "selection effect" is possibly not a major issue for our analysis, as both the training and test samples follow the same parameter distributions (but see the Appendix). However, it raises a warning for the application to real galaxies, where the "selection effect" from the SNR limitation can produce an incompleteness of intermediate/high redshift pseudo-bulges (see the top-left panel of Fig. 6). The predominance of high \(n_{\mathrm{bulge}}\) has been found in previous analyses (e.g. (Allen et al., 2006)), which confirms that this is a realistic feature incorporated in our training sample. Finally, following the strong effect on \(n_{\mathrm{bulge}}\), we also show the impact of the SNR on the selection of the disk-to-bulge ratio, measured via the difference of the disk and bulge magnitudes, \(\Delta\)(D-B)\(=mag_{\mathrm{disk}}-mag_{\mathrm{bulge}}\), in \(r\)-band. Here we see that disk-dominated systems (\(\Delta\)(D-B)\(<-2.5\)) suffer a strong selection at all redshifts, except at \(z\lesssim 0.1\), while bulge-dominated disks (\(\Delta\)(D-B)\(>2.5\)) generally pass the SNR=40 criterion at \(z<0.5\), with the selection becoming looser at higher redshifts. This is a combined effect of the parameter selections seen in the top row of the same figure. The positive note is that fully disk-dominated or bulge-dominated systems can be rather accurately modeled by a single-component model (see e.g. a discussion in (Allen et al., 2006)). Although in this paper we consider a similar SNR cut also for the 1-component analysis (see §4.4), in the future we can test reducing the SNR requirement for these simpler models and possibly mitigate this incompleteness effect.
## 3 Training and testing the GaLNet-BD
As mentioned in §2.1 (see also Fig. 2), the inputs of the GaLNet-BD are the 135 \(\times\) 135 pixel galaxy images and the 51 \(\times\) 51 pixel _PSF_ images. The outputs are the 10 parameters, i.e. \(mag\), \(R_{\mathrm{e}}\), \(n\), \(q\), and \(PA\) for the B and D components, which best predict the observed
Figure 5: Redshift versus total magnitude before (gray points) and after (colored points) the SNR cut. We can see that the SNR cut produces a drop in the galaxies accessible to our analysis at \(r>23\). Also, galaxies at high redshift (\(z>1.5\)) have too low an SNR (\(<40\)) to qualify for the 2-component fitting.
surface brightness distribution of each galaxy. Here below we detail the CNN training and the testing of the B-D decomposition, using the simulated datasets introduced in §2.3.2.
### Training and Validation
The key to training any CNN is minimizing the loss function. Instead of traditional loss functions like the mean square error (MSE) and mean absolute error (MAE), we choose the "Huber" loss function ((Friedman, 2001)) with an "Adam" optimizer ((Kingma & Ba, 2014)). As discussed in Li+22, unlike the MSE, the "Huber" loss is less sensitive to outliers in the data, giving them smaller weights and hence allowing the CNN to quickly and efficiently converge by focusing on the low-scatter datapoints. Although, by definition, the MAE is also only mildly sensitive to outliers, it tends to give convergence problems, as it does not efficiently weight the gradients in the loss function with the errors (for instance, it does not allow large gradients to converge in the presence of small errors). This affects the overall convergence speed of the training process, sometimes even leading to convergence failures. On the other hand, the "Huber" loss has been proven to combine the advantages of the MSE and MAE, providing better accuracy and robust convergence. The "Huber" loss is defined as:
\[L_{\delta}(a)=\left\{\begin{array}{ll}\frac{1}{2}(a)^{2},&|a|\leq\delta\\ \delta\times(|a|-\frac{1}{2}\delta),&\text{elsewhere}\end{array}\right. \tag{9}\]
where \(a\) is defined as \(a=t-p\), in which \(t\) is the label (true value) of the simulation and \(p\) is the prediction given by the CNN. When the prediction deviation \(|a|\) is smaller than \(\delta\), the loss is a squared error; otherwise, it reduces to a linear function. After some trials with different \(\delta\)s, we have found that the CNN performs best with \(\delta=0.001\).
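Eq. 9 translates directly into the following sketch:

```python
import numpy as np

def huber_loss(t, p, delta=1e-3):
    """Huber loss of Eq. 9: quadratic for residuals |a| <= delta,
    linear beyond, so outliers get smaller weight than with the MSE."""
    a = np.abs(t - p)
    return np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta))
```

In a Keras setup this corresponds, for instance, to compiling the model with `tf.keras.losses.Huber(delta=1e-3)` and the `adam` optimizer.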
### Statistical indicators
To statistically assess the accuracy and precision of the predictions obtained from the GaLNet-BD, both in the training/validation phase and in the testing phase (see §4), we adopt three diagnostics.
Figure 6: Distribution of the mock galaxy parameters before (blue) and after (orange) the SNR cut. Top row: the disk and bulge distributions of \(R_{e}\), \(n\) and \(mag\), from left to right, show that the SNR = 40 criterion suppresses especially the small-sized disks, the small \(n_{\rm bulge}\) values and the low-luminosity disks. All other parameters maintain the same distribution behavior. The black histogram in the magnitude plot represents the distribution of the total magnitudes of the B+D systems, showing the confusion limit at \(r\sim 23\) (see text). Bottom row: the redshift distribution of \(n_{\rm bulge}\) and of the disk/bulge ratio as measured by the \(\Delta\)(D-B) parameter (see text). Here we see (left panel) that only a small part of the bulges with low \(n_{\rm bulge}\) survive the SNR cut. We also see (right panel) that disk-dominated systems (\(\Delta\)(D-B)\(<-2.5\)) suffer a strong selection at all redshifts, while bulge-dominated disks (\(\Delta\)(D-B)\(>2.5\)) pass the SNR=40 criterion, with the selection becoming looser at \(z>0.5\).
1) R squared (\(R^{2}\)), defined as:
\[R^{2}=1-\frac{\sum_{i}(p_{i}-t_{i})^{2}}{\sum_{i}(t_{i}-\bar{t})^{2}}, \tag{10}\]
where \(t_{i}\) are the ground-truth values, \(p_{i}\) are the GaLNets' predicted values, and \(\bar{t}\) is the mean value of the \(t_{i}\)s. According to this definition, \(R^{2}\) is 0 for no correlation (low accuracy) between ground-truth and predicted values and 1 for a perfect correlation (high accuracy). This diagnostic quantifies how close the labeled input values and the CNN outputs are to the 1-to-1 relation, i.e. the accuracy of the GaLNet-BD predictions.
2) Normalized median absolute deviation (NMAD). We first define the relative bias as
\[\Delta p=\frac{p_{i}-t_{i}}{t_{i}}, \tag{11}\]
except for magnitudes, which are logarithmic quantities and for which \(\Delta p=p_{i}-t_{i}\). Then, the NMAD is defined as:
\[\text{NMAD}=1.4826\times\text{median }(|\Delta p-\text{median }(\Delta p)|). \tag{12}\]
This gives a measure of the overall scatter of the predicted values with respect to the 1-to-1 relation, i.e. the precision of the method.
3) Fraction of outliers. This is defined as the fraction of estimates discrepant by more than 15%, i.e. satisfying the condition \(|\Delta p|>0.15\), similarly to what is usually adopted for outliers in photometric redshift determination (see, e.g., (Amaro et al., 2021)). This gives a measure of the catastrophic predictions, which strongly deviate from the true values and can be driven by anomalous data, failures in the convergence of the CNN, etc.
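Eqs. 10-12 can be computed, for instance, as in the following sketch:

```python
import numpy as np

def diagnostics(t, p, magnitude=False):
    """R^2, NMAD and outlier fraction (Eqs. 10-12) for truth t and
    prediction p. For magnitudes the bias is the plain difference p - t;
    for the other parameters it is the relative bias (p - t)/t."""
    r2 = 1.0 - np.sum((p - t)**2) / np.sum((t - t.mean())**2)
    dp = (p - t) if magnitude else (p - t) / t
    nmad = 1.4826 * np.median(np.abs(dp - np.median(dp)))
    outlier_frac = np.mean(np.abs(dp) > 0.15)
    return r2, nmad, outlier_frac
```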
The hyperparameters are optimized using the validation sample, and the loss is found to reach its minimum after about a dozen epochs. We also do not see signs of overfitting at later epochs.
After training, we use the test sample (see §2.3.2) to check the GaLNets' performance, comparing the ground-truth values of each parameter used to simulate the galaxies with the predicted values for the disk and bulge components.
## 4 Test Sample Results and Discussion
In this section we discuss in detail the GaLNet-BD performance over the test sample. As we currently have no real data at our disposal, this is the only sample we can use to benchmark the performance of the tool for future applications on observations. Although idealized, the simulated sample contains a rather high level of observational detail in terms of seeing, noise, background, and parameter distribution. Furthermore, the 2-component models can capture most of the physical properties of real galaxies (see e.g. Gao and Ho, 2017), although a caveat in order is that the presence of substructures in real galaxies (like bars and spiral arms) can impact the correct parameter estimates in real applications (see e.g. Gao et al., 2019; Li et al., 2022). Despite this limitation, we believe the simulated data adopted here are a fair knowledge base to train our tools for future CSST data (see e.g. Li+22 for a similar application to KiDS galaxies).
We conclude this section by remarking that, regardless of the specific application to CSST we discuss in this paper, the procedure illustrated above can be easily generalised to any other space instrument, provided that an accurate knowledge of the "background" and the _PSF_ of the typical observations is available (see again §2.3.1, and also Li+22).
### GaLNet-BD performances
The diagnostics obtained for all the 10 bulge and disk parameters are listed in Table 2, while the predictions (output) of the GaLNet-BD versus the ground-truth values (input), for the 50k galaxies of the test sample, are shown in Fig. 7. Looking at this figure and the \(R^{2}\) in Table 2, we find a very good accuracy (\(R^{2}\)) for \(mag\), \(R_{\text{e}}\), and PA for both disk and bulge, although for the PA there is a rather large fraction of outliers around 0/180 deg, mainly driven by the round systems (\(q\sim 1\)), for which the PA is rather uncertain. For these three quantities, we find similar scatters (NMAD) and outlier fractions for both bulges and disks (from Table 2).
Table 2: Statistical Properties of the Prediction

| Test | Component | \(mag\) | \(R_{\rm e}\) | \(PA\) | \(q\) | \(n\) |
|---|---|---|---|---|---|---|
| \(R^{2}\) | Disk | 0.8723 | 0.9237 | 0.8654 | 0.9292 | 0.1019 |
|  | Bulge | 0.9475 | 0.8854 | 0.8654 | 0.6416 | 0.9875 |
| Outlier frac. | Disk | 0.1004 | 0.0257 | 0.1139 | 0.0082 | 0.1918 |
|  | Bulge | 0.0835 | 0.0206 | 0.1142 | 0.0218 | 0.0067 |
| NMAD | Disk | 0.1137 | 0.1753 | 0.0599 | 0.1155 | 0.3035 |
|  | Bulge | 0.1042 | 0.1726 | 0.0599 | 0.0863 | 0.0537 |

Note. – Statistical properties of the predictions on the simulated testing data. From top to bottom we show the \(R^{2}\), the fraction of outliers, and the NMAD for the magnitude \(mag\), effective radius \(R_{\rm e}\), position angle \(PA\), axis ratio \(q\), and Sérsic index \(n\). Generally, the predictions for the bulge component are better than those for the disk.
These show that \(mag\), \(R_{\rm e}\), and PA are rather robustly constrained.
Moving to the other parameters, the Sérsic index of the bulge, \(n_{\rm bulge}\), is rather well constrained, showing a high \(R^{2}\) and a small outlier fraction. On the other hand, the \(n\)-index of the disk, \(n_{\rm disk}\), is poorly predicted (\(R^{2}=0.10\), see Table 2). We see the opposite situation for the axis ratio. The \(q\) of the bulge, \(q_{\rm bulge}\), is poorly predicted (\(R^{2}=0.64\)) and shows a rather large fraction of outliers, while that of the disk, \(q_{\rm disk}\), shows a much higher accuracy (\(R^{2}=0.93\)). This is also well understood, as the bulge is embedded in the disk and its axis ratio easily gets mixed with that of the disk, while the disk geometry is better defined toward the edge of the galaxy.
The most striking result is the bad performance for \(n_{\rm disk}\) with respect to the tighter constraints on \(n_{\rm bulge}\), although this is not unexpected. In fact, in the galaxy center, the \(n_{\rm bulge}\) is generally dominant over that of the disk, and the CNN tends to associate the peak in the central, high-SNR regions of a galaxy image with the dominant component (the bulge). On the other hand, the smoother peak of the lower \(n\)-index (\(<2\)) disks is generally embedded in the bulge intensity. Here, the only way for the GaLNet-BD to constrain the \(n_{\rm disk}\) is to guess it, together with the other disk parameters, from the outer galaxy regions, where the disk dominates. Indeed, as also discussed in Li+22, the GaLNets seem to learn how to predict the parameters from the light density gradients. For this reason, we can also see that the \(R_{\rm e}\) of the disk, \(R_{\rm e,disk}\), is better predicted than that of the bulge, \(R_{\rm e,bulge}\), because disks dominate in the lower-SNR outer regions and the GaLNet-BD can correctly recover the main parameters connected to the surface brightness gradient far from the center (i.e. the effective radius and the total luminosity, not the \(n\)-index, as discussed above). Going toward the center, though, the gradient of the bulge component is strongly affected by the disk density profile around \(R_{\rm e,bulge}\) and, because of that, this latter parameter is slightly worse constrained than \(R_{\rm e,disk}\) (see Fig. 7 and \(R^{2}\) in Table 2).
Finally, in Fig. 8, we show a sample of data/residual images. For each example, on the left there is the galaxy cutout (labeled "Input") used as input for the GaLNet-BD, where we also report the "true" parameters. On the right (labeled "Residual") is the residual image after the predicted 2D model from the CNN has been subtracted, with, at the bottom, the reduced \(\tilde{\chi}^{2}\) defined following Li+22:
\[\tilde{\chi}^{2}=\sum_{i\neq 0}\frac{(f_{i}-m_{i})^{2}}{\sigma_{\rm bkg}^{2}}/ \mathrm{dof} \tag{13}\]
Figure 7: Comparison between the true values and the predicted values of the two components on simulated data. In each panel, the horizontal axes are the true values and the vertical axes are the predictions from the GaLNets. The error bars are the mean absolute errors in each bin, while the labels report the absolute errors for \(mag\) and the relative errors for the others.
where \(f_{i}\) are the observed pixel fluxes of the galaxy within the effective radius, \(m_{i}\) are the model values in each pixel, \(\sigma_{\rm bkg}^{2}\) is the background noise variance, and dof = (N. pixels \(-\) N. fit parameters). Note that in Eq. 13 we exclude the central pixel (\(i=0\)) because it is too sensitive to small variations of the parameters (especially the Sérsic index) during the convolution with the _PSF_ used to reconstruct the modeled galaxy. This could cause strong deviations from the true "observed" fluxes at \(i=0\), hence artificially degrading the overall \(\chi^{2}\). Fig. 8 shows three different groups of best-fit "goodness": the very good ones (\(\tilde{\chi}^{2}<1\)), the mid-quality ones (\(1<\tilde{\chi}^{2}<2\)), and the bad ones (\(\tilde{\chi}^{2}\gg 2\)).
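In code, Eq. (13) amounts to a masked sum of squared residuals. The sketch below assumes `mask` is a boolean array selecting the pixels within the effective radius with the central pixel already excluded, and `n_params = 10` for the B+D model; both are assumptions of this illustration, not details from the pipeline.

```python
import numpy as np

def reduced_chi2(flux, model, sigma_bkg, mask, n_params=10):
    """A-posteriori reduced chi^2 of Eq. (13). flux and model are 2D images,
    sigma_bkg is the background noise RMS, mask selects the pixels used."""
    dof = mask.sum() - n_params
    return np.sum((flux[mask] - model[mask])**2) / sigma_bkg**2 / dof
```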
From the figure we can see that the performance of the GaLNet-BD is generally good both for isolated galaxies and for more crowded situations, where there are close systems. Major failures occur if the galaxy has a compact bulge and disk (e.g. at high redshift) in a crowded area, or for very bright systems (\(r<18\)). We believe this latter case is partially due to the smaller number of galaxies present in the training sample in this bright magnitude range (see e.g. Fig. 5). We discuss the impact of the distribution of priors for the training sample in Appendix A.
In Fig. 9, we finally show the distribution of the \(\chi^{2}\) for the full test sample. We can see here that \(\sim 60\%\) of the sample shows a good fit (\(\tilde{\chi}^{2}<2\)), which represents a reasonably high fraction. We remark that this might be a lower limit, as we are not subtracting the cases where the model is good even in the presence of a blended source, which can still contribute to the residual image in the \(\chi^{2}\) calculation. To have a measure of these possible "contaminants", which can even make the prediction fail (see e.g. Fig. 8), we have estimated the number of background images with an RMS larger than the median value of the majority of them (RMS\(\sim\)32 counts) to be of the order of 15%. This means that very likely a significant fraction of the 40% having \(\tilde{\chi}^{2}>2\) might still have a rather "good" fit and residual map. We show some of these examples in Fig. 10.
As a final note, the \(\chi^{2}\) is not a measure of the accuracy of the predictions but, rather, of the ability to reproduce the observed surface brightness distribution of the galaxies.
Figure 8: Residual maps, obtained from simulated "galaxy" images (left panels), after subtracting the Sérsic models reconstructed by the GaLNet-BD ("Residual images", right panels) – see §4.1. At the left side of each input image, we report the _mag_, \(R_{e}\), and \(n\) of the disk and the bulge. We also report the a posteriori reduced \(\tilde{\chi}^{2}\) at the bottom of each residual image. Images are ordered by \(\chi^{2}\), with the left column presenting cases with \(\tilde{\chi}^{2}<1\), the two central columns the cases with \(1<\tilde{\chi}^{2}<2\), and the right column \(\tilde{\chi}^{2}\gg 2\).
Figure 9: Distribution of the \(\chi^{2}\), as defined in Eq. 13, for the GaLNet-BD predictions. The peak at \(\chi^{2}<2\) contains \(\sim\)60% of the sample.
Due to the degeneracies among the parameters, the predicted target values can deviate from the ground truth and yet combine to give a good "fit" to the data. This is possibly an intrinsic problem of multi-parameter fitting that does not have a simple solution. One possibility would be to use a multi-band approach (see e.g. Vika et al., 2014; Haussler et al., 2022) and try to minimize the systematics on the Sérsic parameters by assuming that they do not change too much from one band to another. We will address this multi-band analysis in a future paper, where we foresee that using "transfer learning" from one band to the others will likely help break these degeneracies.
### Accuracy and Precision dependence on SNR, mag, D/B and redshift
To conclude this section, we want to check the performance of the GaLNet-BD as a function of specific parameters that can differently impact the accuracy and precision of the predicted targets. In particular, we have seen that a high SNR is a pre-requisite for accurate predictions, at the cost of a significant selection effect on the accessible parameter space (see Fig. 5). Hence, it is natural to ask what degradation of the accuracy and precision we should expect for SNRs close to the lowest adopted limit and, consequently, for the faintest reachable magnitudes. A different argument applies to redshift. Although the high-\(z\) sample fully obeys the SNR requirements, a further complication of its analysis comes from the intrinsically small galaxy angular sizes, implying that most of the galaxy information is concentrated in a dozen pixels around the centers. In this case, the performance of the GaLNet-BD can be governed by the limited number of features, which is close to the number of parameters that the CNN aims at constraining, rather than by the SNR. This is a situation equivalent to having a limited number of degrees of freedom in standard best-fitting techniques, which causes either high degeneracy among the parameters or rather large uncertainties. Finally, we also consider the total magnitudes in the \(r\)-band and the Disk-to-Bulge ratio, \(\Delta\)(D-B). This latter property is particularly important to see how the relative mix of the two components might impact the ability of the GaLNet-BD to infer their intrinsic structural parameters.
In Fig. 11 we give an impression of the accuracy and precision of the predictions using three indicators. First, the relative accuracy is measured by the relative bias, \(\Delta p\), defined in Eq. 11; then, the outlier fraction and the scatter as measured by the NMAD. They are all derived for the main structural parameters of the bulges and disks, i.e. the \(n\)-indexes and \(R_{\rm e}\)s. Finally, in the bottom row of the same figure, we report the distributions of the test sample parameters, which mirror those of the training sample by construction. From left to right we notice that:
1) The \(\Delta p\), outlier fraction, and NMAD show almost no variation with the SNR, meaning that the limit of \({\rm SNR}=40\) is well justified. For the bulge and disk \(R_{\rm e}\) and for the bulge Sérsic index, the bias and outlier fractions are almost absent and the NMAD shows a rather small scatter; only \(n_{\rm disk}\) shows a rather systematic offset of \(\sim\)5%, which is still within the scatter (NMAD\(\sim 0.2\)). The constancy of the offset and scatter suggests that the systematics of the \(n_{\rm disk}\) are independent of the SNR.
2) The behaviour of the statistical indicators as a function of the (\(r\)-band) luminosity mirrors that of the SNR in the left column. The \(n_{\rm disk}\) is the quantity that shows the largest systematic offset and outlier fraction; its NMAD, on the other hand, is similar to that of the other quantities. From these plots we also conclude that there is no significant variation in the overall accuracy and precision of the predictions as a function of the luminosity.
3) On the other hand, there is a clear dependence of all the indicators on \(\Delta\)(D-B). When disks dominate (i.e., \(\Delta\)(D-B)\(\lesssim 0\)), almost all indicators are reasonably small (\(|\Delta p|<0.1\), out. frac. \(<0.1\), and NMAD\(<0.1\)), except at very small \(\Delta\)(D-B) (\(<-3\)), where the training/test samples are underrepresented. For \(\Delta\)(D-B)\(\gtrsim 0\) the disk parameters start to degrade (especially the outlier fraction and NMAD), with the disk Sérsic index being the worst affected parameter. This is consistent with what is
Figure 10: Some examples of residual maps exhibiting a "good" fit but a bad \(\tilde{\chi}^{2}\) due to the presence of close or blended sources. Top: mock galaxies in the center of the field; bottom: residual images. As in Fig. 8, at the left side of each residual image we report the _mag_, \(R_{\rm e}\), and \(n\) of the disk and the bulge. We also report the a posteriori reduced \(\tilde{\chi}^{2}\) at the bottom of each residual image.
seen in Fig. 7 and Table 2. This suggests that the real driver of the systematics of the disk parameters is the presence of a dominant disk. We also note a trend of the \(R_{\rm e,bulge}\) accuracy to degrade (i.e. larger negative \(\Delta p\)) for dominant bulges (\(\Delta\)(D-B)\(\gtrsim 0\)), which seems counter-intuitive but which we can trace to the decreased sample size (see the histogram at the bottom), due to the lower density of bulge-dominated systems at higher \(z\) (see below).
4) There seems to be a weak trend of the three indicators toward a degradation at higher redshift for \(n_{\rm disk}\) and \(R_{\rm e,bulge}\). As we have seen in Table 2 and discussed in §4.1, these are the two quantities that are recovered with lower accuracy in general and, moving toward higher redshift, they become less accurate and noisier. This is mainly due to the small angular size of galaxies, which reduces the number of pixels reaching enough SNR in the outskirts to perform an accurate analysis of the density profiles. For the bulge quantities, at high \(z\), there is the additional problem (see above) of the poorer training sample, due to the lower numerical density of dominant bulges. We finally notice that for \(z<1\) all the indicators remain stable at the same level as the average values found as a function of the SNR (in the left panels), i.e. \(\Delta p\sim\)5%, NMAD\(\sim\)0.15 or smaller, and outlier fractions \(<\)5%, except for the \(n_{\rm disk}\), for which it is \(\sim 25\%\). This shows that \(z\leq 1\) is possibly a conservative upper limit for the B-D analysis with 1-epoch data. Likely, with deeper images, the SNR of these systems will reach higher levels out to larger galactocentric distances, hence increasing the number of pixels to be used as features by the CNN. Along the same line of argument, we can also expect that, by limiting the number of parameters to constrain, we can push this limit further ahead in redshift. E.g., as not all galaxies are multi-component systems, but can often be well approximated by a single dominant component, we can check whether, using a 1-Sérsic GaLNet, we can reach high accuracy and precision for higher-\(z\) galaxies. With this aim, in the next section we will assume a population of 1-component systems still represented by a general Sérsic profile with a wide range of parameters spanning from disks to spheroidal galaxies.
Figure 11: Bias (\(\Delta p\)), outlier fraction (out. fr.), and scatter (NMAD) as functions of SNR, total magnitude, \(\Delta\)(D-B)\(=mag_{\rm disk}-mag_{\rm bulge}\) in the \(r\)-band, and redshift. In each panel, blue and orange lines are for the Sérsic index of the bulge and disk, respectively, and green and red for the effective radius of the bulge and disk, respectively (see legend). Bottom row: the number distributions in the corresponding parameters.
### On the B/T prediction
A clear result of the B-D decomposition performed in the previous section is that the best constrained targets are the bulge and disk magnitudes (mean \(R^{2}\sim 0.9\)). Typical precisions of \(mag_{\rm bulge}\) and \(mag_{\rm disk}\) are of the order of 15%, although we also observe a significant degradation at magnitudes \(r>22\). Looking at these quantities separately, though, does not give a sense of the overall accuracy of the bulge-to-total ratio, B/T, which is a standard proxy of galaxy morphology (Simien and de Vaucouleurs, 1986; Graham and Worley, 2008) and correlates with other physical parameters (e.g. Kormendy and Ho, 2013; Bluck et al., 2014). Hence, here we want to explicitly quantify the accuracy and precision of the B/T derived from our B-D decomposition. In Fig. 12 we show the predicted vs. true B/T values. The former are obtained from the ratio of the fluxes corresponding to the predicted bulge and total magnitudes, while the latter are the same input quantities from CosmoDC2. From the data in Fig. 12, we have estimated an \(R^{2}\) of 0.80, an NMAD of 0.06, and an outlier fraction of 0.05. Note that, since B/T is by definition a quantity smaller than one, we have redefined \(\Delta p=(p_{i}-t_{i})/(1-t_{i})\) in Eqs. 11 and 12, similarly to what is usually done for galaxy redshifts (e.g. Amaro et al., 2021). The \(R^{2}\) we obtain is worse than those found for the bulge and disk magnitudes separately, suggesting a lower overall accuracy of the B/T. This might come from the tails of the faint bulges and disks seen in Fig. 7 at \(r>22\), which propagate into the B/T plot above. To check this, in the same figure we also plot the B/T predictions of a "bright" sample, defined as the galaxies having \(mag_{\rm disk}\) and \(mag_{\rm bulge}\) smaller than \(r=22\) (orange points and errorbars in Fig. 12). In this case we obtain \(R^{2}=0.83\), NMAD=0.06, and an outlier fraction of 0.06, which are almost equivalent to the full sample. This means that the B/T parameter is very sensitive to the intrinsic scatter of the magnitudes (regardless of how small) of the two components, which translates into a lower accuracy. Looking at Fig. 12, we also notice that the maximum of the scatter (and outlier fraction) is concentrated in the interval \(0.2<\) B/T \(<0.8\), while at small and large B/T, where either the disks or the bulges dominate respectively, the scatter is reduced. This suggests that the uncertainties are larger where the two components coexist and are reduced in the presence of a dominant component. In this latter case, though, we notice that, since the disks are more poorly constrained (see Table 2), there are some clear systematics at B/T\(<0.2\), which are reduced for the "bright" sample (see the orange mean datapoint at B/T\(<\)0.2). We notice that a systematic overprediction of the B/T was also found by the CNN presented in Grover et al. (2021), although the two results cannot be directly compared, as their analysis is based on a completely different approach (no B-D decomposition) and data (ground-based, bright \(r<18\) galaxies).
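For completeness, the conversion from the predicted component magnitudes to B/T, and the redefined \(\Delta p\) used above, can be sketched as follows (assuming the standard magnitude-to-flux conversion):

```python
import numpy as np

def bulge_to_total(mag_bulge, mag_disk):
    """B/T from the predicted component magnitudes via flux ratios."""
    f_bulge = 10.0**(-0.4 * mag_bulge)
    f_disk = 10.0**(-0.4 * mag_disk)
    return f_bulge / (f_bulge + f_disk)

def delta_p_bt(p, t):
    """Relative bias redefined for B/T (a quantity < 1), as for photo-z."""
    return (p - t) / (1.0 - t)
```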
### One-component galaxy systems
The 2-Sérsic profiles reproduce the majority of lenticular and late-type systems. However, most elliptical galaxies are characterized by a dominant spheroidal component, if one excludes the extended stellar haloes that are ubiquitously found around bright elliptical galaxies at very faint surface brightness levels (Iodice et al., 2016; Duc, 2020). These latter are generally detected at low redshift and are possibly difficult to recognise at \(z>0.5\) (e.g. Sackett et al., 1994; Zibetti and Ferguson, 2004; Bakos and Trujillo, 2012; but see, e.g., Trujillo and Bakos, 2013; Golden-Marx et al., 2022). Hence, there is a large variety of galaxies that are well described by a 1-Sérsic profile. For these systems, due to the lower complexity and the absence of the strong degeneracies introduced by the superposition of multiple components, we can reasonably expect to obtain robust GaLNet predictions even at \(z>1\), which somehow sets an upper limit for the GaLNet-BD on 1-epoch CSST
Figure 12: Comparison between the true B/T and the predicted B/T of the two components on the full sample of mock CSST galaxies (blue dots) and the "bright" sample, with \(mag_{\rm disk}\) and \(mag_{\rm bulge}\) selected to be brighter than \(r=22\) (orange points). The horizontal axes are the true values and the vertical axes are the predictions from the GaLNets. The error bars are the normalized median absolute deviation (NMAD) in each bin. The four stripes of galaxies at B/T\(>0.8\) come from some peaks in the CosmoDC2 distributions.
data, as seen in §4.2. In the same section, we have also discussed that the major limitation imposed by high \(z\), even for space observations, is the small angular size of galaxies, which reduces the number of useful pixels to constrain a large number of parameters. By limiting ourselves to 1-Sérsic profiles, we reduce the number of parameters by a factor of two, hence leaving a larger number of degrees of freedom for our models.
In this test, we use the same method and CNN architecture as adopted in the first GaLNet work (Li+22). In particular, we use the GaLNet-2 and train it over the CSST mock observations and the local _PSF_ (as in §2.3.2). For this purpose, we randomly collect 200 000 galaxies from the 1-Sérsic galaxy catalog provided by CosmoDC2. Even in this case, though, we needed to override the \(n=4\) set in CosmoDC2 for the 1-Sérsic models and instead use a more realistic log-normal \(n\)-index distribution as in Li+22. After having produced the _PSF_-convolved models, with Poisson noise, and added these to the "Background" images as described in §2.3.2, we impose the condition SNR\(>\)40. Once again, this produces an alteration of the final parameter distribution, but a less severe one than for the 2-component models, as shown in Fig. 13 for the magnitude distribution vs. redshift of the 200 000 mock galaxies compared with the original CosmoDC2 catalog. Here we see that there is no sharp drop of magnitudes after \(r\sim 23\), while there is a rather large, albeit patchy, population of galaxies at \(z>1.5\). The non-uniform distribution is possibly due to the volume and resolution of the CosmoDC2 simulation, which loses the details of the field galaxies and picks "Large Structure" populations at high redshift, such as the ones shown around \(z=2\) and \(z=2.5\) in Fig. 13. We finally decide to avoid the sparse sample at \(z\gtrsim 1.75\) and use this value as the upper limit of our analysis.
The training and testing of the GaLNet-2 both follow the same steps as described in §3. Furthermore, in order to fully evaluate the model performance on high-redshift galaxies, besides the test sample of 50 000 galaxies randomly taken at all redshifts (full-\(z\), hereafter), we specifically selected 25 000 galaxies with redshift \(>0.5\) as a further high-\(z\) test sample (high-\(z\), hereafter).
The final results on the two test samples are reported in Fig. 14, where we plot the predicted parameters vs. the ground-truth values. In Table 3, we report the statistical indicators for the predicted targets, broken into the low-\(z\) (\(z<0.5\)) and high-\(z\) (\(z>0.5\)) samples. For the low-\(z\) sample, we can see a generally good accuracy on the main galaxy parameters (\(mag\), \(R_{\rm e}\), and \(n\)), with small systematics only at higher \(n\). The accuracy is degraded for the high-\(z\) sample, especially for a certain tendency to underestimate the effective radii at \(R_{\rm e}>1^{\prime\prime}\), to overestimate the Sérsic index at \(n=4-6\), and to underestimate it at larger \(n\). This is reflected in Table 3, where we clearly see that \(mag\) and \(q\) are rather accurately reproduced at all redshifts, while the indicators all degrade for the higher-redshift sample. This is also shown in more detail in Fig. 15, which contains the statistical indicators as a function of SNR, \(mag\), and redshift.
Overall, the relative biases (see inset labels) are usually \(<10\%\) at all luminosities, while a clear increase of \(\Delta p\) is observed at \(z>1\), although remaining confined within 20% at \(z\sim 1.5\). All other indicators show
Figure 13: Redshift versus total magnitude before (gray points) and after (color \(+\) bar) the SNR cut, for the 1-Sérsic model run. At a fixed SNR, the 1-Sérsic models are generally fainter than the 2-Sérsic ones. This is due to the fact that the disk component generally contributes a shallower profile that dilutes the SNR inside the \(R_{\rm e}\), as defined in §2.3.3. Compared to the 2-Sérsic case in Fig. 5, here galaxies pass the \({\rm SNR}=40\) criterion down to \(mag_{r}\sim 23.5\) and out to redshift \(z>1.5\), although they are too sparse at \(z\gtrsim 1.75\), and we will retain only galaxies below this redshift limit.
Table 3: Statistical Properties of the 1-Sérsic Prediction

| Test | Sample | \(mag\) | \(R_{\rm e}\) | \(PA\) | \(q\) | \(n\) |
|---|---|---|---|---|---|---|
| \(R^{2}\) | low redshift | 0.9981 | 0.8910 | 0.8415 | 0.9776 | 0.8894 |
|  | high redshift | 0.9782 | 0.7641 | 0.7410 | 0.9332 | 0.6718 |
| Outlier frac. | low redshift | 0.0005 | 0.0707 | 0.1718 | 0.0013 | 0.1053 |
|  | high redshift | 0.0015 | 0.0910 | 0.2485 | 0.0100 | 0.1875 |
| NMAD | low redshift | 0.0095 | 0.0536 | 0.0387 | 0.0134 | 0.0661 |
|  | high redshift | 0.0144 | 0.0684 | 0.0685 | 0.0225 | 0.0920 |

Note. – Statistical properties of the predictions on the simulated testing data. From top to bottom we show the \(R^{2}\), the fraction of outliers, and the NMAD for the magnitude \(mag\), effective radius \(R_{\rm e}\), position angle \(PA\), axis ratio \(q\), and Sérsic index \(n\). Generally, the log-prediction is better than the linear prediction.
reasonably modest values in terms of scatter (NMAD\(<0.2\)) and outlier fraction (\(<0.25\)).
Compared to the results of the GaLNet-2 in Li+22, here we can show the advantages of space observations in terms of depth and angular size. Indeed, with the SNR=40 cut, the GaLNet-2 can perform the Sérsic analysis down to \(r\sim 24\), while in Li+22, where we used a more conservative SNR=50, we reached \(r\sim 22\). In terms of angular size, Fig. 14 shows that with space observations, also at \(z>1\) (see right column), the accuracy of sub-arcsec effective radii is reasonably good down to \(\sim 0.15^{\prime\prime}\), i.e. \(\sim 2\times\) the pixel scale (\(0.074^{\prime\prime}\)). Also the \(n\)-index shows a nice consistency with the ground-truth values, although at high \(z\) there is a larger scatter and outlier fraction, due to the few pixels available for the fit.
Overall, this test shows that the GaLNets can push the CSST data to the analysis of the structural parameters of galaxies with a dominant single component to high \(z\) (up to 3). Here we expect to see several millions of galaxies even at the 1-epoch depth, hence providing an unprecedented dataset for understanding the assembly of galaxies in the early phases of their formation (e.g. Conselice et al., 2005; Clauwens et al., 2018).
## 5 Summary and Conclusions
The high-quality and deep data from new generation large sky surveys from space (EUCLID, CSST, Roman) will give us the opportunity to study the evolution of the morphological mix of galaxies up to early phases of their formation, over unprecedented statistical samples.
Figure 14: Comparison between the true values and the predicted values for the low-redshift sample (\(z<0.5\), left panel) and the high-redshift sample (\(z>0.5\), right panel). In each panel, the horizontal axes are the true values and the vertical axes are the predictions from the GaLNets. The error bars are the normalized median absolute deviation (NMAD) in each bin, while the labels report the absolute errors for \(mag\) and the relative errors for the others.
Figure 15: Accuracy and precision for 1-component systems. Bias (\(\Delta p\)), outlier fraction (out. fr.), and scatter (NMAD) as functions of SNR, magnitude, and redshift in 20 bins. In each panel, the blue line is for the Sérsic index and the orange one for the effective radius. In the last row we also present the number distribution in the corresponding parameter space. The y-axis scales are the same as in Fig. 11 for comparison.
Bulge-Disk structural analysis is a crucial diagnostic to clarify the physics of galaxy transformation and evolution (e.g. Conselice et al., 2005; Costantin et al., 2021; Ferreira et al., 2022). However, this is also fundamental information for cosmological analyses based on weak lensing, as correctly accounting for the galaxy morphology will prevent strong biases in galaxy shape measurements (e.g. Kacprzak et al., 2014). Thus, the multi-component analysis of even billions of galaxies, in different optical and NIR bands, will be a challenging task for upcoming survey programs.
So far, B-D decomposition analyses have been based on traditional analytical tools (e.g., Gao and Ho, 2017; Tortorelli and Mercurio, 2023), which are computationally demanding. For this reason, even in the local universe, these studies are generally limited to small samples of thousands of galaxies (Gao et al., 2020; Lang et al., 2014; Casura et al., 2022).
In order to cope with the massive amounts of data generated by the upcoming large-scale observations, deep learning techniques have been proposed to perform parametric galaxy surface brightness analyses. Modeling of single components, either considering or neglecting the effect of the _PSF_ (e.g. Li et al., 2022 and Tuccillo et al., 2018, respectively), has been proven to be a viable alternative to standard techniques, as it provides similar accuracy to standard methods, but with \(\sim 10^{3}\times\) faster computational time.
As an evolution of the Galaxy Light profile neural Network (GaLNet) series (see Li+22), we have introduced the first deep learning tool for bulge-disk decomposition, dubbed GaLNet-BD. In particular, we have trained the GaLNet-BD to be applied to CSST single-epoch mock simulations, as a prototype of optical space observations capable of providing billion-galaxy samples. We have considered the measurement of the structural parameters derived from the 2-Sérsic profiles, specifically the effective radius, \(R_{\rm e}\), the surface brightness at the effective radius, \(I_{e}\), the axis ratio, \(q\), the Sérsic index, \(n\), and the position angle, \(PA\), for the bulge and the disk, respectively.
To produce realistic galaxies to be used for the training and testing of the new GaLNet-BD, we have followed the B/D luminosity, size, axis ratio, and redshift distributions from CosmoDC2 and expanded the \(n\)-index distribution to account for bulges with \(n>2\) and disks with \(n<2\). We have also tested the case of 1-Sérsic galaxies, to check the ability of the GaLNets to push the structural parameter analysis to higher redshifts (\(\sim 1.5\)).
We summarize below the main results of the paper:
1. The GaLNet-BD can accurately predict the magnitudes of the bulge and disk components of galaxies (\(R^{2}\gtrsim 0.87\)) with NMAD\(\sim 0.1\) and a typical precision of the order of \(\sim 20\%\), implying a robust estimate of the bulge-over-total ratio (B/T), although the accuracy of this latter parameter is poorer than that found for the magnitudes of the individual components, and we observe the presence of some systematics at low B/T. If considering a "bright" sample made of systems with \(mag_{\rm disk}\) and \(mag_{\rm bulge}\) brighter than \(r=22\), the B/T parameter shows a slightly improved accuracy and no sign of systematics.
2. High accuracies (\(R^{2}\gtrsim 0.9\)) are found for the effective radius, \(R_{\rm e}\), and the position angle, \(PA\), of both components. For the Sérsic index we have found mixed results, as that of the bulge, \(n_{\rm bulge}\), is recovered extremely well (\(R^{2}\gtrsim 0.98\)), while that of the disk, \(n_{\rm disk}\), shows a large scatter and a relative bias at the low (\(n_{\rm disk}<0.75\)) and high (\(n_{\rm disk}>1.50\)) ends of its distribution. Its overall accuracy, as measured by the \(R^{2}\), is \(\sim 0.1\).
3. The relative bias, NMAD, and outlier fraction of the main parameters (namely the bulge and disk \(n\) and \(R_{\rm e}\)) as a function of the galaxy magnitude, SNR, and redshift (see Fig. 11) show that the GaLNet-BD produces stable performances for SNR\(>40\). This is possibly a conservative lower SNR limit, which can be pushed down to increase the completeness of the structural parameter catalogs. However, all the diagnostics start to degrade at \(z\gtrsim 1\), especially the disk \(n\)-index and the bulge \(R_{\rm e}\). \(z=1\) eventually represents the current upper limit for robust estimates of the B-D parameters, at least with single-epoch CSST observations.
Figure 16: Distribution of the \(\chi^{2}\), as defined in Eq. 13, for the GaLNet-2 predictions on simulated images. The peak at \(\chi^{2}<2\) contains \(\sim\)75% of the sample.
4. We have finally trained the GaLNet-2 (Li+22), using 1-Sérsic mock galaxies and the local PSF, to demonstrate the capability of deep space observations, such as the ones expected from the CSST, to push the galaxy structural parameter analysis toward fainter galaxies and higher redshift, when considering simpler systems and a smaller number of parameters. One-component model catalogs naturally complement the ones derived from the B-D decomposition, e.g. for normal spheroids. Furthermore, 1-Sérsic profiles can be robustly used to study the size evolution of galaxies (e.g. Trujillo et al., 2007; van der Wel et al., 2014) or for quantitative morphology (e.g. using the Sérsic index, see e.g. Huertas-Company et al., 2012) up to redshifts well beyond \(z>0.5\). In this case, we have found that the GaLNet-2 correctly predicts the ground-truth values with accuracies of the order of \(R^{2}\sim 0.9\) for most of the parameters (see Table 3). Looking at the dependence of the relative bias, outlier fraction, and NMAD as a function of magnitude and redshift, we have found that they start to degrade at \(z>1\), but they remain reasonably under control up to \(z\sim 1.5\) or slightly beyond, and down to \(r\sim 24\).
To conclude, we have provided the first deep learning tool capable of efficient and accurate bulge-disk decomposition of galaxies from optical space observations, specifically optimized for the upcoming CSST observations. In the era of all-sky surveys, the big advantage of deep learning tools is their easy applicability and scalability, combined with an enormous gain in computation time. For instance, after a fairly short training time (\(\sim\)60 minutes on an NVIDIA GeForce RTX 3080 Laptop Graphics Processing Unit, GPU), the GaLNet-BD needs only 140 seconds for single-component and up to 200 seconds for two-component models to fully analyse \(\sim 50\)k galaxies. This means that, with a limited number of commercial GPUs, it is possible to analyse a 1-billion-galaxy sample in one band in a few days. If one combines data from optical and near-infrared bands, for instance from CSST and EUCLID, 1 billion galaxies will become doable in one or two months.
The variety of opportunities offered by this capability is enormous. Besides galaxy formation and evolution science, this will allow us to perform a rapid classification of host systems for spectroscopic follow-ups and/or new discoveries (e.g. transients, gravitational lensing, etc.) that need morphology "and" a full spectral energy distribution (SED) based on multi-band surface photometry, including a quick characterization of the galaxy stellar populations (e.g. age, star formation, etc.). This would be of great benefit for the whole community working on these survey facilities. Science-wise, the study of the morphological mix of galaxies from \(z\sim 3\), over an unprecedented sample of galaxies down to \(r\sim 23\), will put stringent constraints upon the evidence that is emerging from the study of even higher-redshift galaxies with the James Webb Space Telescope (e.g. Linzer and Steinhardt, 2020; Kartaltepe et al., 2023).
## Acknowledgements
NRN acknowledges financial support from the National Science Foundation of China, Research Fund for International Senior Scientists (grant n. 12150710511), from the research grant from China Manned Space project n. CMS-CSST-2021-A01 and A06 and from the Guangdong provincial fund "Understanding structural evolution of galaxies with machine learning". NRN thanks the students of the course "Reading Astronomy" at SYSU, for useful suggestions and criticism on the first manuscript draft. RL acknowledges the support of the National Nature Science Foundation of China (No. 12022306) and the science research grants from the China Manned Space Project (CMS-CSST-2021-A01). CT acknowledges funding from INAF Research Grant 2022 LEMON. LCH was supported by the National Science Foundation of China (11721303, 11991052, 12011540375, 12233001), the National Key R&D Program of China (2022YFF0503401), and the China Manned Space Project (CMS-CSST-2021-A04, CMS-CSST-2021-A06). We thank Cheng Li (Tsinghua University) for useful discussions and suggestions.
|
2304.01325 | Deep learning neural network for approaching Schrödinger problems with
arbitrary two-dimensional confinement | This article presents an approach to the two-dimensional Schr\"odinger
equation based on automatic learning methods with neural networks. It is
intended to determine the ground state of a particle confined in any
two-dimensional potential, starting from the knowledge of the solutions to a
large number of arbitrary sample problems. A network architecture with two
hidden layers is proposed to predict the wave function and energy of the ground
state. Several accuracy indicators are proposed for validating the estimates
provided by the neural network. The testing of the trained network is done by
applying it to a large set of confinement potentials different from those used
in the learning process. Some particular cases with symmetrical potentials are
solved as concrete examples, and a good network prediction accuracy is found. | Adrian Radu, Carlos A. Duque | 2023-04-03T19:48:33Z | http://arxiv.org/abs/2304.01325v2 | Deep learning neural network for approaching Schrodinger problems with arbitrary two-dimensional confinement
###### Abstract
This article presents an approach to the two-dimensional Schrodinger equation based on automatic learning methods with neural networks. It is intended to determine the ground state of a particle confined in any two-dimensional potential, starting from the knowledge of the solutions to a large number of arbitrary sample problems. A network architecture with two hidden layers is proposed to predict the wave function and energy of the ground state. Several accuracy indicators are proposed for validating the estimates provided by the neural network. The testing of the trained network is done by applying it to a large set of confinement potentials different from those used in the learning process. Some particular cases with symmetrical potentials are solved as concrete examples, and a good network prediction accuracy is found.
Artificial intelligence, neural network, deep learning, stochastic gradient descent, Schrodinger equation, quantum well.
## 1 Introduction
The speed of computerized data processing and the ability to analyze large datasets have increased exponentially as a direct consequence of the rapid development of processors and computing techniques. This evolution proved important not only from a quantitative perspective but also led to a paradigm shift in terms of methods and algorithms for solving difficult problems. In recent years, more and more categories of concrete technical problems but also of fundamental interest problems have been addressed by new artificial intelligence methods, such as machine learning (ML) [1]. A concrete way in which this method can be implemented is with the help of neural networks (NNs), logical structures inspired by the functioning of the biological nervous system. With this approach, the problems of automatic classification or recognition of shapes that are almost unapproachable by classical algorithms can be solved. However, various more abstract problems from fundamental disciplines have proved to be approachable from this new perspective. Using NNs to solve problems that already admit more or less sophisticated conventional solutions can also be instructive and present the potential for further extensions to more complicated contexts. One such relatively recent challenge of interest in nanomaterials science and quantum chemistry is to solve the Schrodinger Equation (SE) in one or more dimensions using artificial intelligence methods. Some advantages of using ML methods compared to existing numerical methods for quantum physics are: obtaining much faster estimates of the energy of particles
confined in nontrivial potentials, better approaching many-body problems, and predicting phase transitions and properties of quantum systems in inaccessible physical conditions.
ML predictive approaches, generally built upon statistical learning theory, represent a different paradigm from the classical methods for solving SE, although they can be based on their results in the training stage. Exact analytical solutions of SE for nanostructures can be obtained in very few simple cases and are therefore of little practical relevance [2]. Approximate quasi-analytical solutions can be obtained using variational techniques [3] or perturbative methods [4], which have been extensively studied in the last century [5]. However, they are limited in accuracy and impractical for physical systems with nontrivial geometries, dimensionalities, and interactions. Asymptotic iteration methods can be an alternative for solving 1-D Schrodinger-type problems [6,7]. Meshless methods and diagonalization techniques provide good results; however, they are more demanding in terms of computational effort [8,9]. In parallel with the accelerated development of computers, numerical methods based on spatial discretizations and finite-difference approximations of SE have been increasingly used [10-12]. Shooting methods use iterative solving of finite-difference equations with discrete energy values in a search interval and the selection of those that best meet the boundary conditions [13,14]. For all SE dimensionalities, an accurate and versatile mesh-based approach is the finite element method, which uses a weak formulation of the equation and involves large algebraic systems [15-20].
So far, there have been a few studies dealing with artificial intelligence methods for solving SE, the most important of which are mentioned below.
Lagaris _et al._ demonstrated how artificial NNs can be used to solve partial differential equations [21] and applied the concepts for solving SE in several cases with different dimensionalities [22]. Sugawara proposed a new approach for solving one-dimensional (1-D) SE by combining a genetic algorithm and an NN [23]. In a remarkable study, Mills _et al._ introduced a deep learning method for solving two-dimensional (2-D) SE by calculating the ground and first excited states of an electron in different types of confining potentials (CPs) [24]. Vargas-Hernandez _et al._ presented an ML method based on Gaussian process regression to predict sharp transitions in a Hamiltonian phase diagram by extrapolating the properties of quantum systems [25]. Han _et al._ solved many-electron SE using a deep NN with wave function (WF) optimization through a Monte Carlo approach [26]. Using NNs, Mutuk addressed the eigenvalue problem of a 1-D anharmonic oscillator [27]. Manzhos reviewed recent ML techniques used to solve electronic and vibrational SEs, which are typically related to computational chemistry [28]. Hermann _et al._ proposed PauliNet, which is a deep NN representation of electronic WFs for molecules with up to 30 electrons, and proved that it can outperform variational quantum chemistry models [29]. Pfau _et al._ introduced a novel deep learning architecture, the Fermionic NN, to approach many-electron SE [30]. Li _et al._ used an NN model to solve SE by computing multiple excited states [31]. Grubisic _et al._ used a dense deep NN and a fully convolutional NN to approximate eigenmodes localized by a CP [32]. Yuksel _et al._ applied multilayer perceptron architectures to predict the ground-state binding energies of atomic nuclei [33]. In a study by da Silva Macedo _et al._ an NN was trained to predict the energy levels and energy-dependent masses as nonparabolic properties of semiconductor heterostructures [34]. The learning ability of a physics-informed proper orthogonal decomposition-Galerkin simulation methodology for QD structures was investigated by Veresko and Cheng [35]. In a recently published paper, we used two different neural architectures to approach 1-D SE in quantum wells (QWs) with arbitrary CPs [36]. The results were
compared and discussed using accuracy indicators and represent the starting point of the 2-D generalization addressed in the present study.
Beyond the theoretical interest, solving a 2-D confinement problem can be useful in practice, mainly for quantum wires [37, 38], highly oblate or flat 3D quantum dots [39, 40], and 2-D quantum dots [41, 42]. In the first case, quantum confinement occurs along the transverse directions of the wire, which is where the 2-D character of the SE comes from [43]. The 2-D SE energy solutions under the effective mass approximation give the subband edges in the quantum wires. In the second case, the 3-D SE specific to a quantum dot can be adiabatically decoupled into a 1-D problem of strong confinement along the small size of the nanostructure and a transverse 2-D problem with a modified potential [44]. In the third case, calculations of the electronic properties are usually performed by expressing the Hamiltonian in a basis function set of atomic orbitals or by using the density functional theory [45, 46].
In this study, we propose a deep NN with two hidden layers (HLs) and thousands of subnets to estimate the ground state energy and wave function of a particle confined in an arbitrary 2-D QW. The NN was trained using a set of CPs, energies, and WFs previously generated using the finite element method (FEM). The NN can be understood as a set of separately trained subnets for each element of the position discretization. This makes the training process more transparent and allows for parallelization. Several accuracy indicators have been proposed for the NN testing. The subnets are trained on a large dataset (DS) using the stochastic gradient descent (SGD) method with variable data batches, and the training is validated with respect to a second similar DS. The network was then tested with a third DS, prepared using a different algorithm. In addition, several cases of analytical CP have been solved and discussed.
The contents of the work are as follows: Section 2 contains the statement of the problem and the mathematical principles of our approach; Section 3 presents in detail the obtaining of data samples, the training of the NN, the validation of the results obtained, and their testing based on several accuracy indicators; general conclusions are given in Section 4; examples and technical details concerning the potential samples of the DSs are provided in the Appendix.
## 2 Methods
### Two-dimensional Schrodinger equation and sample data
If a constant effective mass approach is used, the time-independent SE of a particle confined in a 2-D QW is

\[-\frac{\hbar^{2}}{2m^{*}}\left[\frac{\partial^{2}\varphi(x,y)}{\partial x^{2}}+\frac{\partial^{2}\varphi(x,y)}{\partial y^{2}}\right]+V(x,y)\varphi(x,y)=E\varphi(x,y). \tag{1}\]
The potential energy
\[V(x,y)=\begin{cases}V_{i}(x,y),\ \sqrt{x^{2}+y^{2}}<R_{o}\\ V_{o},\ \sqrt{x^{2}+y^{2}}\geq R_{o}\end{cases} \tag{2}\]
is defined such that \(0\leq V_{i}(x,y)\leq V_{o}\) and the discontinuity domain of \(V_{i}(x,y)\) has a Lebesgue measure of zero. \(V_{o}\) is the maximum "depth" of the QW, and \(R_{o}\) is the outermost radius of the confinement zone, that is, of the circle including the subdomain of \(\mathbb{R}^{2}\) where \(V_{i}(x,y)<V_{o}\).
We denote by \(R\) the "effective radius" of the QW, defined as the radius of the circle circumscribing the same area as the confinement zone. With the notations \(\xi\equiv x/R,\ \eta\equiv y/R,\ \ \psi(\xi,\eta)\equiv\varphi(R\xi,R\eta),\ \ v(\xi,\eta)\equiv
\(V(R\xi,R\eta)/V_{o}\), \(e\equiv E/V_{o}\), and the dimensionless scale factor \(\mu\equiv 2m^{*}V_{o}R^{2}/\hbar^{2}\), Eq. (1) can be expressed in the following standardized form:
\[-\frac{1}{\mu}\left[\frac{\partial^{2}\psi(\xi,\eta)}{\partial\xi^{2}}+\frac{\partial^{2}\psi(\xi,\eta)}{\partial\eta^{2}}\right]+v(\xi,\eta)\psi(\xi,\eta)=e\psi(\xi,\eta), \tag{3}\]
where \(0\leq v(\xi,\eta)\leq 1\) and \(e\in(0,1)\). The dimensionless quantity \(e\) will continue to be referred to as energy. The real WF can be range-normalized so that \(\max|\psi(\xi,\eta)|=1\).
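As a quick numerical illustration of the scale factor, \(\mu\) is of the order of a few tens for a nanometric QW; the parameter values below are illustrative GaAs-like choices, not taken from this work.

```python
from scipy import constants as c

# Illustrative GaAs-like values (assumed for this example only):
m_eff = 0.067 * c.m_e      # effective mass
V_o = 0.3 * c.e            # well depth, 0.3 eV in joules
R = 10e-9                  # effective radius, 10 nm
mu = 2 * m_eff * V_o * R**2 / c.hbar**2
print(f"mu = {mu:.1f}")    # ~53
```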
We denote by \(\{v_{\sigma}\}_{1\leq\sigma\leq S}\colon\mathbb{R}^{2}\to[0,1]\) a set of \(S\) dimensionless confinement functions defined in such a way as to ensure the existence of the ground bound state \(\{\psi_{\sigma},e_{\sigma}\}\) for each of them. In any case of a bound state in the QW, the WF decreases exponentially towards zero outside the geometric confinement domain, such that \(\lim_{\rho(\xi,\eta)\to\infty}\psi_{\sigma}=0\), where \(\rho(\xi,\eta)\equiv\sqrt{\xi^{2}+\eta^{2}}\) is the radial position. An FEM numerical solver using the Dirichlet boundary condition at \(\rho(\xi,\eta)=r_{\text{b}}\) may be used to approximate the ground state WFs \(\{\psi_{\sigma}\}_{1\leq\sigma\leq S}\) and energies \(\{e_{\sigma}\}_{1\leq\sigma\leq S}\). \(r_{\text{b}}\equiv R_{b}/R\) denotes the circular boundary radius, which is sufficiently large with respect to \(r_{\text{o}}\equiv R_{o}/R\) that the calculation accuracy is satisfactory. The FEM uses a mesh of nodes \(\Xi_{\text{N}}\equiv\{(\xi_{n},\eta_{n})\}_{1\leq n\leq N}\) densely distributed inside a circle of radius \(r_{\text{b}}\). In the following, we use the notation \(\psi_{\sigma}(\Xi_{\text{N}})\equiv[\psi_{\sigma}(\xi_{n},\eta_{n})]_{1\leq n\leq N}^{t}\equiv[\psi_{\sigma}^{n}]_{1\leq n\leq N}^{t}\) for the column vector of the FEM numerical solution corresponding to the sample CP \(v_{\sigma}(\Xi_{\text{N}})\equiv[v_{\sigma}(\xi_{n},\eta_{n})]_{1\leq n\leq N}^{t}\equiv[v_{\sigma}^{n}]_{1\leq n\leq N}^{t}\). We call a "sample-problem", or "sample" for short, the set consisting of a CP function \(v_{\sigma}\), the WF of the ground level \(\psi_{\sigma}\), and the corresponding energy of the particle \(e_{\sigma}\), as calculated by the FEM. A collection of samples is called a "dataset". At least two different DSs \(\{v_{\sigma};\psi_{\sigma};e_{\sigma}\}_{1\leq\sigma\leq S}\) defined in this way are necessary: one for training and validating the NN, and the other for testing it.
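The FEM solver itself is not detailed here, but a compact finite-difference stand-in conveys how the sample data \(\{v_{\sigma};\psi_{\sigma};e_{\sigma}\}\) can be generated: the sketch below discretizes Eq. (3) on a square grid spanning \([-r_{\text{b}},r_{\text{b}}]^{2}\) with homogeneous Dirichlet conditions at the grid edges (the square domain, replacing the circular boundary, and the grid size are simplifying assumptions of this illustration).

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

def ground_state(v_grid, mu, r_b):
    """Ground state (e, psi) of Eq. (3) for a potential v_grid sampled on an
    n x n grid of interior points spanning [-r_b, r_b]^2, with psi = 0 on the
    boundary. Returns the energy and the range-normalized wave function."""
    n = v_grid.shape[0]
    h = 2.0 * r_b / (n + 1)                              # grid spacing
    d2 = sparse.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
                      [0, -1, 1]) / h**2                 # 1-D operator -d^2/dx^2
    eye = sparse.identity(n)
    lap = sparse.kron(d2, eye) + sparse.kron(eye, d2)    # 2-D negated Laplacian
    H = lap / mu + sparse.diags(v_grid.ravel())          # Hamiltonian of Eq. (3)
    e, psi = eigsh(H.tocsc(), k=1, which='SA')           # lowest eigenpair
    psi = psi[:, 0].reshape(n, n)
    return e[0], psi / np.abs(psi).max()
```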
### Neural networks architecture and underlying functions
Since the mesh used by the FEM can be extremely fine, the data sampling used for the NN may be done in a subset \(\Xi_{\text{M}}\equiv\{(\xi_{i},\eta_{i})\}_{1\leq i\leq M}\) of the mesh, provided that it sufficiently covers the bounded domain on which the standardized SE is defined. The same mesh subset was used for sampling the input data (values of the CP functions) and estimating/predicting the output data (values of the ground state WFs). A deep NN with two HLs is proposed, as shown in Fig. 1. The mesh subset determines the number \(M\) of neural nodes in both the input and output layers. Figure 1(a) shows that the NN can be decomposed into \(M\) similar separate subnets, which allows easy parallelization of the calculations. Each subnet receives the same data input from all nodes in the input layer (IL) and has a single output node, which is the estimate of the WF at a single mesh point. The subnets are identical in their internal structure but differ in functionality: their neurons are not equivalent after training the network. Figure 1(b) shows the neural architecture of a single subnet enclosing two HLs, each with \(P\) neurons. In a subnet, all nodes of a layer are interconnected with all the nodes of the previous layer, and each neural connection is mathematically coded using a weight coefficient. The number of neural connections in a subnet is \(P(M+P+1)\) such that the total number of weights in the NN is \(MP(M+P+1)\).
The activation function of the neurons of the output layer (OL) is the widely used standard logistic sigmoid (sigm), whose codomain fits the interval into which the values of the range-normalized WFs fall. The neurons of the HLs have the hyperbolic tangent (tanh) as an activation function. We selected these related functions because they are continuously differentiable in \(\mathbb{R}\) and have simple derivatives.
Because the functioning of the entire NN can be reduced to the mathematics of its subnets, we explain the flow of data in a single generic subnet q. In the following expressions, all matrix operations are element-wise, except for the matrix product, explicitly denoted by "\(\cdot\)".
Given a particular CP function \(v_{\sigma}\), the data sent by HL1 to a neuron in HL2 are
Figure 1: (a) Neural network composed of \(M\) independent subnets; both input and output layers have \(M\) nodes; (b) Subnet with \(M\) nodes in the input layer, \(P\) nodes in each of the hidden layers, and one output node.
\[(h_{1})_{\sigma}^{q}=H_{1}\tanh[\Lambda_{1}^{\text{q}}\cdot v_{\sigma}(\Xi_{\text{M}})]\equiv H_{1}\frac{\exp[2\Lambda_{1}^{\text{q}}\cdot v_{\sigma}(\Xi_{\text{M}})]-1}{\exp[2\Lambda_{1}^{\text{q}}\cdot v_{\sigma}(\Xi_{\text{M}})]+1}, \tag{4}\]
where \((h_{1})_{\sigma}^{q}\) is a \(P\times 1\) column vector, \(H_{1}\) is a scale coefficient, \(\Lambda_{1}^{\text{q}}\) is the \(P\times M\) weight matrix of HL1 of subnet q, and \(v_{\sigma}(\Xi_{\text{M}})\) denotes the \(M\times 1\) column vector of the sample CP data in the IL, that is, \([v_{\sigma}(\xi_{i},\eta_{i})]_{1\leq i\leq M}^{t}\equiv[v_{\sigma}^{i}]_{1\leq i\leq M}^{t}\).
The data sent by HL2 to the output neuron are
\[(h_{2})_{\sigma}^{q}=H_{2}\tanh[\Lambda_{2}^{\text{q}}\cdot(h_{1})_{\sigma}^{ q}]\equiv H_{2}\frac{\exp[2\Lambda_{2}^{\text{q}}\cdot(h_{1})_{\sigma}^{q}]-1}{ \exp[2\Lambda_{2}^{\text{q}}\cdot(h_{1})_{\sigma}^{q}]+1}, \tag{5}\]
where \((h_{2})_{\sigma}^{q}\) is a \(P\times 1\) column vector, \(H_{2}\) is a scale coefficient, and \(\Lambda_{2}^{\text{q}}\) is the \(P\times P\) weight matrix of HL2 of the subnet q.
The estimated WF value at node q of the submesh \(\Xi_{\text{M}}\), for the CP \(v_{\sigma}\), that is, the subunitary output of the subnet q, is
\[\bar{\psi}_{\sigma}(\xi_{q},\eta_{q})\equiv\bar{\psi}_{\sigma}^{q}=\text{sigm }[\Lambda_{\text{o}}^{\text{q}}\cdot(h_{2})_{\sigma}^{q}]\equiv\frac{1}{1+ \exp[-\Lambda_{\text{o}}^{\text{q}}\cdot(h_{2})_{\sigma}^{q}]}, \tag{6}\]
where \(\Lambda_{\text{o}}^{\text{q}}\) is the \(1\times P\) weight-row vector of the OL of the subnet q.
Combining Eqs. (4-6) into a compact expression, we get
\[\bar{\psi}_{\sigma}^{q}=\text{sigm}[H_{2}\Lambda_{\text{o}}^{\text{q}}\cdot \tanh[H_{1}\Lambda_{2}^{\text{q}}\cdot\tanh[\Lambda_{1}^{\text{q}}\cdot v_{ \sigma}(\Xi_{\text{M}})]]]. \tag{7a}\]
An extra single subnet can be used for ground-level energy estimation. Formally replacing \(\bar{\psi}_{\sigma}^{q}\) with \(\bar{\epsilon}_{\sigma}\) and index q by e, we obtain
\[\bar{\epsilon}_{\sigma}=\text{sigm}[H_{2}\Lambda_{\text{o}}^{\text{e}}\cdot \tanh[H_{1}\Lambda_{2}^{\text{e}}\cdot\tanh[\Lambda_{1}^{\text{e}}\cdot v_{ \sigma}(\Xi_{\text{M}})]]]. \tag{7b}\]
In the following, we use the notation \(\bar{\psi}_{\sigma}(\Xi_{\text{M}})\equiv\left[\bar{\psi}_{\sigma}^{q}\right]_{1\leq q\leq M}^{t}\), i.e., the neural estimate of the FEM solution restricted to the mesh subset, \(\psi_{\sigma}(\Xi_{\text{M}})\).
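As a compact illustration of Eqs. (4)-(7a), a minimal NumPy sketch of the forward pass of one subnet is given below. The scale coefficients \(H_{1}\) and \(H_{2}\) are not specified numerically in the text, so unit values are assumed here; the dimensions \(M=2764\) and \(P=53\) are those used later in the paper.

```python
import numpy as np

def subnet_forward(v, L1, L2, Lo, H1=1.0, H2=1.0):
    """Forward pass of one subnet q, Eqs. (4)-(7a).
    v  : (M,)   confinement-potential samples on the mesh subset
    L1 : (P, M) weight matrix of hidden layer 1
    L2 : (P, P) weight matrix of hidden layer 2
    Lo : (P,)   weight row vector of the output layer
    Returns the estimated wave-function value at node q, in (0, 1)."""
    h1 = H1 * np.tanh(L1 @ v)                   # Eq. (4)
    h2 = H2 * np.tanh(L2 @ h1)                  # Eq. (5)
    return 1.0 / (1.0 + np.exp(-(Lo @ h2)))     # Eq. (6), logistic sigmoid

# Illustrative dimensions from the paper: M = 2764 input nodes, P = 53 neurons.
M, P = 2764, 53
rng = np.random.default_rng(0)
v = rng.random(M)
psi_q = subnet_forward(v,
                       rng.normal(0, 1 / 100, (P, M)),
                       rng.normal(0, 1 / 75, (P, P)),
                       rng.normal(0, 1 / 50, P))
```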
### Loss function, weight optimization and network training
Training the NN involves approaching the optimal values of the weight matrices such that the network estimates are as close as possible to the solutions expected from the training DS; that is, the loss function is minimized. Optimization is performed iteratively using the gradient descent (GD) method, starting from the initial values \(\left\{\Lambda_{1}^{q}(0);\Lambda_{2}^{q}(0);\Lambda_{\text{o}}^{q}(0)\right\}\), \(q\in\{1,2,...,M\}\). The updated weight matrices and the neural estimate of the solution after \(\tau\) iterations are denoted by \(\left\{\Lambda_{1}^{q}(\tau);\Lambda_{2}^{q}(\tau);\Lambda_{\text{o}}^{q}(\tau)\right\}\), \(q\in\{1,2,...,M\}\) and \(\left[\bar{\psi}_{\sigma}(\Xi_{\text{M}})\right](\tau)\), respectively.
In this study, the global NN loss function corresponding to the training DS is defined as
\[\mathcal{L}(\tau)\equiv\sum_{\sigma=1}^{S}\left\|\left[\bar{\psi}_{\sigma}(\Xi_{\text{M}})\right](\tau)-\psi_{\sigma}(\Xi_{\text{M}})\right\|^{2}\equiv\sum_{q=1}^{M}L^{q}(\tau), \tag{8}\]
where \(\left\|\cdot\right\|\) is the Euclidean norm on \(\mathbb{R}^{M}\) and \(L^{q}(\tau)\) is the local loss function of subnet q:
\[L^{q}(\tau)=\sum_{\sigma=1}^{S}\left[\bar{\psi}_{\sigma}^{q}(\tau)-\psi_{ \sigma}^{q}\right]^{2}. \tag{9a}\]
Additionally, the energy loss function can be defined in a similar way
\[L^{e}(\tau)=\sum_{\sigma=1}^{S}[\bar{\epsilon}_{\sigma}(\tau)-e_{\sigma}]^{2}, \tag{9b}\]
where \(\bar{\epsilon}_{\sigma}(\tau)\) is the neural estimate of the energy after \(\tau\) iterations and \(e_{\sigma}\) is the expected (FEM) energy.
Because each subnet is trained independently, the loss function can be minimized separately for each output node q, as presented below. The gradient components of the loss function \(L^{q}(\tau)\) with respect to the weights are the \(P\times M\), \(P\times P\), and \(1\times P\) matrices, respectively:
\[\nabla L^{q}(\tau)\equiv\left\{\frac{\partial L^{q}}{\partial\Lambda_{1}^{q}}(\tau);\ \frac{\partial L^{q}}{\partial\Lambda_{2}^{q}}(\tau);\ \frac{\partial L^{q}}{\partial\Lambda_{\text{o}}^{q}}(\tau)\right\}. \tag{10a}\]
The gradient of the energy loss function \(L^{e}(\tau)\) is
\[\nabla L^{e}(\tau)\equiv\left\{\frac{\partial L^{e}}{\partial\Lambda_{1}^{e}}(\tau);\ \frac{\partial L^{e}}{\partial\Lambda_{2}^{e}}(\tau);\ \frac{\partial L^{e}}{\partial\Lambda_{\text{o}}^{e}}(\tau)\right\}. \tag{10b}\]
The starting values of all the weight coefficients are randomly chosen and determine the initial values \(L^{q}(0)\) and \(L^{e}(0)\) of the subnet loss functions. The weight matrices and, implicitly, the loss functions are iteratively updated by a first-order approximation to minimize the losses in Eqs. (9a-9b), that is, the GD method:
\[\left\{\Lambda_{1}^{q}(\tau+1);\ \Lambda_{2}^{q}(\tau+1);\ \Lambda_{\text{o}}^{q}(\tau+1)\right\}=\left\{\Lambda_{1}^{q}(\tau);\ \Lambda_{2}^{q}(\tau);\ \Lambda_{\text{o}}^{q}(\tau)\right\}-\lambda\nabla L^{q}(\tau), \tag{11a}\]
\[\left\{\Lambda_{1}^{e}(\tau+1);\ \Lambda_{2}^{e}(\tau+1);\ \Lambda_{\text{o}}^{e}(\tau+1)\right\}=\left\{\Lambda_{1}^{e}(\tau);\ \Lambda_{2}^{e}(\tau);\ \Lambda_{\text{o}}^{e}(\tau)\right\}-\lambda\nabla L^{e}(\tau). \tag{11b}\]
Here, \(\lambda\) is the learning rate and \(\tau\) is an integer index such that \(0\leq\tau<T\), where \(T\) denotes the maximum number of iterations. In principle, if the loss function decreases monotonically, \(T\) can be set by a criterion on the maximum allowed variation of the loss function from one iteration to the next. Because working with DSs of hundreds of thousands of samples is, in practice, a very heavy computational burden, one may opt for the stochastic gradient descent (SGD) variant of the GD method. This method replaces the gradient computed from the complete DS of \(S\) samples with an estimate computed from a randomly selected batch of \(S^{\prime}\) samples (\(S^{\prime}\ll S\)), which can be totally or partially changed at each iteration. SGD achieves faster iterations; however, convergence is slower and fluctuating. Because the loss function exhibits fluctuations superimposed on the average decreasing trend, \(T\) should be set by the average behavior of the loss function and/or by the available computing resources.
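For concreteness, the following NumPy sketch implements one SGD iteration for a single subnet: the forward pass of Eqs. (4)-(6), backpropagation of the squared-error loss of Eq. (9a), and the update of Eq. (11a). Unit scale coefficients are assumed; the batch construction and the learning-rate schedule described later would wrap around this step.

```python
import numpy as np

def sgd_step(V, y, L1, L2, Lo, lam, H1=1.0, H2=1.0):
    """One SGD iteration for a single subnet, Eqs. (8)-(11a).
    V : (B, M) batch of potential samples; y : (B,) target WF values."""
    a1 = V @ L1.T                                   # (B, P)
    h1 = H1 * np.tanh(a1)
    a2 = h1 @ L2.T                                  # (B, P)
    h2 = H2 * np.tanh(a2)
    yhat = 1.0 / (1.0 + np.exp(-(h2 @ Lo)))         # (B,)
    # Backpropagation of the squared-error loss, Eq. (9a)
    dz = 2.0 * (yhat - y) * yhat * (1.0 - yhat)     # (B,)
    gLo = dz @ h2                                   # (P,)
    dh2 = np.outer(dz, Lo)                          # (B, P)
    da2 = dh2 * H2 * (1.0 - np.tanh(a2) ** 2)
    gL2 = da2.T @ h1                                # (P, P)
    dh1 = da2 @ L2                                  # (B, P)
    da1 = dh1 * H1 * (1.0 - np.tanh(a1) ** 2)
    gL1 = da1.T @ V                                 # (P, M)
    # Gradient-descent update, Eq. (11a)
    return L1 - lam * gL1, L2 - lam * gL2, Lo - lam * gLo
```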
### Testing the network. Accuracy indicators
Several quantitative indicators for testing the trained NNs are proposed, based on the deviations from the expected values of the WF, the predicted energy, and the average position of the particle. The indicators are calculated for each sample \(\sigma\) in a DS (\(1\leq\sigma\leq S\)). Network efficiency is assessed by analyzing and comparing the distributions of these indicators over the DSs involved.
The main accuracy indicator is the relative difference between the NN-estimated WF \(\bar{\psi}_{\sigma}\) and the FEM-calculated solution \(\psi_{\sigma}\), that is, the WF relative deviation
\[\varepsilon_{\sigma}\equiv\frac{\|\bar{\psi}_{\sigma}(\Xi_{\text{M}})-\psi_{\sigma}(\Xi_{\text{M}})\|}{\|\psi_{\sigma}(\Xi_{\text{M}})\|}. \tag{12}\]
The spatial overlap of the exact and estimated WFs may be another indicator of the NN accuracy. The estimated WF relative spatial overlap is defined as
\[\omega_{\sigma}\equiv\frac{\left[\sum_{q=1}^{M}\bar{\psi}_{\sigma}^{q}\,\psi_{\sigma}^{q}\right]^{2}}{\left\|\bar{\psi}_{\sigma}(\Xi_{\text{M}})\right\|^{2}\left\|\psi_{\sigma}(\Xi_{\text{M}})\right\|^{2}}. \tag{13}\]
The NN-estimated average positions and FEM-calculated average positions are compared by calculating the deviations of the average \(\xi\) and \(\eta\) positions, respectively:
\[\Delta(\xi)_{\sigma}\equiv\frac{\sum_{q=1}^{M}\xi_{q}\,(\bar{\psi}_{\sigma}^{q})^{2}}{\left\|\bar{\psi}_{\sigma}(\Xi_{\text{M}})\right\|^{2}}-\frac{\sum_{q=1}^{M}\xi_{q}\,(\psi_{\sigma}^{q})^{2}}{\left\|\psi_{\sigma}(\Xi_{\text{M}})\right\|^{2}}, \tag{14a}\]
\[\Delta(\eta)_{\sigma}\equiv\frac{\sum_{q=1}^{M}\eta_{q}\,(\bar{\psi}_{\sigma}^{q})^{2}}{\left\|\bar{\psi}_{\sigma}(\Xi_{\text{M}})\right\|^{2}}-\frac{\sum_{q=1}^{M}\eta_{q}\,(\psi_{\sigma}^{q})^{2}}{\left\|\psi_{\sigma}(\Xi_{\text{M}})\right\|^{2}}. \tag{14b}\]
As the estimated WF approaches the exact WF, the limits of \(\varepsilon_{\sigma}\), \(\omega_{\sigma}\), \(\Delta(\xi)_{\sigma}\) and \(\Delta(\eta)_{\sigma}\) are 0, 1, 0 and 0, respectively.
The deviation of the NN-estimated energy with respect to the FEM-calculated value is:
\[\Delta e_{\sigma}\equiv\bar{\epsilon}_{\sigma}-e_{\sigma}, \tag{15a}\]
or, in a relative expression:
\[\kappa_{\sigma}\equiv\frac{\bar{\epsilon}_{\sigma}}{e_{\sigma}}-1. \tag{15b}\]
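A NumPy sketch of the indicators in Eqs. (12)-(15b) follows; it assumes the overlap of Eq. (13) is the squared inner product normalized by the squared norms, consistent with its limit of 1 for identical functions.

```python
import numpy as np

def accuracy_indicators(psi_hat, psi, xi, eta, e_hat, e):
    """Indicators of Eqs. (12)-(15b) for one sample.
    psi_hat, psi : (M,) estimated and FEM wave functions on the mesh subset
    xi, eta      : (M,) node coordinates; e_hat, e : estimated/FEM energies."""
    eps = np.linalg.norm(psi_hat - psi) / np.linalg.norm(psi)            # Eq. (12)
    omega = (psi_hat @ psi) ** 2 / ((psi_hat @ psi_hat) * (psi @ psi))   # Eq. (13)
    d_xi = xi @ psi_hat**2 / (psi_hat @ psi_hat) - xi @ psi**2 / (psi @ psi)    # Eq. (14a)
    d_eta = eta @ psi_hat**2 / (psi_hat @ psi_hat) - eta @ psi**2 / (psi @ psi)  # Eq. (14b)
    return eps, omega, d_xi, d_eta, e_hat - e, e_hat / e - 1.0           # Eqs. (15a-b)
```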
## 3 Calculations and Results
In the following, we use the abbreviation ksamp (kilosample) to denote 1000 samples. To train, validate, and test the NN, three DSs have been prepared (hereinafter referred to as DS1, DS2, and DS3), each of them containing \(S=100\) ksamp. DS1 is used for training, DS2 for validation, and DS3 is reserved for testing. Various algorithms have been implemented to generate arbitrary CPs for DSs, as explained in the Appendix. DS1 and DS2 are distinct but they were prepared using the same randomization method, whereas DS3 was prepared using a different algorithm. When we refer to the output provided by the NN, we usually use the word "estimate" but if we want to emphasize that the input was not the training DS, we may use the word "prediction" instead.
### Two-dimensional finite element calculation and sample preparation
COMSOL Multiphysics® FEM software was used to generate the samples [47]. To compute the expected ground state WFs and energies, the FEM model was built to solve the SE with \(r_b=5\) and \(\mu=26\) for all CP functions in the DSs. A very small value of the border radius (\(r_b<2r_o\)) may lead to an artificial increase in the confinement, whereas an excessively large value (for a prefixed number of nodes) leads to a significant decrease in the density of nodes in the confinement area, which in turn negatively influences the accuracy of the calculation. The reference scale factor \(\mu\) was chosen in accordance with the typical values of real confining systems, so that several bound energy levels exist for all CPs [37, 48]. It is noteworthy that the scale factor introduced in Eq. (3) is proportional to what we could call the "confining volume" \(\pi V_{o}R^{2}\), that is, a cylindrical pseudovolume with the real confining base area \(\pi R^{2}\) and the energetic "height" \(V_{o}\). Intuitively, the NN is trained on a set of problems with confining volumes distributed around the chosen reference value. The variance is introduced by the very algorithm that generates the CPs of the DS, by randomly changing the confining perimeter and the variations of the potential inside the confining zone. The
numerical choice of \(\mu\) therefore does not imply a drastic particularization, but rather offers a plausible reference value in relation to the physical systems of interest. For example, for cylindrical confinement in semiconductor wires based on GaAs/AlGaAs, the value of \(\mu\) defined previously corresponds to a realistic confinement radius of approximately \(8\,\mathrm{nm}\). For any given material, a scale factor that is too small corresponds to an extreme confinement regime, which is difficult to achieve in real semiconductor structures. In addition, for the large values of the kinetic term of the Hamiltonian reached in very small semiconductor structures, the effective mass approximation is questionable [2]. Excessively large values of \(\mu\) correspond to systems with a high density of energy levels, in which the quantization fades and the interest in solving the SE is limited. We define a user-controlled mesh of nodes \(\Xi_{\mathrm{N}}\equiv\{(\xi_{n},\eta_{n})\}_{1\leq n\leq N}\), unevenly distributed inside a circle of radius \(r_{b}\). Nodes are relatively sparse close to the boundary, densely distributed in the vicinity of the QW perimeter, and very dense in the confining zone. When solving a 2-D differential problem, the FEM software considers the standard physics-controlled mesh with approximately 13000 nodes to be "extremely fine" [47]. Indeed, for typical CPs, the spatial element size of this mesh is sufficiently small that the accuracy of solving the SE is high. However, in this study, we allowed the random CPs of the DSs to exhibit fast variations and, sometimes, several points or lines of discontinuity. Therefore, we chose a user-controlled mesh with a larger total number of nodes, that is, \(N=18724\). Given that, for any bound state, the WF decreases exponentially to zero sufficiently far from the confinement zone, the Dirichlet boundary condition \(\psi(r_{b})=0\) may be assumed. We verified that the mesh was sufficiently refined and the chosen value of the parameter \(r_{b}\) sufficiently large to ensure an accuracy better than \(5\times 10^{-5}\) for the calculation of the ground state energy. The typical error can be estimated by comparing the FEM result \(e=0.154021\) with the exact semi-analytical solution \(e_{sa}=0.154005\) in the particular case of a finite-wall cylindrical confinement with \(v(\xi,\eta)=H(\rho(\xi,\eta)-1)\), where \(H\) denotes the Heaviside step function [49].
Figure 2 shows the graphical details of the mesh used, at various degrees of image magnification. The mesh is triangular, has circular symmetry, and contains four distinct concentric zones, colored blue in the figures. The outer area (the "far mesh" in Fig. 2a) has a low density of nodes, because in this region the values of the WFs are very small and slowly varying. This is the circular crown between the circle of radius \(r_{3}=1.3\sqrt{2\pi/3/\sin(2\pi/3)}\cong 2\) and the border of radius \(r_{b}\). Its inner radius \(r_{3}\) was calculated such that none of the irregular confinement zones generated by the CP randomization algorithms enters this zone (\(r_{o}<r_{3}\) for all samples). Another region in the shape of a circular crown, of inner radius \(r_{2}=1.2\sqrt{2\pi/4/\sin(2\pi/4)}\cong 1.5\) and outer radius \(r_{3}\), follows inwards (the "intermediate mesh" in Fig. 2b). Its node density is higher than that of the outer mesh but remains relatively low. This mesh domain ensures the adaptation between the low-density outer mesh and the highly refined mesh of the confining perimeter; only a small number of samples with a highly eccentric confining perimeter marginally penetrate this zone. The next, smaller circular crown of the mesh (the "near mesh" in Fig. 2c) is highly refined and corresponds to the region where the CP can already exhibit large variations from the external constant value to lower values; that is, it contains large portions of the confining perimeter. The main zone of the mesh (the "central mesh" in Fig. 2d), shaped like a circular disk of radius \(r_{1}=1.1\), is extremely refined and corresponds to the confinement zone, in which the most important variations of the CP and WF occur. Only a small number of samples (those with reduced eccentricity, close to a circular confinement perimeter) have confining zones completely contained in the central mesh domain (\(r_{o}<r_{1}\)).
The complete technical details of the mesh used are presented in Table I.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Mesh entity & Radial position & Mesh nodes (vertices) & Percentage of total nodes & Elements & Edge length & Average element size & Element length \\ \hline Boundary & \(\rho=r_{b}\) & 48 & 0.256\% & 48 & 31.39 & 0.654 & 1 \\ \hline Domain (a) & \(\rho\in[r_{3},r_{b}]\) & 451 & 2.409\% & 758 & 65.48 & 0.0864 & 0.0544 \\ \hline Border (a-b) & \(\rho=r_{3}\) & 96 & 0.513\% & 96 & 12.7 & 0.1323 & 0.9828 \\ \hline Domain (b) & \(\rho\in[r_{2},r_{3}]\) & 960 & 5.127\% & 1632 & 5.726 & 35.09E-4 & 0.2309 \\ \hline Border (b-c) & \(\rho=r_{2}\) & 192 & 1.025\% & 192 & 9.449 & 0.0492 & 0.9821 \\ \hline Domain (c) & \(\rho\in[r_{1},r_{2}]\) & 3456 & 18.458\% & 6336 & 3.304 & 5.215E-4 & 0.2079 \\ \hline Border (c-d) & \(\rho=r_{1}\) & 384 & 2.051\% & 384 & 6.911 & 0.018 & 0.9387 \\ \hline Domain (d) & \(\rho\in[0,r_{1}]\) & 14529 & 77.596\% & 28672 & 3.801 & 1.326E-4 & 0.3224 \\ \hline Total & \(\rho\in[0,r_{b}]\) & \(N=18724\) & 100\% & 720 & 78.32 & 2.094E-3 & 3.603E-4 \\ \hline \end{tabular}
\end{table}
Table I: Technical details of the finite element mesh
The SE was solved for all \(3\times 10^{5}\) randomized CPs in the training, validation, and testing sets. The energies \(e_{\sigma}\) and WFs \(\psi_{\sigma}\) of the ground level were stored, forming, together with the corresponding CPs \(v_{\sigma}\), what we call the "data sets" DS1, DS2, and DS3. In the following, we consider that these energy and WF solutions represent the true (exact) values. Figure 3 shows the energies obtained for each of the three sets of random CPs. The mean energy value \(e_{m}\) over the entire set, the most frequent (probable) energy \(e_{p}\), and the standard energy deviation \(\delta e\) are indicated. The color scale illustrates the deviation of the energy of each sample from the mean value of the set. Histograms of the energy occurrence frequencies were created by dividing the energy interval (0,1) into 100 equal bins and counting the results in each bin. The numbers on the horizontal axis of the histograms also indicate the linear probability density.
As expected, the energy distributions obtained for DS1 and DS2 are very similar (Figs. 3a and 3b, respectively) and confirm that the sets are sufficiently large to be representative of their common CP randomization algorithm. In these cases, the positive skewness of the frequency histograms indicates that the energies are unevenly distributed around the mean; it is thus more likely to obtain energies higher than the most frequent value. Figure 3c shows the results obtained with the CPs of the third set, generated using a different randomization algorithm. The mean energy value is significantly higher and almost equal to the most frequent value. The appearance of the distribution is quite different, almost symmetrical, showing aspects of a normal distribution. The standard deviation of the energy is also much smaller than in the previous cases.
### Defining the input/output layers of the neural network
The SE was solved numerically using a mesh with a relatively large number of nodes, justified by the need to obtain reliable results in cases with rapid variations of the CP or with many discontinuities. However, it is impractical to associate each computational node with an input/output node of the NN, because the ground state WF generally exhibits a slow variation with position. Nevertheless, if computational cost had not been a limiting factor, we would have opted for a full representation of the FEM mesh in the IL and OL of the NN; better effectiveness of the NN would be obtained if the IL corresponded to an even denser representation of the nodes of the original mesh. In this work, we chose a subset \(\Xi_{\text{M}}\equiv\{(\xi_{i},\eta_{i})\}_{1\leq i\leq M}\) of the FEM calculation mesh to represent both the IL and OL of the NN. The manner in which we selected the nodes is illustrated in Fig. 4 and is based on the intention of approximately uniform coverage of a circular region that is representative of the most important variations of the CP and WF. The disk containing all the nodes of the selection was chosen to be of radius \(r_{a}=1.75\), which is intermediate between \(r_{2}\) and \(r_{3}\) (Fig. 4a). Because this domain contains regions with various mesh refinements and the original mesh is not regular, we selected the
Figure 3: Energy distribution of the ground level corresponding to the sets of confinement potentials: a) DS1 – training set; b) DS2 – validation set; c) DS3 – testing set. The lateral histograms illustrate the occurrence frequency of the results as a function of energy.
original nodes that are closest to the vertices of the triangular tiling {3,6} (Fig. 4b). These are the centers of the densest possible circle packing in the plane. The number of nodes in the subset is then controlled by a single parameter, namely the tiling edge length. Choosing this parameter to be 0.06 gives \(M=2764\). The CP values at these nodes represent the input data of the NN that estimates the WF, as well as the input of the subnet that estimates the energy of the ground level.
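This selection can be reproduced, for example, by snapping the vertices of a triangular lattice to the nearest FEM nodes; the sketch below is an illustration under that assumption, not the authors' exact procedure, and uses SciPy's k-d tree for the nearest-neighbor queries.

```python
import numpy as np
from scipy.spatial import cKDTree

def select_subset(mesh_xy, r_a=1.75, edge=0.06):
    """Pick the FEM nodes nearest to the vertices of a {3,6} triangular
    tiling covering the disk of radius r_a.
    mesh_xy : (N, 2) coordinates of the full FEM mesh."""
    row = edge * np.sqrt(3) / 2                      # row spacing of the lattice
    i = np.arange(-int(r_a / edge) - 1, int(r_a / edge) + 2)
    j = np.arange(-int(r_a / row) - 1, int(r_a / row) + 2)
    I, J = np.meshgrid(i, j)
    x = (I + 0.5 * (J % 2)) * edge                   # half-edge offset on odd rows
    y = J * row
    verts = np.column_stack([x.ravel(), y.ravel()])
    verts = verts[np.hypot(verts[:, 0], verts[:, 1]) <= r_a]
    # Nearest original mesh node to each tiling vertex (duplicates removed)
    idx = cKDTree(mesh_xy).query(verts)[1]
    return np.unique(idx)

# Example with a random stand-in for the real mesh coordinates:
nodes = select_subset(np.random.default_rng(0).uniform(-5, 5, (18724, 2)))
```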
### Subnet training and energy evaluation
Heuristically optimizing the NN architecture is prohibitively expensive because of the long time required for multiple calculations with different combinations of node numbers. For the number of neurons in the HLs, the literature offers no strict or clear rule for setting the optimal value; we therefore followed the empirical choice of taking it close to the geometric mean of the numbers of nodes in the IL and OL. Therefore, all subnets in this study (energy subnet included) have \(M=2764\) nodes in the IL and \(P=53\) neurons in each of the two HLs. All subnets are trained with the SGD method, using batches of \(S^{\prime}=2\) ksamp. For this, the training DS1 of 100 ksamp of random CPs was divided into subsets of 1 ksamp and, at each iteration, the current batch is rebuilt from two different subsets chosen randomly and independently. Figure 5 shows the SGD learning graph for the energy neural subnet. Because there is only one subnet for energy estimation, we were able to perform a longer training, with \(T=10^{5}\) iterations. However, for all other 2764 subnets used to estimate the WF, the training had to be limited to \(T=2000\) for reasons of computational cost. The relatively small fluctuations that appear along the learning graph are specific to the SGD method and are caused by the stochastic variance of the different batches used at each iteration.
Figure 4: a) The mesh subset \(\Xi_{M}\equiv\{(\xi_{i},\eta_{i})\}_{1\leq i\leq M}\) intended for neural network training: blue dots are the original mesh nodes of the finite element method and red dots are the \(M=2764\) selected nodes. Several particular nodes marked A, B, C, D were chosen to further illustrate the training of the corresponding neural subnets and the distribution of the wave function estimation errors. b) Details of the triangular tiling and the selected nodes in the vicinity of the central node A.
The initial values \(\{\Lambda_{1}^{e}(0);\Lambda_{2}^{e}(0);\Lambda_{\text{o}}^{e}(0)\}\) of the weight matrices were normally distributed random numbers with mean 0 and standard deviations 1/100, 1/75, and 1/50, respectively. We noticed that a normal distribution of the initial weights ensures better stability of the preliminary learning than a uniform distribution; however, the choice of the standard deviations was rather empirical. With these initial weights, the starting value \(L^{e}(1)\) of the loss function is obtained after the first iteration and is then used as a reference to calculate the relative loss \(L^{e}(\tau)/L^{e}(1)\). The learning rate \(\lambda\) must be adjusted so that the training is neither too fast nor too slow. If the learning graph decreases too rapidly, it can enter a regime of strong fluctuations, instability, and even divergence; if it varies too slowly, the total training duration increases unreasonably. In our model, the learning rate is adjusted according to the number of samples in the batch. We used a pre-learning rate \(\lambda_{p}=5\times 10^{-5}\) in the first stage of the training and \(\lambda=10^{-5}\) afterwards. The change in the learning rate occurs when an empirical condition on \(L^{e}\) is met, which is achieved after the 49\({}^{\text{th}}\) iteration, as shown in Fig. 5. If the larger value \(\lambda_{p}\) were kept beyond this point, it would, for most subnets, lead to instabilities and sudden increases in the cost function, compromising convergence. If the pre-learning stage is omitted by using the \(\lambda\) parameter from the start, the total training time necessary to achieve the same loss function decrease is approximately 10% longer for \(T=2000\). We divided the learning graph into four distinct intervals, as shown in insets (i)-(iv) of Fig. 5, to better follow the variation trend of the loss function. The decrease is very fast in interval (i) and in the first part of (ii), followed by a first quasi-plateau at the end of interval (ii) and the beginning of (iii), after approximately 2000 iterations. Considering that the main graph is semi-logarithmic, it should be noted that further progress is very slow: after approximately \(5\times 10^{4}\) iterations, a second quasi-plateau is reached, as shown in inset (iv). The loss function decreased to 2% of the initial value after 2000 iterations and to 0.2% after \(10^{5}\) iterations.
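The initialization and the two-stage learning rate translate directly into code; in the sketch below the switching threshold on the relative loss is a hypothetical placeholder, since the paper's empirical condition is not given in closed form.

```python
import numpy as np

rng = np.random.default_rng()
M, P = 2764, 53
# Normally distributed initial weights with standard deviations 1/100, 1/75, 1/50
L1 = rng.normal(0.0, 1 / 100, (P, M))
L2 = rng.normal(0.0, 1 / 75, (P, P))
Lo = rng.normal(0.0, 1 / 50, P)

LAM_P, LAM = 5e-5, 1e-5   # pre-learning and main learning rates

def learning_rate(rel_loss, threshold=0.5):
    """Keep the pre-learning rate until the empirical condition on the
    relative loss L(tau)/L(1) is met; `threshold` is a hypothetical value."""
    return LAM_P if rel_loss > threshold else LAM
```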
Figure 5: Discrete learning graph of the energy subnet: relative loss versus the number of iterations. Insets detail the loss variation on different training intervals.
Figure 6: Estimated/predicted energy as a function of the true energy. The diagonal lines represent the ideal, perfect prediction. The color scale is a measure of the in-plane density of samples.
After training, the energy neural subnet was applied to the three DSs in two scenarios: \(T=2000\) and \(T=10^{5}\). The results are presented in Fig. 6 in the form of bivariate histogram plots with 2-D bins. The bins are squares with sides of \(10^{-3}\). The true energy (FEM solution) is represented on the horizontal axis and the estimated/predicted energy (NN solution) on the vertical axis. The number of samples in each bin is coded by color. The numerical labels on the color bars give the bin counts and, if multiplied by 10, provide the surface probability density. It can be observed that the distributions of the estimated energies for the training set (a,b) and the validation set (c,d) are extremely similar. This supports the validity of the network training: the number of samples in the training DS1 is large enough to ensure neural learning related to the SE itself, and possibly to the algorithm generating the DS1 and DS2 random CPs, but not to a particular group of samples. Panels (e) and (f) show that the predictions are also very good for DS3, which is very different from the other two. Therefore, the subnet is efficient in correctly predicting the ground state energy for CPs that are very dissimilar to those with which it was trained. The in-plane distribution density of the samples is significantly affected by the number of iterations used. The scattering of bins with non-zero counts is greater after only 2000 iterations (a,c,e) than after \(10^{5}\) iterations (b,d,f). Therefore, the representative points of the graphs tend to accumulate near the diagonal lines as the subnet training improves.
It is worth mentioning that the energy can be estimated with this method without the involvement of the WF, which can be an advantage in applications where a fast response is required. For example, with the spatial discretization used in this work, the NN provides the energy approximately 60 times faster than the FEM.
### Training and testing neural subnets for ground state WF estimation in particular nodes
To illustrate the training and predictive efficiency of the individual subnets, we selected four particular nodes of the mesh subset \(\Xi_{M}\). These points are illustrated in Fig. 4a: A is the node closest to the origin of the coordinates, where the WF generally has relatively large values; B is approximately the midpoint of the radius of a cylindrical confinement, where there is a large dispersion of possible values; C is close to the perimeter of the confinement zone, where the WF generally varies rapidly; and D lies outside the confinement, in a position where the values of the WFs are relatively small. There is practically no difference in the settings between the subnets used to estimate the WF and the previously described subnet for energy estimation. All \(M=2764\) subnets used for the WF estimation have the same structure and IL; they differ only in the output node. Figure 7 shows the learning graphs of the subnets related to output nodes A, B, C, and D. Subnet A learns relatively quickly, reaching a plateau after 10 iterations, but this plateau value of the loss function is rather high. Between iterations 100 and 2000, there was an additional decrease of approximately 5%. Based on these observations, we can anticipate that in the central area of the confinement zone, where WF maxima are usually found, underestimates of the true values are often obtained. Subnet B learns more slowly and with greater fluctuations in the relative loss, precisely because the dispersion of the possible values of the WF in node B is greater than in the other cases. After 55 iterations, an instability appears in the learning graph, with a sudden increase in the loss function. Over the next 50 iterations the learning stabilizes, and around the 200\({}^{\text{th}}\) iteration there is a transition to a learning rate five times lower, after which the fluctuations remain relatively small. The slopes of the learning graphs of subnets A and B between iterations 1000 and 2000 show that they would still decrease if a larger number of iterations were practically feasible. The learning graph of subnet C shows some notable differences compared to B: no
instability occurs; it seems to reach a plateau after 1000 iterations, and the value of the loss function after 2000 iterations reaches 5% of the initial value, which is much lower than in the previous cases. Finally, subnet D learns very quickly at the beginning, similar to A, but the learning curve decreases further. After 1000 iterations, the loss function falls below 1% of its initial value, and the slope suggests that the subnet still has a slight learning potential. The partial similarity between the graphs of subnets A and D comes from the fact that, in both cases, the marginal codomain of the logistic activation function is involved, close to 1 and 0, respectively. Learning reaches the plateau regime faster because the derivative of the output neurons' activation function decreases rapidly. After the same number of iterations, the loss function decreased, from the center outwards, to \(\sim\)7% of its initial value for A, \(\sim\)12% for B, \(\sim\)5% for C, and \(\sim\)0.7% for D.
Trained neural subnets A, B, C, and D were applied to the training and testing DSs. Bivariate histograms are shown in Fig. 8. The bins are squares with sides of \(10^{-3}\), the true value of the WF (FEM solution) is on the horizontal axis, and the estimated/predicted value (NN solution) is on the vertical axis. The values on the color bars indicate the number of samples in the bins and, if multiplied by 10, provide the surface probability density. Observing histograms (a) and (b), corresponding to subnet A, the spread of the results is greater for DS3, which translates into a lower prediction efficiency than the estimation efficiency for DS1. In addition, for WF values close to the maximum, the predictions for DS3 considerably underestimate the true values. Subnets B (c,d) and C (e,f) behave similarly: slightly better for DS1 in the case of B and, surprisingly, slightly better for DS3 in the case of C. The histograms (g,h) of subnet D, magnified four times in the insets of the figures, show a higher concentration and lower dispersion of the samples from DS3 in
Figure 7: Learning graphs of the subnets corresponding to the nodes marked in Fig. 4a.
the vicinity of the diagonal line. In addition, for WF values close to zero, subnet D has a systematic tendency to overestimate.
As explained above for the learning graphs, the predictive efficiency of the subnets for which the typical values of the WF approach the ends of the interval (0,1) is limited by the decrease in the logistic sigmoid derivative of the output neurons.
It was mentioned in Section 2.3 that the subnets are not correlated during training. An attempt to improve the method would require encoding the SE itself in the mathematical model of the network, that is, obtaining a physics-informed NN [50]. Because the partial derivatives in the equation involve variations of the WFs with the coordinates, such an approach must correlate the training of the subnets related to the energy and to neighboring nodes. Presumably, the advantages would be more accurate predictions of the WFs and a better extrapolation of the method to inputs that are significantly different from those in the training DSs. However, these improvements would come at the cost of more calculations per iteration and a longer training time.
### Training and testing the neural network. Calculation of quantitative indicators
After training the entire NN, formed by \(M=2764\) subnets for estimating the WF and one subnet for estimating the energy, on DS1, the accuracy indicators can be calculated and compared. For this, the trained network was provided with all input data (sample CPs) from the three DSs, and the estimated/predicted results were statistically analyzed.
Figure 9 shows the results obtained in the form of superposed histograms. The training DS1 histogram was plotted as a contour line to demonstrate how well it matched the validation DS2 histogram for each indicator. It should be noted that although individually different, as shown in the Appendix, the CPs in DS2 are related to those in DS1 by the nature of their generating algorithm. However, the functions of DS3 are much more dissimilar to those of DS1 and DS2.
Figure 8: Estimated/predicted wave function values versus true solution values (given by the finite element method) for: node A (a,b), node B (c,d), node C (e,f), and node D (g,h). The diagonal lines represent the ideal, perfect prediction. The color scale is a measure of the in-plane density of samples. Histograms (a,c,e,g) present the estimation made on the training DS1, while (b,d,f,h) show the prediction for the testing DS3.
Figure 9: Multiple histograms comparing the neural network estimations/predictions for all 300 ksamp in the training DS1, validation DS2 and testing DS3: (a) relative difference between the estimated wave function \(\vec{\psi}_{\sigma}\) and the true solution \(\psi_{\sigma}\); upper inset: deviation of the estimated/predicted energy from the true value; lower inset: energy relative deviation; (b) relative spatial overlap of estimated/predicted wave functions; insets: deviations of average \(\xi\) and \(\eta\) positions.
It can be seen that the relative difference between the predicted and true WFs is somewhat larger for DS3, which shifts the histogram profile to the right compared with the DS1 histogram. Thus, the maximum of the DS3 histogram is shifted to the right by approximately 3%, but it has approximately the same height. From the insets of Fig. 9a, it appears that for DS3 the energy subnet has a greater tendency to underestimate. This is intuitive, because the average energy in DS3 is considerably higher than in DS1 and DS2, as shown in Fig. 3. Figure 9b demonstrates that the true and estimated WFs overlap spatially in a proportion of over 98% for the vast majority of samples, and to an even greater extent for DS3. This can be explained by the stronger confinement specific to DS3, as evidenced by the higher energies in this set. Stronger confinement means that the functions are constrained to a narrower spatial region, resulting in better overlap. The insets of Fig. 9b show histograms of the average position differences; they are almost identical, which was expected given that the algorithms generating the randomized CPs do not favor any particular direction.
### Neural network predictions for several symmetric confinement potentials
The NN accuracy indicators were calculated for particular cases of symmetric CPs, none of which occur in any of the three DSs. These cases were also used to plot and compare the WFs predicted by the NN with the expected WFs calculated by the FEM. The following analytically defined 2-D finite-barrier potential wells were considered, all with equal confining zone areas (a code sketch of these definitions follows the list):
a) \(v(\xi,\eta)=H\big{(}\sqrt{\xi^{2}+\eta^{2}}-1\big{)}\), with \(H\) denoting the Heaviside step function;
b) \(v(\xi,\eta)=v_{min}+(1-v_{min})H\big{(}\sqrt{\xi^{2}+\eta^{2}}-1\big{)}\), where \(v_{min}=1/4\);
c) \(v(\xi,\eta)=H\left(\sqrt{\frac{\xi^{2}}{a^{2}}+\frac{\eta^{2}}{b^{2}}}-1\right)\), where \(ab=1\) and eccentricity \(e=\sqrt{1-\frac{b^{2}}{a^{2}}}=\frac{7}{10}\);
d) \(v(\xi,\eta)=\min(\xi^{2}+\eta^{2},1)\);
e) \(v(\xi,\eta)=H\left(|\xi|+|\eta|-\sqrt{\frac{\pi}{2}}\right)\).
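For reference, a minimal NumPy sketch of these five potentials is given below; the Heaviside value at the discontinuity (taken as 1 here) is a convention, and in case (c) the semi-axes follow from \(ab=1\) and \(e^{2}=1-b^{2}/a^{2}\), giving \(b=(1-e^{2})^{1/4}\).

```python
import numpy as np

H = lambda x: np.heaviside(x, 1.0)      # Heaviside step; H(0) = 1 by convention

def v_a(xi, eta):                       # (a) cylindrical well, depth 1
    return H(np.hypot(xi, eta) - 1.0)

def v_b(xi, eta, v_min=0.25):           # (b) cylindrical well, depth 3/4
    return v_min + (1.0 - v_min) * H(np.hypot(xi, eta) - 1.0)

def v_c(xi, eta, ecc=0.7):              # (c) elliptical perimeter, ab = 1
    b = (1.0 - ecc**2) ** 0.25
    a = 1.0 / b
    return H(np.sqrt((xi / a) ** 2 + (eta / b) ** 2) - 1.0)

def v_d(xi, eta):                       # (d) parabolic depth profile
    return np.minimum(xi**2 + eta**2, 1.0)

def v_e(xi, eta):                       # (e) square perimeter, same area as (a)
    return H(np.abs(xi) + np.abs(eta) - np.sqrt(np.pi / 2.0))
```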
It should be noted that CP (b) does not have the same scale factor as the others in the sense defined in the theory section; however, even in this case, the NN makes a good prediction. Figure 10 presents the CPs, solutions given by the FEM, and corresponding predictions of the NN. All surface plots were prepared via 2-D interpolation of the scattered function values at the nodes of the mesh subset used to train the network. It is observed that the NN solutions are not as smooth as the exact solutions, and generally have slightly smaller amplitudes.
Figure 10: Confinement potentials, finite element method solutions and neural network predictions in five particular cases with analytically-defined, symmetrical potential functions. The semi-transparent surface plots allow the contour plots of isolines to be observed in the plane \((\xi,\eta)\). Quantum wells (a) and (b) are cylindrical confinements of depths 1 and 3/4, respectively; (c) has an elliptical confinement perimeter; (d) is a confinement with variable parabolic depth, and (e) has a square perimeter of confinement.
A quantitative assessment of the prediction accuracy of the NN in these cases is presented in Table II. Columns 3-10 of the table contain, from left to right: the WF relative deviation, WF relative spatial overlap, deviation of
the average \(\xi\) position, deviation of the average \(\eta\) position, exact value of the energy, predicted value of the energy, deviation of the energy, and relative deviation of energy. The best value in each indicator category is underlined, and the worst value is italicized.
There were no double QW samples in any of the DSs. However, systems where the potential evolves from a single well to a double-well structure are very interesting for the study of quantum phase transitions [51, 52] and the design of quantum gates [53]. Recent studies have investigated whether ML methods can be applied to predict the onset of a quantum phase transition and to extrapolate the properties of the system in the phase separated from the training data by the phase transition [25]. It is interesting to investigate whether the NN from the present study can make reasonable predictions for a double-well configuration, given that it has not "met" any double well during training. For this, we divided the cylindrical confinement (a) from Table II into two equal semi-cylindrical wells by a potential barrier of variable width \(w_{b}\). The width was gradually increased from 0 (cylindrical confinement) to 3/4 of the confinement radius (in which case the two wells can be understood as almost distinct systems). The identical wells were separated without changing the initial total confinement area. We calculated the ground energy of the double QW using both methods, as shown in Fig. 11.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Case & \(\varepsilon\)(\%) & \(\omega\)(\%) & \(\Delta(\xi)\)(E-4) & \(\Delta(\eta)\)(E-4) & \(e\) & \(\bar{e}\) & \(\Delta e\) & \(\kappa\)(\%) \\ \hline (a) & 3.72 & 99.95 & -4.45 & -7.29 & 0.1541 & 0.1560 & 0.0019 & 1.23 \\ \hline (b) & \textit{14.58} & 99.93 & -2.71 & -10.41 & 0.3959 & 0.3904 & \textit{-0.0055} & -1.39 \\ \hline (c) & 5.80 & 99.90 & \textit{5.68} & -22.37 & 0.1589 & 0.1598 & 0.0009 & 0.57 \\ \hline (d) & 11.92 & \textit{99.63} & 2.41 & 15.80 & 0.3906 & 0.3929 & 0.0023 & 0.59 \\ \hline (e) & 4.91 & 99.92 & -0.92 & -18.68 & 0.1609 & 0.1637 & 0.0028 & \textit{1.74} \\ \hline \end{tabular}
\end{table}
Table II: Accuracy indicators for the quantum wells presented in Fig. 10
Figure 11: Transition from single cylindrical to double semi-cylindrical quantum structure with the same total confining area. Dashed line shows the ground energy level of a single semi-cylindrical quantum well.
The solid line represents the FEM solution, with the typical asymptotic increase from the energy of the cylindrical well to almost twice that value for the single semi-cylindrical well of halved area. The circular symbols represent the NN predictions, which match the exact solution well, apart from the intrinsic numerical fluctuations of the method. For \(w_{b}<0.6\) the NN tends to underestimate the energy, whereas for larger values of \(w_{b}\) it tends to overestimate it.
Finally, we discuss the possibility of extending this neural model to the 3-D Schrodinger problem, provided that the parallel computation resources required for training are available. Concerning the NN architecture, no qualitative change is necessary because there is nothing intrinsically "two-dimensional" about the IL or OL of the network. The input/output nodes are values of the confining potential/wave function in an indexed set of points, which can be distributed on a line (1-D mesh), surface (2-D mesh), or volume (3-D mesh). Quantitatively, however, there are differences in terms of the number of nodes in the IL and OL and implicitly the computational effort required to train the network. The number of nodes involved must be adequate for the complexity of the 3-D confinement functions to ensure a satisfactory predictive performance. This aspect is not surprising, because even higher-dimensional FEMs involve meshes with a large number of nodes and high computation times. Therefore, a natural extension of our method is to address 3-D problems specific to quantum dots.
## 4 Conclusions
In this study, we used a deep learning technique to approach the SE in 2-D QWs with finite walls and random CPs. An NN with two HLs and 2764(+1) subnets was trained using a set of CPs and their corresponding WFs and energies, previously calculated using an FEM. An important advantage of this NN architecture is that the calculations are easy to parallelize. Several accuracy indicators have been proposed to test the NN predictions. The network was trained on DS1, containing 100 ksamp, with the SGD method, and the training was validated with respect to the equally sized DS2. The network was also applied to a different testing set, DS3, of 100 ksamp, and the results were compared using the accuracy indicators. It was found that the network has good prediction accuracy, which is slightly lower for the test set than for the training and validation sets, as expected. Several cases with analytical CPs have also been approached, presenting explicit graphs of the potentials and predicted WFs and listing the accuracy indicators. The ability of the NN to make predictions on the ground state energy of a double quantum well has been demonstrated.
The improvements and developments that can be made in future studies are as follows: (i) using adaptive learning parameters, (ii) generalization of the NN solutions to 3-D problems (quantum dots), (iii) predicting energies and WFs of excited levels, and (iv) improving the method in the context of physics-informed NNs.
**Funding Statement:** No funding to declare.
**Conflict of Interest Declaration:** The authors declare that they have no affiliations with or involvement in any organization or entity with any financial interest in the subject matter or materials discussed in this manuscript.
**Author Contributions:** AR prepared the datasets and implemented the neural network. AR and CAD contributed to the training, validation, and testing of the neural network. AR and CAD contributed to the analysis of the results and to the writing of the manuscript. |
2307.05375 | Emotion Analysis on EEG Signal Using Machine Learning and Neural Network | Emotion has a significant influence on how one thinks and interacts with
others. It serves as a link between how a person feels and the actions one
takes, or it could be said that it influences one's life decisions on occasion.
Since the patterns of emotions and their reflections vary from person to
person, their inquiry must be based on approaches that are effective over a
wide range of population regions. To extract features and enhance accuracy,
emotion recognition using brain waves or EEG signals requires the
implementation of efficient signal processing techniques. Various approaches to
human-machine interaction technologies have been ongoing for a long time, and
in recent years, researchers have had great success in automatically
understanding emotion using brain signals. In our research, several emotional
states were classified and tested on EEG signals collected from a well-known
publicly available dataset, the DEAP Dataset, using SVM (Support Vector
Machine), KNN (K-Nearest Neighbor), and an advanced neural network model, RNN
(Recurrent Neural Network), trained with LSTM (Long Short Term Memory). The
main purpose of this study is to improve ways to improve emotion recognition
performance using brain signals. Emotions, on the other hand, can change with
time. As a result, the changes in emotion over time are also examined in our
research. | S. M. Masrur Ahmed, Eshaan Tanzim Sabur | 2023-07-09T09:50:34Z | http://arxiv.org/abs/2307.05375v1 | # Emotion Analysis on EEG Signal Using Machine Learning and Neural Network
###### Abstract
Emotion has a significant influence on how one thinks and interacts with others. It serves as a link between how a person feels and the actions one takes, or it could be said that it influences one's life decisions on occasion. Since the patterns of emotions and their reflections vary from person to person, their inquiry must be based on approaches that are effective over a wide range of population regions. To extract features and enhance accuracy, emotion recognition using brain waves or EEG signals requires the implementation of efficient signal processing techniques. Various approaches to human-machine interaction technologies have been ongoing for a long time, and in recent years, researchers have had great success in automatically understanding emotion using brain signals. In our research, several emotional states were classified and tested on EEG signals collected from a well-known publicly available dataset, the DEAP Dataset, using SVM (Support Vector Machine), KNN (K-Nearest Neighbor), and an advanced neural network model, RNN (Recurrent Neural Network), trained with LSTM (Long Short Term Memory). The main purpose of this study is to improve ways to improve emotion recognition performance using brain signals. Emotions, on the other hand, can change with time. As a result, the changes in emotion over time are also examined in our research.
emotion recognition, EEG signal, DEAP dataset, fft, Machine Learning, SVM, KNN, DEAP, RNN, LSTM
## I Introduction
Emotion is defined as a person's conscious or unconscious behavior that indicates a response to a situation. Emotion is interconnected with a person's personality, mood, thoughts, motivation, and a variety of other aspects. Fear, happiness, wrath, pride, anger, panic, despair, grief, joy, tenseness, surprise, confidence, and enthusiasm are common emotions experienced by humans [1]. The experience can be either positive or negative. In light of this, physiological indicators such as heart rate, blood pressure, respiration signals, and electroencephalogram (EEG) signals might be useful in properly recognizing emotions. Emotion recognition has always been a major necessity for humanity, not just for use in fields like computer science, artificial intelligence, and life science, but also for assisting those who require emotional support. For a long time, experts could not find a reliable way to identify true human emotion. One approach was to recognize emotions from words, facial expressions, behavior, and images [2, 3, 4, 5]. Researchers found that subject answers are unreliable for gauging emotion; people are unable to reliably express the strength and impact of their feelings. Furthermore, it is simple to manipulate self-declared emotions, resulting in incorrect findings. As a result, researchers had to shift their focus to approaches that do not rely on subject responses. The development of brain-computer interfaces (BCI) and electroencephalogram (EEG) signals provided more accurate methods for detecting human emotions, introducing an involuntary approach that yields more accurate and reliable results. Involuntary signals are uncontrollable and detect people's true feelings; they have the ability to express genuine emotions. The advancement of a reliable human emotion recognition system using EEG signals could help people regulate their emotions and open up new possibilities in fields like education, entertainment, and security, and might aid people suffering from alexithymia or other psychiatric conditions. The goal of our paper is to apply effective techniques to the DEAP dataset to extract features from EEG signals using band waves, and to apply machine learning algorithms and neural network models to assess their efficiency with respect to valence-arousal, EEG regions, and band waves.
## II Literature Review
The EEG research community is expanding its reach into a number of different fields. In their research, Vanitha et al. [6] aim to connect stress and EEG, showing how stress can have both beneficial and harmful effects on a person's decision-making process. They also discuss how stress affects interpersonal, intrapersonal, and academic performance, and argue that stress can cause insomnia, lowered immunity, migraines, and other physical problems. Jin et al. [7], while analyzing emotions, reported promising results, claiming that combining FFT, PCA, and SVM yielded results that were about 90 percent accurate. Hence, rather than the complexity of the classification algorithm used, it is the feature extraction stage that determines the accuracy of a model; as a result, classification systems can offer consistent accuracy and recall. Liu et al. [8] proposed a fractal-based algorithm to identify and visualize emotions in real time and found that the gamma band could be used to classify emotion. For emotion recognition, the authors analyzed different kinds of EEG features to find the trajectory of changes in emotion. They then proposed a
simple method to track the changes in emotion with time. In a related study, the authors built a bimodal deep autoencoder and a single deep autoencoder to produce shared representations of audio and images; they also explored the possibility of recognizing emotion in physiological signals, using two different fusion strategies to combine eye movement and EEG data, and tested the framework on cross-modal learning tasks. The authors introduce a novel approach that combines deep learning and physiological signals. The DEAP dataset was also utilized by the following authors to analyze emotional states. Xing et al. [9] developed a stacked autoencoder (SAE) to decompose EEG data and classify them using an LSTM model; the observed valence accuracy rate was 81.1 percent, while the observed arousal accuracy rate was 74.38 percent. Chao et al. [10] investigated a deep learning architecture, reaching an arousal rate of 75.92 percent and 76.83 percent for valence states. Mohammadi et al. [11] classified arousal and valence using the entropy and energy of each frequency band and reached an accuracy of 84.05 percent for arousal and 86.75 percent for valence. Xian et al. [12] utilized MCF with statistical, frequency, and nonlinear dynamic characteristics to predict valence and arousal with 83.78 percent and 80.72 percent accuracy, respectively. Ang et al. [13] developed a wavelet transform and time-frequency characteristics with an ANN classification method. For the happy emotion, the classification rate was 81.8 percent for the mean and 72.7 percent for the standard deviation. The performance of frequency-domain characteristics for sad emotions was 72.7 percent. Alhagry et al. [14] developed a deep learning technique for identifying emotions from raw EEG data that used long short-term memory (LSTM) neural networks to learn features from EEG signals and then classified these characteristics as low/high arousal, valence, and liking. The DEAP dataset was used to evaluate the technique; the method's average accuracy was 85.45 percent for arousal and 85.65 percent for valence.
## III Methodology
### _Data Materials_
For our research, we have chosen the DEAP [15] dataset. The DEAP dataset for emotion classification is freely available on the internet. A number of physiological signals found in the DEAP dataset can be utilized to determine emotions. It includes information on four main types of states: valence, arousal, dominance, and liking. Because various sample rates and different types of tests were used in data gathering, the DEAP dataset is an amalgamation of many different data types. EEG data was gathered from 32 participants, comprising 16 men and 16 women, in 32 channels. The EEG signals were collected by playing 40 different music videos, each lasting 60 seconds, and recording the results. After viewing each video, participants were asked to rate it on a scale of one to nine points. The total number of video ratings received was 1280: the number of videos (40) multiplied by the number of participants (32). Following that, the signals were downsampled from 512 Hz to 128 Hz and denoised using bandpass and lowpass frequency filters. The 512 Hz EEG signals were acquired from the following 32 sensor positions (using the international 10-20 positioning system): Fp1, AF3, F3, F7, FC5, FC1, C3, T7, CP5, CP1, P3, P7, PO3, O1, Oz, Pz, Fp2, AF4, Fz, F4, F8, FC6, FC2, Cz, C4, T8, CP6, CP2, P4, P8, PO4, and O2. A frontal face video was also recorded for 22 of the participants. Several signals, including EEG, electromyograms, respiration, plethysmographs, temperature, and so on, were gathered as 40-channel data during each subject's 40 trials, with each channel representing a different signal. EEG data is stored in 32 of the 40 available channels; the remaining channels record EOG, EMG, ECG, GSR, RSP, TEMP, and PLET data.
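A minimal sketch of loading one subject's preprocessed recording is shown below, assuming the standard pickled format of the preprocessed DEAP files; the file path is illustrative.

```python
import pickle

# Each preprocessed DEAP file (s01.dat ... s32.dat) is a pickled dict with
# 'data'   : (40 trials, 40 channels, 8064 samples) -- 63 s at 128 Hz
# 'labels' : (40 trials, 4) -- valence, arousal, dominance, liking in [1, 9]
with open("s01.dat", "rb") as f:                    # path is an assumption
    subject = pickle.load(f, encoding="latin1")     # latin1 for Python-3 loading

eeg = subject["data"][:, :32, :]    # the first 32 channels are EEG
ratings = subject["labels"]
valence, arousal = ratings[:, 0], ratings[:, 1]
```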
### _Data Visualization_
We extracted valence and arousal ratings from the dataset. The combination of valence and arousal can be converted to emotional states: high arousal positive valence (excited, happy), low arousal positive valence (calm, relaxed), high arousal negative valence (angry, nervous), and low arousal negative valence (sad, bored). We analyzed the changes in emotional state along with the number of trials for each group by following Russell's circumplex model, which helped classify the DEAP dataset. Following Russell's methodology for visualizing the scale with real numbers, the DEAP dataset employs self-assessment manikins (SAMs) [16]. Based on the self-assessment ratings, 1-5 and 5-9 were chosen as the scales [17, 18, 19]. The label was changed to "positive" if the rating was greater than or equal to 5, and to "negative" if it was less than 5. We utilized a different way to determine "positive" and "negative" values. The participants of DEAP rated valence and arousal on a scale of 1 to 9. We believe that categorizing the dataset using a fixed mean value is not a good approach, because there may be no participants who rate between 1-2 and 4-6; deriving the separation from such a value could therefore lead to bias. On the other hand, all users may have given ratings ranging from 5 to 9. To avoid biased analysis, we used the value from the middle of the observed range to separate the positive and negative values. As a result, to distinguish between "positive" and "negative" numbers, we used median values. We looked for a positive or negative valence as well as a positive or negative arousal level in each experiment. Numbers greater than the median are considered "positive", while values less than the median are considered "negative". Four labels were created for our research: high arousal low valence (HALV), low arousal high valence (LAHV), high arousal high valence (HAHV), and low arousal low valence (LALV).
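The median-based labeling described above can be written compactly; this sketch assumes ratings for a set of trials, with values equal to the median counted as "positive".

```python
import numpy as np

def label_quadrants(valence, arousal):
    """Split trials into HAHV / HALV / LAHV / LALV using the per-set median
    of the self-assessment ratings rather than the fixed value 5."""
    v_pos = valence >= np.median(valence)
    a_pos = arousal >= np.median(arousal)
    labels = np.empty(len(valence), dtype=object)
    labels[a_pos & v_pos] = "HAHV"
    labels[a_pos & ~v_pos] = "HALV"
    labels[~a_pos & v_pos] = "LAHV"
    labels[~a_pos & ~v_pos] = "LALV"
    return labels
```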
### _Channel Selection_
We used FFT analysis in two studies. For the RNN-LSTM model built on FFT preprocessing, we selected the 14 channels corresponding to the Emotiv EPOC+ headset; the channel indices are [1, 2, 3, 4, 6, 11, 13, 17, 19, 20, 21, 25, 29, 31]. The six band edges, band = [4, 8, 12, 16, 25, 45], define five frequency bands. In the second study, we examined the relation between the time domain and the frequency domain with the help of the FFT, as sketched below.
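In code, the channel selection amounts to indexing the EEG array with these indices (a sketch; the random array stands in for the loaded 40 x 32 x 8064 EEG data from the loading sketch above):

```python
import numpy as np

eeg = np.random.randn(40, 32, 8064)  # stand-in for the loaded EEG array

channels = [1, 2, 3, 4, 6, 11, 13, 17, 19, 20, 21, 25, 29, 31]  # EPOC+-like set
band = [4, 8, 12, 16, 25, 45]  # edges of theta, alpha, low/high beta, gamma

selected = eeg[:, channels, :]       # (40 trials, 14 channels, 8064 samples)
```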
### _FFT_
The fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) of a sequence. It is used to solve a variety of equations and to graphically depict activity across a range of frequencies. Fourier analysis is a signal-processing technique used to convert a digital signal \(x\) of length \(N\) from the time domain to the frequency domain \(X\) and vice versa. The FFT is widely utilized when estimating the power spectral density (PSD) of an EEG signal. The PSD describes the distribution of signal power over frequency and can be computed directly from the signal using the FFT, or indirectly by transforming the estimated autocorrelation sequence.
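As a sketch, the PSD of a single channel can be estimated with SciPy's implementation of Welch's method (the Hamming window is chosen here to match the derivation in the Feature Extraction subsection below; the random signal is a stand-in for one trial of one channel):

```python
import numpy as np
from scipy.signal import welch

fs = 128                              # DEAP sampling rate after downsampling
signal = np.random.randn(8064)        # stand-in for one trial of one channel

# Welch PSD: 2-second Hamming windows with 50% overlap.
f, psd = welch(signal, fs=fs, window="hamming", nperseg=256, noverlap=128)

# Average band power, e.g. theta (4-8 Hz), by integrating the PSD.
mask = (f >= 4) & (f < 8)
theta_power = np.trapz(psd[mask], f[mask])
```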
### _RNN and LSTM_
RNNs have risen to prominence as computing power has improved, data volumes have exploded, and long short-term memory (LSTM) technology became available in the 1990s. Because of their internal memory, which allows them to retain key details of their input, RNNs can be remarkably precise in forecasting what will happen next, and they are popular precisely because they handle sequential data such as time series and speech well. Their advantage over other algorithms is that they gain a deeper understanding of a sequence and its context: since the data sequence provides important information about what happens next, an RNN can do jobs that other algorithms cannot [23]. Plain RNNs, however, have only a short-term memory. LSTM networks are an extension of RNNs that effectively expands this memory, which makes them well suited to learning from experiences separated by long time lags. LSTM units are used to build the layers of an RNN, which is then often referred to as an LSTM network. LSTMs assign "weights" to data, allowing the RNN to assimilate new information, forget information, or give it enough importance to alter the result; with their help, RNNs can remember inputs for a long time. This is because an LSTM stores data in a memory much like a computer's: it can read, write, and delete information from its memory. This memory can be thought of as a gated cell, where "gated" means the cell decides whether to store or erase data (i.e., whether to open its gates) based on the importance it assigns to the data. Importance is allocated through weights, which are also learned by the algorithm; essentially, the network learns over time which data is critical and which is not.
### _Feature Extraction_
Extracting features from EEG data can be done in a variety of ways. Feature extraction based on the FFT requires periodogram and power spectral density calculations and the combination of band waves of various frequencies. The Welch method [20] is a modified segmentation scheme for calculating the average periodogram. In general, the Welch estimate of the PSD can be described by the equations below: first, the power spectral density \(P_{i}(f)\) of each segment is defined; then the Welch power spectrum \(P_{\text{Welch}}(f)\) is given as the mean of the per-segment periodograms.
\[P_{i}(f)=\frac{1}{MU}\left|\sum_{n=0}^{M-1}x_{i}(n)w(n)e^{-j2\pi fn}\right|^{2} \tag{1}\]
\[P_{\text{Welch}}\left(f\right)=\frac{1}{L}\sum_{i=0}^{L-1}P_{i}(f) \tag{2}\]
The power spectral density (PSD) shows how a signal's power is distributed in the frequency domain. Among PSD estimators, Welch's method and the multitaper approach have demonstrated the best results [21]. The input signal [22] \(x[n]\), \(n=0,1,2,\ldots,N-1\), is divided into a number of overlapping segments, each of length \(M\):
\[x_{i}(n)=x[i\times\frac{M}{2}+n] \tag{3}\]

where \(n=0,\ldots,M-1\) and \(i=0,1,2,\ldots,L-1\), so that consecutive segments overlap by 50%.
Each segment is multiplied by a smooth window \(w(n)\); in most cases the Hamming window is employed. The Hamming window formula for each segment is as follows:
\[w(n)=0.54-0.46\cos\left(\frac{2\pi n}{M}\right) \tag{4}\]
Here,
\[U=(1/M)\sum_{n=0}^{M-1}w^{2}(n) \tag{5}\]
denotes the mean power of the window w(n). So,
\[MU=\sum_{n=0}^{M-1}w^{2}(n) \tag{6}\]
denotes the energy of the window function w(n) with length M.
It is to be noted that \(L\) denotes the number of data segments.
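Putting Eqs. (1)-(6) together, a direct NumPy implementation of this Welch estimate might look as follows (a sketch for illustration, not a substitute for a library routine):

```python
import numpy as np

def welch_psd(x, M=256, fs=128):
    """Hamming-windowed, 50%-overlap averaged periodogram, per Eqs. (1)-(6)."""
    w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(M) / M)     # Eq. (4)
    MU = np.sum(w ** 2)                                        # Eq. (6)
    L = (len(x) - M) // (M // 2) + 1                           # number of segments
    periodograms = []
    for i in range(L):
        seg = x[i * (M // 2): i * (M // 2) + M]                # Eq. (3)
        X = np.fft.rfft(seg * w)
        periodograms.append(np.abs(X) ** 2 / MU)               # Eq. (1)
    freqs = np.fft.rfftfreq(M, d=1.0 / fs)
    return freqs, np.mean(periodograms, axis=0)                # Eq. (2)

freqs, P = welch_psd(np.random.randn(8064))
```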
For validation, accuracy is the most popular metric, but a model's performance cannot be judged by accuracy alone. We therefore also used precision, recall, and F1-score, and computed each metric as the mean over all folds of the cross-validation.
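A sketch of how these fold-averaged metrics can be obtained with scikit-learn (`X_feat` and `y` are placeholders for the feature matrix and labels built earlier):

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

X_feat = np.random.randn(1280, 28)            # placeholder features
y = np.random.randint(0, 2, size=1280)        # placeholder binary labels

# Mean of each metric across the folds, as described above.
scoring = ["accuracy", "precision_macro", "recall_macro", "f1_macro"]
cv_res = cross_validate(SVC(kernel="linear"), X_feat, y, cv=5, scoring=scoring)
for m in scoring:
    print(m, cv_res[f"test_{m}"].mean())
```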
## IV Results
In our research, we sought a relation among EEG channels, the time domain, and the frequency domain using Welch's periodogram with the help of band waves and the FFT; the different band waves are associated with different emotions.
Figure 1 shows the time domain of the EEG signals; a great deal of electrical activity is visible on the EEG channels. From the time domain, Fourier transformation yields the frequency-domain graphs along with the power spectral density across the channels (Fig. 2). In our study we used the fast Fourier transform, with sinusoids taken from 4 Hz to 45 Hz; by comparing these sinusoids against the time-domain signal, we obtain the PSD in the frequency domain. From the time-frequency domain, we can observe the brain's electrical activity over multiple time intervals, which shows the relation between the different frequencies, brain activity, and voltage.
For the first FFT analysis, we calculated the mean, standard deviation, minimum, first quartile, median, third quartile, and maximum values of the 1280 trials for the six sensor-based regions and the four band-power values. We used SVM and K-NN classifiers; the SVM classifier used a linear kernel. We also calculated the classification accuracy for valence and arousal.
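A sketch of this classification step with scikit-learn, assuming the per-trial statistical features and binary valence labels described above (arrays are random stand-ins):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X_feat = np.random.randn(1280, 28)        # placeholder per-trial features
y_val = np.random.randint(0, 2, 1280)     # placeholder binary valence labels

Xtr, Xte, ytr, yte = train_test_split(X_feat, y_val, test_size=0.25)
for clf in (SVC(kernel="linear"), KNeighborsClassifier(n_neighbors=5)):
    clf.fit(Xtr, ytr)
    print(type(clf).__name__, "valence accuracy:", clf.score(Xte, yte))
```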
In the first study, we attempted to observe how the electrical activity in the brain varies over time. To extract the EEG signals, the 32 sensor sites were separated into globally recognized zones, with electrodes at frontal, central, temporal, parietal, and occipital placements. Topographical maps are used to visualize the spatial distribution of activity; this visualization method allows us to examine how the data changes from one time point to another. While the subject in this study was watching a video, we analyzed the changes in electrical activity from 0.153 to 0.273 seconds. We can see the voltage of the electrical activity change across the various frequencies: band waves are determined by frequency range, and different band waves indicate different ranges of emotion. From this, it can be said that the subject may feel different emotions at different time points.
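One way to produce such topographical maps (an assumption, since the paper does not name its plotting tool) is MNE-Python's `plot_topomap`, given the DEAP electrode names on a standard 10-20 montage:

```python
import numpy as np
import mne

ch_names = ["Fp1", "AF3", "F3", "F7", "FC5", "FC1", "C3", "T7", "CP5", "CP1",
            "P3", "P7", "PO3", "O1", "Oz", "Pz", "Fp2", "AF4", "Fz", "F4",
            "F8", "FC6", "FC2", "Cz", "C4", "T8", "CP6", "CP2", "P4", "P8",
            "PO4", "O2"]
info = mne.create_info(ch_names, sfreq=128, ch_types="eeg")
info.set_montage("standard_1020")

band_power = np.random.rand(32)          # stand-in for one band's power values
mne.viz.plot_topomap(band_power, info)   # scalp distribution at one time point
```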
For the second study, during the FFT processing we employed metadata for the purpose of a meta-vector analysis. The raw data was split into 2-second windows, with a 0.125-second step between consecutive slices, and a 2-second FFT of each channel was carried out over the different frequencies in sequence. The 14 channels fitted to the Emotiv EPOC+ were carefully selected; the channel indices are [1, 2, 3, 4, 6, 11, 13, 17, 19, 20, 21, 25, 29, 31], and the six band edges are band = [4, 8, 12, 16, 25, 45]. Band power was averaged over the 2-second windows. The window size was 256 samples with a step size of 16, giving one update every 0.125 seconds, and the sampling rate was set to 128 Hz. The FFT was then performed on all of the subjects using these settings in order to obtain the required output.
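A sketch of this sliding-window meta-vector extraction under the settings above (window 256, step 16, 128 Hz); the random trial is a stand-in for one trial of the 14 selected channels:

```python
import numpy as np

fs, win, step = 128, 256, 16                 # 2 s window, 0.125 s hop
band = [4, 8, 12, 16, 25, 45]                # band edges
trial = np.random.randn(14, 8064)            # stand-in: 14 channels x 63 s

vectors = []
for start in range(0, trial.shape[1] - win + 1, step):
    seg = trial[:, start:start + win]
    spec = np.abs(np.fft.rfft(seg, axis=1)) ** 2
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    powers = [spec[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
              for lo, hi in zip(band[:-1], band[1:])]
    vectors.append(np.concatenate(powers))   # 14 channels x 5 bands = 70 values
meta = np.array(vectors)                     # one meta vector per 0.125 s update
```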
Fig. 1: Time Domain of the EEG Signals

Fig. 2: Power Spectral Density Across Channels

Fig. 3: Topographical Map for Theta band wave

Fig. 5: Topographical Map for Beta band wave
Neural networks and other forms of artificial intelligence require a starting collection of data, referred to as a training dataset, that serves as a foundation for subsequent application and use; it forms the basis of the program's developing information library and must be appropriately labeled before the model can interpret and learn from it. In our data, the lowest value is around 200 and the greatest value is above 2000, so plotting the raw values produces many widely scattered points that make analysis difficult. The objective in machine learning is to create a plot and then optimize it further to obtain a pattern, and significant differences between the plotted points prevent that optimization. To fix this issue, the values were reduced to a common range, commonly known as scaling. Scaling loses none of the data's information; instead, the data is optimized to the point where there is little difference between the plotted points. For this we used StandardScaler, which transforms the data into a distribution with a mean of zero and a standard deviation of one; with multivariate data this is done feature by feature (in other words, independently for each column), subtracting the mean from each value and dividing by the standard deviation.

We then divided the dataset into a training set and a testing set: training was carried out on 75% of the data (456,768 samples) and testing on the remaining 25% (152,256 samples).

The RNN was kept sequential. The first LSTM layer of the sequential model has 512 units, the second 256, the third and fourth 128 and 64, and the final LSTM layer 10 units. Since we perform classification with outputs of 0 or 1, a sigmoid activation was used at the output; the remaining activations are ReLU. The rectified linear activation function (ReLU) is a piecewise linear function that outputs the input directly if it is positive and zero otherwise. Batch normalization was used: it standardizes the inputs to a layer for each mini-batch, which stabilizes the learning process and significantly reduces the number of training epochs required for deep networks. By randomly dropping out nodes during training, a single model can simulate a huge variety of distinct network designs [2]; this technique, called dropout, is a computationally efficient and remarkably successful regularization method for reducing overfitting and improving generalization error in all kinds of deep neural networks. In our case, the dropout rates began at 30%, increased to 50%, then 30%, 30%, 30%, and finally 20%. We worked with three-dimensional inputs, but converted to a one-dimensional representation at the dense layer in order to make the prediction. A minimal sketch of this pipeline follows.
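The Keras sketch below covers feature-wise standardization, the 75/25 split, and the five-layer LSTM stack with batch normalization, the stated dropout schedule, and a sigmoid output; array sizes and names are illustrative placeholders, not the exact training set:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout, BatchNormalization

X = np.random.randn(1000, 70)                # placeholder meta vectors
y = np.random.randint(0, 2, size=1000)       # placeholder binary labels

X = StandardScaler().fit_transform(X)        # zero mean, unit std per feature
X = X.reshape(len(X), X.shape[1], 1)         # LSTMs expect 3-D input
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

model = Sequential([
    LSTM(512, return_sequences=True, input_shape=X.shape[1:]),
    Dropout(0.3), BatchNormalization(),
    LSTM(256, return_sequences=True), Dropout(0.5), BatchNormalization(),
    LSTM(128, return_sequences=True), Dropout(0.3), BatchNormalization(),
    LSTM(64, return_sequences=True), Dropout(0.3), BatchNormalization(),
    LSTM(10), Dropout(0.3), BatchNormalization(),
    Dropout(0.2),                            # final 20% from the stated schedule
    Dense(1, activation="sigmoid"),          # binary (0/1) classification
])
```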
RMSprop was used as the optimizer, with a learning rate of 0.001, a rho of 0.9, and an epsilon of 1e-08. RMSprop divides the gradient by the root of a moving (discounted) average of squared gradients; this implementation uses conventional momentum rather than Nesterov momentum, and the centered variant additionally estimates the variance from a moving average of the gradients. Accuracy increases very gradually here, and the learning rate plays a major part: a larger learning rate makes accuracy rise faster, but once the optimum is passed the process reverses and accuracy drops just as quickly, which is why the learning rate was kept small; removing one zero decreases the accuracy significantly.

As our loss function we used mean squared error (MSE), the most basic and most widely used loss function, typically taught in introductory machine learning courses. To calculate the MSE, take the difference between the model's predictions and the ground truth, square it, and average it across the whole dataset. Because the errors are squared, the MSE can never be negative, and it places greater emphasis on outlier predictions with large errors, which helps ensure that the trained model contains none. We tried our best to reduce the loss and increase the accuracy rate, and we checkpointed the model every 50 epochs. Over the first 50 epochs, the training loss decreased from 0.1588 to 0.06851 and the validation loss to 0.06005, while training accuracy rose from 9.61% to 45.784% and validation accuracy to 53.420%. Over the second 50 epochs, the training loss fell to 0.06283 and the validation loss to 0.05223, with training accuracy reaching 51.661% and validation accuracy 60.339%. Over the third 50 epochs, the training loss fell to 0.05992 and the validation loss to 0.04787, with training accuracy at 54.492% and validation accuracy at 64.413%. After 200 epochs the rate of change slowed considerably; after 1000 epochs we obtained a training accuracy of 69.21% and a validation accuracy of 78.28%.
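Continuing the sketch above, the model can be compiled with the stated RMSprop settings and MSE loss, and checkpointed every 50 epochs (note that Keras's `ModelCheckpoint` counts `save_freq` in batches, hence the multiplication; `X_train`, `y_train`, etc. come from the previous sketch):

```python
import numpy as np
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.optimizers import RMSprop

model.compile(optimizer=RMSprop(learning_rate=0.001, rho=0.9, epsilon=1e-08),
              loss="mse", metrics=["accuracy"])

# Save a checkpoint every 50 epochs (save_freq is counted in batches).
batch_size = 32
steps_per_epoch = int(np.ceil(len(X_train) / batch_size))
ckpt = ModelCheckpoint("model_{epoch:04d}.h5", save_freq=50 * steps_per_epoch)

history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
                    epochs=1000, batch_size=batch_size, callbacks=[ckpt])
```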
## V Conclusion
To summarize, in this research we describe the EEG-based emotion-recognition challenge, as well as existing and proposed solutions to this problem. Emotion detection through EEG waves is a relatively new and exciting area of study and analysis.
Fig. 6: Topographical Map for Gamma band wave
To identify and evaluate numerous emotional states from the EEG signals acquired from the DEAP dataset, we used SVM (support vector machine) and K-NN (k-nearest neighbor) classifiers alongside the proposed RNN-LSTM model. According to the findings, the suggested method is a very promising option for emotion recognition, owing to its remarkable ability to learn features from raw data in a short period of time. Compared with typical feature-extraction approaches, it produces a higher average accuracy over a larger number of subjects.